Merge branch '4.1-Stable' into Fix-vencoder-warning
Commit: 166b2beb20

README.md (42 lines changed)
@@ -80,7 +80,7 @@ Based on our testing, we have determined that the project runs stable on `Python
 - Place it under the `pretrain` directory

 Or download the following ContentVec, which is only 199MB in size but has the same effect:

-- contentvec :[hubert_base.pt](https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt)
+- ContentVec: [hubert_base.pt](https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt)
 - Change the file name to `checkpoint_best_legacy_500.pt` and place it in the `pretrain` directory

 ```shell
@@ -90,7 +90,7 @@ wget -P pretrain/ http://obs.cstcloud.cn/share/obs/sankagenkeshi/checkpoint_best
 ```

 ##### **2. If hubertsoft is used as the speech encoder**
-- soft vc hubert:[hubert-soft-0d54a1f4.pt](https://github.com/bshall/hubert/releases/download/v0.1/hubert-soft-0d54a1f4.pt)
+- soft vc hubert: [hubert-soft-0d54a1f4.pt](https://github.com/bshall/hubert/releases/download/v0.1/hubert-soft-0d54a1f4.pt)
 - Place it under the `pretrain` directory

 ##### **3. If whisper-ppg is used as the speech encoder**
@@ -155,7 +155,7 @@ If you are using the `NSF-HIFIGAN enhancer` or `shallow diffusion`, you will nee
 wget -P pretrain/ https://github.com/openvpi/vocoders/releases/download/nsf-hifigan-v1/nsf_hifigan_20221211.zip
 unzip -od pretrain/nsf_hifigan pretrain/nsf_hifigan_20221211.zip
 # Alternatively, you can manually download and place it in the pretrain/nsf_hifigan directory
-# URL:https://github.com/openvpi/vocoders/releases/tag/nsf-hifigan-v1
+# URL: https://github.com/openvpi/vocoders/releases/tag/nsf-hifigan-v1
 ```

 ## 📊 Dataset Preparation
@@ -247,11 +247,23 @@ After enabling loudness embedding, the trained model will match the loudness of

 * `keep_ckpts`: Keep only the last few models during training. Set to `0` to keep them all. Default is `3`.

-* `all_in_mem`, `cache_all_data`: Load all dataset to RAM. It can be enabled when the disk IO of some platforms is too low and the system memory is **much larger** than your dataset.
+* `all_in_mem`: Load all dataset to RAM. It can be enabled when the disk IO of some platforms is too low and the system memory is **much larger** than your dataset.

 * `batch_size`: The amount of data loaded to the GPU for a single training pass; adjust it to a size below the GPU memory capacity.

-* `vocoder_name` : Select a vocoder. The default is `nsf-hifigan`.
+* `vocoder_name`: Select a vocoder. The default is `nsf-hifigan`.

+##### diffusion.yaml
+
+* `cache_all_data`: Load all dataset to RAM. It can be enabled when the disk IO of some platforms is too low and the system memory is **much larger** than your dataset.
+
+* `duration`: The duration of the audio slices used during training, adjustable according to GPU memory size. **Note: this value must be less than the minimum duration of the audio in the training set!**
+
+* `batch_size`: The amount of data loaded to the GPU for a single training pass; adjust it to a size below the GPU memory capacity.
+
+* `timesteps`: The total number of steps in the diffusion model. Defaults to 1000.
+
+* `k_step_max`: Training can cover only `k_step_max` diffusion steps to save training time. Note that this value must be less than `timesteps`; `0` trains the entire diffusion model. **Note: if you do not train the entire diffusion model, you will not be able to use only_diffusion!**
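A quick way to catch a bad `k_step_max`/`timesteps` combination before launching training; a minimal sketch, assuming the generated `configs/diffusion.yaml` with the keys described above and PyYAML installed:

```python
# Sanity-check sketch for the k_step_max / timesteps rule described above.
import yaml

with open("configs/diffusion.yaml") as f:
    cfg = yaml.safe_load(f)

timesteps = cfg["model"]["timesteps"]    # total diffusion steps, default 1000
k_step_max = cfg["model"]["k_step_max"]  # 0 means train the full diffusion model

assert k_step_max == 0 or k_step_max < timesteps, \
    "k_step_max must be 0 (full model) or strictly less than timesteps"
```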

 ##### **List of Vocoders**
@@ -289,6 +301,12 @@ After completing the above steps, the dataset directory will contain the preproc

 ## 🏋️‍♀️ Training

+### Sovits Model
+
+```shell
+python train.py -c configs/config.json -m 44k
+```
+
 ### Diffusion Model (optional)

 If the shallow diffusion function is needed, the diffusion model needs to be trained. The diffusion model training method is as follows:
@@ -297,12 +315,6 @@ If the shallow diffusion function is needed, the diffusion model needs to be tra
 python train_diff.py -c configs/diffusion.yaml
 ```

-### Sovits Model
-
-```shell
-python train.py -c configs/config.json -m 44k
-```
-
 During training, the model files will be saved to `logs/44k`, and the diffusion model will be saved to `logs/44k/diffusion`

 ## 🤖 Inference
@@ -340,7 +352,7 @@ Shallow diffusion settings:
 - `-ks` | `--k_step`: The larger the number of k_steps, the closer the result is to that of the diffusion model. The default is 100.
 - `-od` | `--only_diffusion`: Whether to use Only diffusion mode, which does not load the sovits model and uses only diffusion model inference.
 - `-se` | `--second_encoding`: Applies an additional encoding to the original audio before shallow diffusion. This option can yield varying results, sometimes positive and sometimes negative.

 ### Attention

 If inferencing with the `whisper-ppg` speech encoder, you need to set `--clip` to 25 and `-lg` to 1. Otherwise it will fail to infer properly.
@@ -373,8 +385,8 @@ No changes are required in the existing steps. Simply train an additional cluste

 Introduction: As with the clustering scheme, timbre leakage can be reduced and enunciation is slightly better than with clustering, but it reduces the inference speed. By employing the fusion method, it becomes possible to linearly control the balance between feature retrieval and non-feature retrieval, allowing fine-tuning of the desired proportion.

 - Training process:
   First, it needs to be executed after generating hubert and f0:

 ```shell
 python train_index.py -c configs/config.json
@@ -382,7 +394,7 @@ python train_index.py -c configs/config.json

 The output of the model will be in `logs/44k/feature_and_index.pkl`

 - Inference process:
   - `--feature_retrieval` needs to be specified first; the clustering mode then automatically switches to feature-retrieval mode.
   - Specify `cluster_model_path` in `inference_main.py`.
   - Specify `cluster_infer_ratio` in `inference_main.py`, where `0` means not using feature retrieval at all, `1` means only using feature retrieval, and usually `0.5` is sufficient.
@@ -245,14 +245,28 @@ python preprocess_flist_config.py --speech_encoder vec768l12 --vol_aug

 #### At this point, some parameters can be modified in the generated config.json and diffusion.yaml

 ##### config.json

 * `keep_ckpts`: Keep only the last few models during training; `0` keeps all of them, and by default only the last `3` are kept.

-* `all_in_mem`, `cache_all_data`: Load the entire dataset into RAM. Can be enabled when the disk IO of some platforms is too low and the system memory is **much larger** than the dataset.
+* `all_in_mem`: Load the entire dataset into RAM. Can be enabled when the disk IO of some platforms is too low and the system memory is **much larger** than the dataset.

 * `batch_size`: The amount of data loaded to the GPU in a single training pass; adjust to a size below the GPU memory capacity.

 * `vocoder_name`: Select a vocoder. Defaults to `nsf-hifigan`.

+##### diffusion.yaml
+
+* `cache_all_data`: Load the entire dataset into RAM. Can be enabled when the disk IO of some platforms is too low and the system memory is **much larger** than the dataset.
+
+* `duration`: Audio slice duration during training, adjustable according to GPU memory size. **Note: this value must be less than the shortest audio duration in the training set!**
+
+* `batch_size`: The amount of data loaded to the GPU in a single training pass; adjust to a size below the GPU memory capacity.
+
+* `timesteps`: Total number of diffusion model steps. Defaults to 1000.
+
+* `k_step_max`: Training can cover only `k_step_max` diffusion steps to save training time. Note that this value must be less than `timesteps`; `0` trains the entire diffusion model. **Note: if the entire diffusion model is not trained, diffusion-only inference cannot be used!**

 ##### **List of Vocoders**

 ```
@@ -289,6 +303,12 @@ python preprocess_hubert_f0.py --f0_predictor dio --use_diff

 ## 🏋️‍♀️ Training

+### Main Model Training
+
+```shell
+python train.py -c configs/config.json -m 44k
+```
+
 ### Diffusion Model (optional)

 If the shallow diffusion function is needed, the diffusion model must be trained. The diffusion model training method is:
@@ -297,12 +317,6 @@ python preprocess_hubert_f0.py --f0_predictor dio --use_diff
 python train_diff.py -c configs/diffusion.yaml
 ```

-### Main Model Training
-
-```shell
-python train.py -c configs/config.json -m 44k
-```
-
 After training, the model files are saved under the `logs/44k` directory, and the diffusion model under `logs/44k/diffusion`

 ## 🤖 Inference
@@ -17,7 +17,9 @@ model:
   n_layers: 20
   n_chans: 512
   n_hidden: 256
   use_pitch_aug: true
+  timesteps : 1000
+  k_step_max: 0 # must <= timesteps, If it is 0, train all
   n_spk: 1 # max number of different speakers
   device: cuda
 vocoder:
@@ -25,7 +27,7 @@ vocoder:
   ckpt: 'pretrain/nsf_hifigan/model'
 infer:
   speedup: 10
-  method: 'dpm-solver' # 'pndm' or 'dpm-solver'
+  method: 'dpm-solver++' # 'pndm' or 'dpm-solver' or 'ddim' or 'unipc' or 'dpm-solver++'
 env:
   expdir: logs/44k/diffusion
   gpu_id: 0
@@ -67,6 +67,7 @@ class GaussianDiffusion(nn.Module):
                  max_beta=0.02,
                  spec_min=-12,
                  spec_max=2):

         super().__init__()
         self.denoise_fn = denoise_fn
         self.out_dims = out_dims
@@ -78,7 +79,7 @@ class GaussianDiffusion(nn.Module):

         timesteps, = betas.shape
         self.num_timesteps = int(timesteps)
-        self.k_step = k_step
+        self.k_step = k_step if k_step>0 and k_step<timesteps else timesteps

         self.noise_list = deque(maxlen=4)
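The new guard makes any out-of-range `k_step` (zero, negative, or not below `timesteps`) fall back to the full diffusion range. The same rule as a standalone sketch:

```python
def clamp_k_step(k_step: int, timesteps: int) -> int:
    # Mirrors the guard above: only a k_step strictly inside (0, timesteps)
    # is honored; anything else means "use the full diffusion range".
    return k_step if 0 < k_step < timesteps else timesteps

assert clamp_k_step(100, 1000) == 100
assert clamp_k_step(0, 1000) == 1000
assert clamp_k_step(1000, 1000) == 1000
```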
@@ -139,6 +140,18 @@ class GaussianDiffusion(nn.Module):
         model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
         return model_mean, posterior_variance, posterior_log_variance

+    @torch.no_grad()
+    def p_sample_ddim(self, x, t, interval, cond):
+        """
+        Use the DDIM method from
+        """
+        a_t = extract(self.alphas_cumprod, t, x.shape)
+        a_prev = extract(self.alphas_cumprod, torch.max(t - interval, torch.zeros_like(t)), x.shape)
+
+        noise_pred = self.denoise_fn(x, t, cond=cond)
+        x_prev = a_prev.sqrt() * (x / a_t.sqrt() + (((1 - a_prev) / a_prev).sqrt() - ((1 - a_t) / a_t).sqrt()) * noise_pred)
+        return x_prev
+
     @torch.no_grad()
     def p_sample(self, x, t, cond, clip_denoised=True, repeat_noise=False):
         b, *_, device = *x.shape, x.device
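The `p_sample_ddim` step added above is the deterministic DDIM update (η = 0), written with the cumulative products ᾱ (`a_t` and `a_prev` in the code, with Δ the sampling interval):

```latex
x_{t-\Delta} = \sqrt{\bar\alpha_{t-\Delta}}\left(\frac{x_t}{\sqrt{\bar\alpha_t}} + \left(\sqrt{\tfrac{1-\bar\alpha_{t-\Delta}}{\bar\alpha_{t-\Delta}}} - \sqrt{\tfrac{1-\bar\alpha_t}{\bar\alpha_t}}\right)\epsilon_\theta(x_t, t)\right)
```

which is exactly the `x_prev` expression in the diff.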
@@ -239,7 +252,7 @@ class GaussianDiffusion(nn.Module):
             x = self.q_sample(x_start=norm_spec, t=torch.tensor([t - 1], device=device).long())

             if method is not None and infer_speedup > 1:
-                if method == 'dpm-solver':
+                if method == 'dpm-solver' or method == 'dpm-solver++':
                     from .dpm_solver_pytorch import NoiseScheduleVP, model_wrapper, DPM_Solver
                     # 1. Define the noise schedule.
                     noise_schedule = NoiseScheduleVP(schedule='discrete', betas=self.betas[:t])
@@ -267,17 +280,20 @@ class GaussianDiffusion(nn.Module):
                     # (We recommend singlestep DPM-Solver for unconditional sampling)
                     # You can adjust the `steps` to balance the computation
                     # costs and the sample quality.
-                    dpm_solver = DPM_Solver(model_fn, noise_schedule)
+                    if method == 'dpm-solver':
+                        dpm_solver = DPM_Solver(model_fn, noise_schedule, algorithm_type="dpmsolver")
+                    elif method == 'dpm-solver++':
+                        dpm_solver = DPM_Solver(model_fn, noise_schedule, algorithm_type="dpmsolver++")

                     steps = t // infer_speedup
                     if use_tqdm:
                         self.bar = tqdm(desc="sample time step", total=steps)
                     x = dpm_solver.sample(
                         x,
                         steps=steps,
-                        order=3,
+                        order=2,
                         skip_type="time_uniform",
-                        method="singlestep",
+                        method="multistep",
                     )
                     if use_tqdm:
                         self.bar.close()
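After this change the `infer.method` string from `diffusion.yaml` selects the solver variant directly; a minimal sketch of the mapping (a hypothetical helper, method names taken from this diff):

```python
# Hypothetical helper mirroring the dispatch above.
ALGORITHM_BY_METHOD = {
    "dpm-solver": "dpmsolver",
    "dpm-solver++": "dpmsolver++",
}

def algorithm_type_for(method: str) -> str:
    # Raises KeyError for methods handled elsewhere ('pndm', 'ddim', 'unipc').
    return ALGORITHM_BY_METHOD[method]
```

The switch from `order=3`/`singlestep` to `order=2`/`multistep` matches the sampler settings commonly recommended for DPM-Solver++ on conditional models.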
@@ -298,6 +314,63 @@ class GaussianDiffusion(nn.Module):
                                 x, torch.full((b,), i, device=device, dtype=torch.long),
                                 infer_speedup, cond=cond
                             )
+                elif method == 'ddim':
+                    if use_tqdm:
+                        for i in tqdm(
+                                reversed(range(0, t, infer_speedup)), desc='sample time step',
+                                total=t // infer_speedup,
+                        ):
+                            x = self.p_sample_ddim(
+                                x, torch.full((b,), i, device=device, dtype=torch.long),
+                                infer_speedup, cond=cond
+                            )
+                    else:
+                        for i in reversed(range(0, t, infer_speedup)):
+                            x = self.p_sample_ddim(
+                                x, torch.full((b,), i, device=device, dtype=torch.long),
+                                infer_speedup, cond=cond
+                            )
+                elif method == 'unipc':
+                    from .uni_pc import NoiseScheduleVP, model_wrapper, UniPC
+                    # 1. Define the noise schedule.
+                    noise_schedule = NoiseScheduleVP(schedule='discrete', betas=self.betas[:t])
+
+                    # 2. Convert your discrete-time `model` to the continuous-time
+                    # noise prediction model. Here is an example for a diffusion model
+                    # `model` with the noise prediction type ("noise") .
+                    def my_wrapper(fn):
+                        def wrapped(x, t, **kwargs):
+                            ret = fn(x, t, **kwargs)
+                            if use_tqdm:
+                                self.bar.update(1)
+                            return ret
+
+                        return wrapped
+
+                    model_fn = model_wrapper(
+                        my_wrapper(self.denoise_fn),
+                        noise_schedule,
+                        model_type="noise",  # or "x_start" or "v" or "score"
+                        model_kwargs={"cond": cond}
+                    )
+
+                    # 3. Define uni_pc and sample by multistep UniPC.
+                    # You can adjust the `steps` to balance the computation
+                    # costs and the sample quality.
+                    uni_pc = UniPC(model_fn, noise_schedule, variant='bh2')
+
+                    steps = t // infer_speedup
+                    if use_tqdm:
+                        self.bar = tqdm(desc="sample time step", total=steps)
+                    x = uni_pc.sample(
+                        x,
+                        steps=steps,
+                        order=2,
+                        skip_type="time_uniform",
+                        method="multistep",
+                    )
+                    if use_tqdm:
+                        self.bar.close()
                 else:
                     raise NotImplementedError(method)
             else:
(File diff suppressed because it is too large)
@@ -125,12 +125,7 @@ class Saver(object):
         torch.save({
             'global_step': self.global_step,
             'model': model.state_dict()}, path_pt)

-        # to json
-        if to_json:
-            path_json = os.path.join(
-                self.expdir, name + '.json')
-            utils.to_json(path_params, path_json)

     def delete_model(self, name='model', postfix=''):
         # path
@@ -33,7 +33,9 @@ def load_model_vocoder(
                 128,
                 args.model.n_layers,
                 args.model.n_chans,
-                args.model.n_hidden)
+                args.model.n_hidden,
+                args.model.timesteps,
+                args.model.k_step_max)

     print(' [Loading] ' + model_path)
     ckpt = torch.load(model_path, map_location=torch.device(device))
@@ -52,8 +54,11 @@ class Unit2Mel(nn.Module):
                  out_dims=128,
                  n_layers=20,
                  n_chans=384,
-                 n_hidden=256):
+                 n_hidden=256,
+                 timesteps=1000,
+                 k_step_max=1000):
         super().__init__()

         self.unit_embed = nn.Linear(input_channel, n_hidden)
         self.f0_embed = nn.Linear(1, n_hidden)
         self.volume_embed = nn.Linear(1, n_hidden)
@@ -64,9 +69,13 @@ class Unit2Mel(nn.Module):
         self.n_spk = n_spk
         if n_spk is not None and n_spk > 1:
             self.spk_embed = nn.Embedding(n_spk, n_hidden)

+        self.timesteps = timesteps if timesteps is not None else 1000
+        self.k_step_max = k_step_max if k_step_max is not None and k_step_max > 0 and k_step_max < self.timesteps else self.timesteps
+
         # diffusion
-        self.decoder = GaussianDiffusion(out_dims, n_layers, n_chans, n_hidden)
+        self.decoder = GaussianDiffusion(out_dims, n_layers, n_chans, n_hidden, self.timesteps, self.k_step_max)
         self.hidden_size = n_hidden
         self.speaker_map = torch.zeros((self.n_spk, 1, 1, n_hidden))
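A model built with `k_step_max < timesteps` only learns the last `k_step_max` denoising steps, so it cannot start from pure noise; inference must begin from a ground-truth mel noised forward by k ≤ `k_step_max` steps, i.e. the standard forward-process sample

```latex
x_k = \sqrt{\bar\alpha_k}\,x_0 + \sqrt{1-\bar\alpha_k}\,\epsilon, \qquad \epsilon \sim \mathcal{N}(0, I)
```

which is what `q_sample` produces in the shallow-diffusion path.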
@@ -40,10 +40,12 @@ def test(args, model, vocoder, loader_test, saver):
                     data['f0'],
                     data['volume'],
                     data['spk_id'],
-                    gt_spec=None,
+                    gt_spec=None if model.k_step_max == model.timesteps else data['mel'],
                     infer=True,
                     infer_speedup=args.infer.speedup,
-                    method=args.infer.method)
+                    method=args.infer.method,
+                    k_step=model.k_step_max
+                )
                 signal = vocoder.infer(mel, data['f0'])
                 ed_time = time.time()
@@ -62,7 +64,8 @@ def test(args, model, vocoder, loader_test, saver):
                     data['volume'],
                     data['spk_id'],
                     gt_spec=data['mel'],
-                    infer=False)
+                    infer=False,
+                    k_step=model.k_step_max)
                 test_loss += loss.item()

                 # log mel
@@ -121,11 +124,11 @@ def train(args, initial_global_step, model, optimizer, scheduler, vocoder, loade
         # forward
         if dtype == torch.float32:
             loss = model(data['units'].float(), data['f0'], data['volume'], data['spk_id'],
-                         aug_shift=data['aug_shift'], gt_spec=data['mel'].float(), infer=False)
+                         aug_shift=data['aug_shift'], gt_spec=data['mel'].float(), infer=False, k_step=model.k_step_max)
         else:
             with autocast(device_type=args.device, dtype=dtype):
                 loss = model(data['units'], data['f0'], data['volume'], data['spk_id'],
-                             aug_shift=data['aug_shift'], gt_spec=data['mel'], infer=False)
+                             aug_shift=data['aug_shift'], gt_spec=data['mel'], infer=False, k_step=model.k_step_max)

         # handle nan loss
         if torch.isnan(loss):
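Both call sites encode the same rule: a fully trained model (`k_step_max == timesteps`) may sample from pure noise, while a shallow-only model must start from the ground-truth mel. As a standalone sketch:

```python
def pick_gt_spec(mel, k_step_max: int, timesteps: int):
    # Full diffusion model: gt_spec=None, i.e. sample from pure noise.
    # Shallow-only model: must condition on the ground-truth mel.
    return None if k_step_max == timesteps else mel
```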
@@ -0,0 +1,731 @@
import torch
import torch.nn.functional as F
import math


class NoiseScheduleVP:
    def __init__(
            self,
            schedule='discrete',
            betas=None,
            alphas_cumprod=None,
            continuous_beta_0=0.1,
            continuous_beta_1=20.,
            dtype=torch.float32,
    ):
        """Create a wrapper class for the forward SDE (VP type).

        ***
        Update: We support discrete-time diffusion models by implementing a piecewise linear interpolation for log_alpha_t.
                We recommend to use schedule='discrete' for the discrete-time diffusion models, especially for high-resolution images.
        ***

        The forward SDE ensures that the condition distribution q_{t|0}(x_t | x_0) = N ( alpha_t * x_0, sigma_t^2 * I ).
        We further define lambda_t = log(alpha_t) - log(sigma_t), which is the half-logSNR (described in the DPM-Solver paper).
        Therefore, we implement the functions for computing alpha_t, sigma_t and lambda_t. For t in [0, T], we have:

            log_alpha_t = self.marginal_log_mean_coeff(t)
            sigma_t = self.marginal_std(t)
            lambda_t = self.marginal_lambda(t)

        Moreover, as lambda(t) is an invertible function, we also support its inverse function:

            t = self.inverse_lambda(lambda_t)

        ===============================================================

        We support both discrete-time DPMs (trained on n = 0, 1, ..., N-1) and continuous-time DPMs (trained on t in [t_0, T]).

        1. For discrete-time DPMs:

            For discrete-time DPMs trained on n = 0, 1, ..., N-1, we convert the discrete steps to continuous time steps by:
                t_i = (i + 1) / N
            e.g. for N = 1000, we have t_0 = 1e-3 and T = t_{N-1} = 1.
            We solve the corresponding diffusion ODE from time T = 1 to time t_0 = 1e-3.

            Args:
                betas: A `torch.Tensor`. The beta array for the discrete-time DPM. (See the original DDPM paper for details)
                alphas_cumprod: A `torch.Tensor`. The cumprod alphas for the discrete-time DPM. (See the original DDPM paper for details)

            Note that we always have alphas_cumprod = cumprod(1 - betas). Therefore, we only need to set one of `betas` and `alphas_cumprod`.

            **Important**: Please pay special attention for the args for `alphas_cumprod`:
                The `alphas_cumprod` is the \hat{alpha_n} arrays in the notations of DDPM. Specifically, DDPMs assume that
                    q_{t_n | 0}(x_{t_n} | x_0) = N ( \sqrt{\hat{alpha_n}} * x_0, (1 - \hat{alpha_n}) * I ).
                Therefore, the notation \hat{alpha_n} is different from the notation alpha_t in DPM-Solver. In fact, we have
                    alpha_{t_n} = \sqrt{\hat{alpha_n}},
                and
                    log(alpha_{t_n}) = 0.5 * log(\hat{alpha_n}).

        2. For continuous-time DPMs:

            We support two types of VPSDEs: linear (DDPM) and cosine (improved-DDPM). The hyperparameters for the noise
            schedule are the default settings in DDPM and improved-DDPM:

            Args:
                beta_min: A `float` number. The smallest beta for the linear schedule.
                beta_max: A `float` number. The largest beta for the linear schedule.
                cosine_s: A `float` number. The hyperparameter in the cosine schedule.
                cosine_beta_max: A `float` number. The hyperparameter in the cosine schedule.
                T: A `float` number. The ending time of the forward process.

        ===============================================================

        Args:
            schedule: A `str`. The noise schedule of the forward SDE. 'discrete' for discrete-time DPMs,
                    'linear' or 'cosine' for continuous-time DPMs.
        Returns:
            A wrapper object of the forward SDE (VP type).

        ===============================================================

        Example:

        # For discrete-time DPMs, given betas (the beta array for n = 0, 1, ..., N - 1):
        >>> ns = NoiseScheduleVP('discrete', betas=betas)

        # For discrete-time DPMs, given alphas_cumprod (the \hat{alpha_n} array for n = 0, 1, ..., N - 1):
        >>> ns = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod)

        # For continuous-time DPMs (VPSDE), linear schedule:
        >>> ns = NoiseScheduleVP('linear', continuous_beta_0=0.1, continuous_beta_1=20.)
        """

        if schedule not in ['discrete', 'linear', 'cosine']:
            raise ValueError("Unsupported noise schedule {}. The schedule needs to be 'discrete' or 'linear' or 'cosine'".format(schedule))

        self.schedule = schedule
        if schedule == 'discrete':
            if betas is not None:
                log_alphas = 0.5 * torch.log(1 - betas).cumsum(dim=0)
            else:
                assert alphas_cumprod is not None
                log_alphas = 0.5 * torch.log(alphas_cumprod)
            self.total_N = len(log_alphas)
            self.T = 1.
            self.t_array = torch.linspace(0., 1., self.total_N + 1)[1:].reshape((1, -1)).to(dtype=dtype)
            self.log_alpha_array = log_alphas.reshape((1, -1,)).to(dtype=dtype)
        else:
            self.total_N = 1000
            self.beta_0 = continuous_beta_0
            self.beta_1 = continuous_beta_1
            self.cosine_s = 0.008
            self.cosine_beta_max = 999.
            self.cosine_t_max = math.atan(self.cosine_beta_max * (1. + self.cosine_s) / math.pi) * 2. * (1. + self.cosine_s) / math.pi - self.cosine_s
            self.cosine_log_alpha_0 = math.log(math.cos(self.cosine_s / (1. + self.cosine_s) * math.pi / 2.))
            self.schedule = schedule
            if schedule == 'cosine':
                # For the cosine schedule, T = 1 will have numerical issues. So we manually set the ending time T.
                # Note that T = 0.9946 may be not the optimal setting. However, we find it works well.
                self.T = 0.9946
            else:
                self.T = 1.

    def marginal_log_mean_coeff(self, t):
        """
        Compute log(alpha_t) of a given continuous-time label t in [0, T].
        """
        if self.schedule == 'discrete':
            return interpolate_fn(t.reshape((-1, 1)), self.t_array.to(t.device), self.log_alpha_array.to(t.device)).reshape((-1))
        elif self.schedule == 'linear':
            return -0.25 * t ** 2 * (self.beta_1 - self.beta_0) - 0.5 * t * self.beta_0
        elif self.schedule == 'cosine':
            log_alpha_fn = lambda s: torch.log(torch.cos((s + self.cosine_s) / (1. + self.cosine_s) * math.pi / 2.))
            log_alpha_t = log_alpha_fn(t) - self.cosine_log_alpha_0
            return log_alpha_t

    def marginal_alpha(self, t):
        """
        Compute alpha_t of a given continuous-time label t in [0, T].
        """
        return torch.exp(self.marginal_log_mean_coeff(t))

    def marginal_std(self, t):
        """
        Compute sigma_t of a given continuous-time label t in [0, T].
        """
        return torch.sqrt(1. - torch.exp(2. * self.marginal_log_mean_coeff(t)))

    def marginal_lambda(self, t):
        """
        Compute lambda_t = log(alpha_t) - log(sigma_t) of a given continuous-time label t in [0, T].
        """
        log_mean_coeff = self.marginal_log_mean_coeff(t)
        log_std = 0.5 * torch.log(1. - torch.exp(2. * log_mean_coeff))
        return log_mean_coeff - log_std

    def inverse_lambda(self, lamb):
        """
        Compute the continuous-time label t in [0, T] of a given half-logSNR lambda_t.
        """
        if self.schedule == 'linear':
            tmp = 2. * (self.beta_1 - self.beta_0) * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb))
            Delta = self.beta_0 ** 2 + tmp
            return tmp / (torch.sqrt(Delta) + self.beta_0) / (self.beta_1 - self.beta_0)
        elif self.schedule == 'discrete':
            log_alpha = -0.5 * torch.logaddexp(torch.zeros((1,)).to(lamb.device), -2. * lamb)
            t = interpolate_fn(log_alpha.reshape((-1, 1)), torch.flip(self.log_alpha_array.to(lamb.device), [1]), torch.flip(self.t_array.to(lamb.device), [1]))
            return t.reshape((-1,))
        else:
            log_alpha = -0.5 * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb))
            t_fn = lambda log_alpha_t: torch.arccos(torch.exp(log_alpha_t + self.cosine_log_alpha_0)) * 2. * (1. + self.cosine_s) / math.pi - self.cosine_s
            t = t_fn(log_alpha)
            return t
def model_wrapper(
        model,
        noise_schedule,
        model_type="noise",
        model_kwargs={},
        guidance_type="uncond",
        condition=None,
        unconditional_condition=None,
        guidance_scale=1.,
        classifier_fn=None,
        classifier_kwargs={},
):
    """Create a wrapper function for the noise prediction model.
    """

    def get_model_input_time(t_continuous):
        """
        Convert the continuous-time `t_continuous` (in [epsilon, T]) to the model input time.
        For discrete-time DPMs, we convert `t_continuous` in [1 / N, 1] to `t_input` in [0, 1000 * (N - 1) / N].
        For continuous-time DPMs, we just use `t_continuous`.
        """
        if noise_schedule.schedule == 'discrete':
            return (t_continuous - 1. / noise_schedule.total_N) * noise_schedule.total_N
        else:
            return t_continuous

    def noise_pred_fn(x, t_continuous, cond=None):
        t_input = get_model_input_time(t_continuous)
        if cond is None:
            output = model(x, t_input, **model_kwargs)
        else:
            output = model(x, t_input, cond, **model_kwargs)
        if model_type == "noise":
            return output
        elif model_type == "x_start":
            alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous)
            return (x - alpha_t * output) / sigma_t
        elif model_type == "v":
            alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous)
            return alpha_t * output + sigma_t * x
        elif model_type == "score":
            sigma_t = noise_schedule.marginal_std(t_continuous)
            return -sigma_t * output

    def cond_grad_fn(x, t_input):
        """
        Compute the gradient of the classifier, i.e. nabla_{x} log p_t(cond | x_t).
        """
        with torch.enable_grad():
            x_in = x.detach().requires_grad_(True)
            log_prob = classifier_fn(x_in, t_input, condition, **classifier_kwargs)
            return torch.autograd.grad(log_prob.sum(), x_in)[0]

    def model_fn(x, t_continuous):
        """
        The noise prediction model function that is used for DPM-Solver.
        """
        if guidance_type == "uncond":
            return noise_pred_fn(x, t_continuous)
        elif guidance_type == "classifier":
            assert classifier_fn is not None
            t_input = get_model_input_time(t_continuous)
            cond_grad = cond_grad_fn(x, t_input)
            sigma_t = noise_schedule.marginal_std(t_continuous)
            noise = noise_pred_fn(x, t_continuous)
            return noise - guidance_scale * sigma_t * cond_grad
        elif guidance_type == "classifier-free":
            if guidance_scale == 1. or unconditional_condition is None:
                return noise_pred_fn(x, t_continuous, cond=condition)
            else:
                x_in = torch.cat([x] * 2)
                t_in = torch.cat([t_continuous] * 2)
                c_in = torch.cat([unconditional_condition, condition])
                noise_uncond, noise = noise_pred_fn(x_in, t_in, cond=c_in).chunk(2)
                return noise_uncond + guidance_scale * (noise - noise_uncond)

    assert model_type in ["noise", "x_start", "v"]
    assert guidance_type in ["uncond", "classifier", "classifier-free"]
    return model_fn
class UniPC:
    def __init__(
            self,
            model_fn,
            noise_schedule,
            algorithm_type="data_prediction",
            correcting_x0_fn=None,
            correcting_xt_fn=None,
            thresholding_max_val=1.,
            dynamic_thresholding_ratio=0.995,
            variant='bh1'
    ):
        """Construct a UniPC.

        We support both data_prediction and noise_prediction.
        """
        self.model = lambda x, t: model_fn(x, t.expand((x.shape[0])))
        self.noise_schedule = noise_schedule
        assert algorithm_type in ["data_prediction", "noise_prediction"]

        if correcting_x0_fn == "dynamic_thresholding":
            self.correcting_x0_fn = self.dynamic_thresholding_fn
        else:
            self.correcting_x0_fn = correcting_x0_fn

        self.correcting_xt_fn = correcting_xt_fn
        self.dynamic_thresholding_ratio = dynamic_thresholding_ratio
        self.thresholding_max_val = thresholding_max_val

        self.variant = variant
        self.predict_x0 = algorithm_type == "data_prediction"

    def dynamic_thresholding_fn(self, x0, t=None):
        """
        The dynamic thresholding method.
        """
        dims = x0.dim()
        p = self.dynamic_thresholding_ratio
        s = torch.quantile(torch.abs(x0).reshape((x0.shape[0], -1)), p, dim=1)
        s = expand_dims(torch.maximum(s, self.thresholding_max_val * torch.ones_like(s).to(s.device)), dims)
        x0 = torch.clamp(x0, -s, s) / s
        return x0

    def noise_prediction_fn(self, x, t):
        """
        Return the noise prediction model.
        """
        return self.model(x, t)

    def data_prediction_fn(self, x, t):
        """
        Return the data prediction model (with corrector).
        """
        noise = self.noise_prediction_fn(x, t)
        alpha_t, sigma_t = self.noise_schedule.marginal_alpha(t), self.noise_schedule.marginal_std(t)
        x0 = (x - sigma_t * noise) / alpha_t
        if self.correcting_x0_fn is not None:
            x0 = self.correcting_x0_fn(x0)
        return x0

    def model_fn(self, x, t):
        """
        Convert the model to the noise prediction model or the data prediction model.
        """
        if self.predict_x0:
            return self.data_prediction_fn(x, t)
        else:
            return self.noise_prediction_fn(x, t)

    def get_time_steps(self, skip_type, t_T, t_0, N, device):
        """Compute the intermediate time steps for sampling.
        """
        if skip_type == 'logSNR':
            lambda_T = self.noise_schedule.marginal_lambda(torch.tensor(t_T).to(device))
            lambda_0 = self.noise_schedule.marginal_lambda(torch.tensor(t_0).to(device))
            logSNR_steps = torch.linspace(lambda_T.cpu().item(), lambda_0.cpu().item(), N + 1).to(device)
            return self.noise_schedule.inverse_lambda(logSNR_steps)
        elif skip_type == 'time_uniform':
            return torch.linspace(t_T, t_0, N + 1).to(device)
        elif skip_type == 'time_quadratic':
            t_order = 2
            t = torch.linspace(t_T ** (1. / t_order), t_0 ** (1. / t_order), N + 1).pow(t_order).to(device)
            return t
        else:
            raise ValueError("Unsupported skip_type {}, need to be 'logSNR' or 'time_uniform' or 'time_quadratic'".format(skip_type))

    def get_orders_and_timesteps_for_singlestep_solver(self, steps, order, skip_type, t_T, t_0, device):
        """
        Get the order of each step for sampling by the singlestep DPM-Solver.
        """
        if order == 3:
            K = steps // 3 + 1
            if steps % 3 == 0:
                orders = [3, ] * (K - 2) + [2, 1]
            elif steps % 3 == 1:
                orders = [3, ] * (K - 1) + [1]
            else:
                orders = [3, ] * (K - 1) + [2]
        elif order == 2:
            if steps % 2 == 0:
                K = steps // 2
                orders = [2, ] * K
            else:
                K = steps // 2 + 1
                orders = [2, ] * (K - 1) + [1]
        elif order == 1:
            K = steps
            orders = [1, ] * steps
        else:
            raise ValueError("'order' must be '1' or '2' or '3'.")
        if skip_type == 'logSNR':
            # To reproduce the results in DPM-Solver paper
            timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, K, device)
        else:
            timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, steps, device)[torch.cumsum(torch.tensor([0, ] + orders), 0).to(device)]
        return timesteps_outer, orders

    def denoise_to_zero_fn(self, x, s):
        """
        Denoise at the final step, which is equivalent to solve the ODE from lambda_s to infty by first-order discretization.
        """
        return self.data_prediction_fn(x, s)
    def multistep_uni_pc_update(self, x, model_prev_list, t_prev_list, t, order, **kwargs):
        if len(t.shape) == 0:
            t = t.view(-1)
        if 'bh' in self.variant:
            return self.multistep_uni_pc_bh_update(x, model_prev_list, t_prev_list, t, order, **kwargs)
        else:
            assert self.variant == 'vary_coeff'
            return self.multistep_uni_pc_vary_update(x, model_prev_list, t_prev_list, t, order, **kwargs)

    def multistep_uni_pc_vary_update(self, x, model_prev_list, t_prev_list, t, order, use_corrector=True):
        # print(f'using unified predictor-corrector with order {order} (solver type: vary coeff)')
        ns = self.noise_schedule
        assert order <= len(model_prev_list)

        # first compute rks
        t_prev_0 = t_prev_list[-1]
        lambda_prev_0 = ns.marginal_lambda(t_prev_0)
        lambda_t = ns.marginal_lambda(t)
        model_prev_0 = model_prev_list[-1]
        sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t)
        log_alpha_t = ns.marginal_log_mean_coeff(t)
        alpha_t = torch.exp(log_alpha_t)

        h = lambda_t - lambda_prev_0

        rks = []
        D1s = []
        for i in range(1, order):
            t_prev_i = t_prev_list[-(i + 1)]
            model_prev_i = model_prev_list[-(i + 1)]
            lambda_prev_i = ns.marginal_lambda(t_prev_i)
            rk = (lambda_prev_i - lambda_prev_0) / h
            rks.append(rk)
            D1s.append((model_prev_i - model_prev_0) / rk)

        rks.append(1.)
        rks = torch.tensor(rks, device=x.device)

        K = len(rks)
        # build C matrix
        C = []

        col = torch.ones_like(rks)
        for k in range(1, K + 1):
            C.append(col)
            col = col * rks / (k + 1)
        C = torch.stack(C, dim=1)

        if len(D1s) > 0:
            D1s = torch.stack(D1s, dim=1)  # (B, K)
            C_inv_p = torch.linalg.inv(C[:-1, :-1])
            A_p = C_inv_p

        if use_corrector:
            # print('using corrector')
            C_inv = torch.linalg.inv(C)
            A_c = C_inv

        hh = -h if self.predict_x0 else h
        h_phi_1 = torch.expm1(hh)
        h_phi_ks = []
        factorial_k = 1
        h_phi_k = h_phi_1
        for k in range(1, K + 2):
            h_phi_ks.append(h_phi_k)
            h_phi_k = h_phi_k / hh - 1 / factorial_k
            factorial_k *= (k + 1)

        model_t = None
        if self.predict_x0:
            x_t_ = (
                sigma_t / sigma_prev_0 * x
                - alpha_t * h_phi_1 * model_prev_0
            )
            # now predictor
            x_t = x_t_
            if len(D1s) > 0:
                # compute the residuals for predictor
                for k in range(K - 1):
                    x_t = x_t - alpha_t * h_phi_ks[k + 1] * torch.einsum('bkchw,k->bchw', D1s, A_p[k])
            # now corrector
            if use_corrector:
                model_t = self.model_fn(x_t, t)
                D1_t = (model_t - model_prev_0)
                x_t = x_t_
                k = 0
                for k in range(K - 1):
                    x_t = x_t - alpha_t * h_phi_ks[k + 1] * torch.einsum('bkchw,k->bchw', D1s, A_c[k][:-1])
                x_t = x_t - alpha_t * h_phi_ks[K] * (D1_t * A_c[k][-1])
        else:
            log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t)
            x_t_ = (
                (torch.exp(log_alpha_t - log_alpha_prev_0)) * x
                - (sigma_t * h_phi_1) * model_prev_0
            )
            # now predictor
            x_t = x_t_
            if len(D1s) > 0:
                # compute the residuals for predictor
                for k in range(K - 1):
                    x_t = x_t - sigma_t * h_phi_ks[k + 1] * torch.einsum('bkchw,k->bchw', D1s, A_p[k])
            # now corrector
            if use_corrector:
                model_t = self.model_fn(x_t, t)
                D1_t = (model_t - model_prev_0)
                x_t = x_t_
                k = 0
                for k in range(K - 1):
                    x_t = x_t - sigma_t * h_phi_ks[k + 1] * torch.einsum('bkchw,k->bchw', D1s, A_c[k][:-1])
                x_t = x_t - sigma_t * h_phi_ks[K] * (D1_t * A_c[k][-1])
        return x_t, model_t
    def multistep_uni_pc_bh_update(self, x, model_prev_list, t_prev_list, t, order, x_t=None, use_corrector=True):
        # print(f'using unified predictor-corrector with order {order} (solver type: B(h))')
        ns = self.noise_schedule
        assert order <= len(model_prev_list)

        # first compute rks
        t_prev_0 = t_prev_list[-1]
        lambda_prev_0 = ns.marginal_lambda(t_prev_0)
        lambda_t = ns.marginal_lambda(t)
        model_prev_0 = model_prev_list[-1]
        sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t)
        log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t)
        alpha_t = torch.exp(log_alpha_t)

        h = lambda_t - lambda_prev_0

        rks = []
        D1s = []
        for i in range(1, order):
            t_prev_i = t_prev_list[-(i + 1)]
            model_prev_i = model_prev_list[-(i + 1)]
            lambda_prev_i = ns.marginal_lambda(t_prev_i)
            rk = (lambda_prev_i - lambda_prev_0) / h
            rks.append(rk)
            D1s.append((model_prev_i - model_prev_0) / rk)

        rks.append(1.)
        rks = torch.tensor(rks, device=x.device)

        R = []
        b = []

        hh = -h if self.predict_x0 else h
        h_phi_1 = torch.expm1(hh)  # h\phi_1(h) = e^h - 1
        h_phi_k = h_phi_1 / hh - 1

        factorial_i = 1

        if self.variant == 'bh1':
            B_h = hh
        elif self.variant == 'bh2':
            B_h = torch.expm1(hh)
        else:
            raise NotImplementedError()

        for i in range(1, order + 1):
            R.append(torch.pow(rks, i - 1))
            b.append(h_phi_k * factorial_i / B_h)
            factorial_i *= (i + 1)
            h_phi_k = h_phi_k / hh - 1 / factorial_i

        R = torch.stack(R)
        b = torch.cat(b)

        # now predictor
        use_predictor = len(D1s) > 0 and x_t is None
        if len(D1s) > 0:
            D1s = torch.stack(D1s, dim=1)  # (B, K)
            if x_t is None:
                # for order 2, we use a simplified version
                if order == 2:
                    rhos_p = torch.tensor([0.5], device=b.device)
                else:
                    rhos_p = torch.linalg.solve(R[:-1, :-1], b[:-1])
        else:
            D1s = None

        if use_corrector:
            # print('using corrector')
            # for order 1, we use a simplified version
            if order == 1:
                rhos_c = torch.tensor([0.5], device=b.device)
            else:
                rhos_c = torch.linalg.solve(R, b)

        model_t = None
        if self.predict_x0:
            x_t_ = (
                sigma_t / sigma_prev_0 * x
                - alpha_t * h_phi_1 * model_prev_0
            )

            if x_t is None:
                if use_predictor:
                    pred_res = torch.einsum('k,bkchw->bchw', rhos_p, D1s)
                else:
                    pred_res = 0
                x_t = x_t_ - alpha_t * B_h * pred_res

            if use_corrector:
                model_t = self.model_fn(x_t, t)
                if D1s is not None:
                    corr_res = torch.einsum('k,bkchw->bchw', rhos_c[:-1], D1s)
                else:
                    corr_res = 0
                D1_t = (model_t - model_prev_0)
                x_t = x_t_ - alpha_t * B_h * (corr_res + rhos_c[-1] * D1_t)
        else:
            x_t_ = (
                torch.exp(log_alpha_t - log_alpha_prev_0) * x
                - sigma_t * h_phi_1 * model_prev_0
            )
            if x_t is None:
                if use_predictor:
                    pred_res = torch.einsum('k,bkchw->bchw', rhos_p, D1s)
                else:
                    pred_res = 0
                x_t = x_t_ - sigma_t * B_h * pred_res

            if use_corrector:
                model_t = self.model_fn(x_t, t)
                if D1s is not None:
                    corr_res = torch.einsum('k,bkchw->bchw', rhos_c[:-1], D1s)
                else:
                    corr_res = 0
                D1_t = (model_t - model_prev_0)
                x_t = x_t_ - sigma_t * B_h * (corr_res + rhos_c[-1] * D1_t)
        return x_t, model_t
    def sample(self, x, steps=20, t_start=None, t_end=None, order=2, skip_type='time_uniform',
               method='multistep', lower_order_final=True, denoise_to_zero=False, atol=0.0078, rtol=0.05, return_intermediate=False,
               ):
        """
        Compute the sample at time `t_end` by UniPC, given the initial `x` at time `t_start`.
        """
        t_0 = 1. / self.noise_schedule.total_N if t_end is None else t_end
        t_T = self.noise_schedule.T if t_start is None else t_start
        assert t_0 > 0 and t_T > 0, "Time range needs to be greater than 0. For discrete-time DPMs, it needs to be in [1 / N, 1], where N is the length of betas array"
        if return_intermediate:
            assert method in ['multistep', 'singlestep', 'singlestep_fixed'], "Cannot use adaptive solver when saving intermediate values"
        if self.correcting_xt_fn is not None:
            assert method in ['multistep', 'singlestep', 'singlestep_fixed'], "Cannot use adaptive solver when correcting_xt_fn is not None"
        device = x.device
        intermediates = []
        with torch.no_grad():
            if method == 'multistep':
                assert steps >= order
                timesteps = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=steps, device=device)
                assert timesteps.shape[0] - 1 == steps
                # Init the initial values.
                step = 0
                t = timesteps[step]
                t_prev_list = [t]
                model_prev_list = [self.model_fn(x, t)]
                if self.correcting_xt_fn is not None:
                    x = self.correcting_xt_fn(x, t, step)
                if return_intermediate:
                    intermediates.append(x)

                # Init the first `order` values by lower order multistep UniPC.
                for step in range(1, order):
                    t = timesteps[step]
                    x, model_x = self.multistep_uni_pc_update(x, model_prev_list, t_prev_list, t, step, use_corrector=True)
                    if model_x is None:
                        model_x = self.model_fn(x, t)
                    if self.correcting_xt_fn is not None:
                        x = self.correcting_xt_fn(x, t, step)
                    if return_intermediate:
                        intermediates.append(x)
                    t_prev_list.append(t)
                    model_prev_list.append(model_x)

                # Compute the remaining values by `order`-th order multistep DPM-Solver.
                for step in range(order, steps + 1):
                    t = timesteps[step]
                    if lower_order_final:
                        step_order = min(order, steps + 1 - step)
                    else:
                        step_order = order
                    if step == steps:
                        # print('do not run corrector at the last step')
                        use_corrector = False
                    else:
                        use_corrector = True
                    x, model_x = self.multistep_uni_pc_update(x, model_prev_list, t_prev_list, t, step_order, use_corrector=use_corrector)
                    if self.correcting_xt_fn is not None:
                        x = self.correcting_xt_fn(x, t, step)
                    if return_intermediate:
                        intermediates.append(x)
                    for i in range(order - 1):
                        t_prev_list[i] = t_prev_list[i + 1]
                        model_prev_list[i] = model_prev_list[i + 1]
                    t_prev_list[-1] = t
                    # We do not need to evaluate the final model value.
                    if step < steps:
                        if model_x is None:
                            model_x = self.model_fn(x, t)
                        model_prev_list[-1] = model_x
            else:
                raise ValueError("Got wrong method {}".format(method))

            if denoise_to_zero:
                t = torch.ones((1,)).to(device) * t_0
                x = self.denoise_to_zero_fn(x, t)
                if self.correcting_xt_fn is not None:
                    x = self.correcting_xt_fn(x, t, step + 1)
                if return_intermediate:
                    intermediates.append(x)
        if return_intermediate:
            return x, intermediates
        else:
            return x
#############################################################
# other utility functions
#############################################################


def interpolate_fn(x, xp, yp):
    """
    A piecewise linear function y = f(x), using xp and yp as keypoints.
    We implement f(x) in a differentiable way (i.e. applicable for autograd).
    The function f(x) is well-defined for all x-axis. (For x beyond the bounds of xp, we use the outmost points of xp to define the linear function.)

    Args:
        x: PyTorch tensor with shape [N, C], where N is the batch size, C is the number of channels (we use C = 1 for DPM-Solver).
        xp: PyTorch tensor with shape [C, K], where K is the number of keypoints.
        yp: PyTorch tensor with shape [C, K].
    Returns:
        The function values f(x), with shape [N, C].
    """
    N, K = x.shape[0], xp.shape[1]
    all_x = torch.cat([x.unsqueeze(2), xp.unsqueeze(0).repeat((N, 1, 1))], dim=2)
    sorted_all_x, x_indices = torch.sort(all_x, dim=2)
    x_idx = torch.argmin(x_indices, dim=2)
    cand_start_idx = x_idx - 1
    start_idx = torch.where(
        torch.eq(x_idx, 0),
        torch.tensor(1, device=x.device),
        torch.where(
            torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx,
        ),
    )
    end_idx = torch.where(torch.eq(start_idx, cand_start_idx), start_idx + 2, start_idx + 1)
    start_x = torch.gather(sorted_all_x, dim=2, index=start_idx.unsqueeze(2)).squeeze(2)
    end_x = torch.gather(sorted_all_x, dim=2, index=end_idx.unsqueeze(2)).squeeze(2)
    start_idx2 = torch.where(
        torch.eq(x_idx, 0),
        torch.tensor(0, device=x.device),
        torch.where(
            torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx,
        ),
    )
    y_positions_expanded = yp.unsqueeze(0).expand(N, -1, -1)
    start_y = torch.gather(y_positions_expanded, dim=2, index=start_idx2.unsqueeze(2)).squeeze(2)
    end_y = torch.gather(y_positions_expanded, dim=2, index=(start_idx2 + 1).unsqueeze(2)).squeeze(2)
    cand = start_y + (x - start_x) * (end_y - start_y) / (end_x - start_x)
    return cand


def expand_dims(v, dims):
    """
    Expand the tensor `v` to the dim `dims`.

    Args:
        `v`: a PyTorch tensor with shape [N].
        `dim`: a `int`.
    Returns:
        a PyTorch tensor with shape [N, 1, 1, ..., 1] and the total dimension is `dims`.
    """
    return v[(...,) + (None,) * (dims - 1)]
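For reference, `interpolate_fn` above is an autograd-friendly piecewise-linear table lookup; a small usage sketch on toy keypoints (shapes as documented: `x` is `[N, C]`, `xp`/`yp` are `[C, K]`):

```python
import torch

# One channel (C = 1), three keypoints (K = 3) describing y = 2x on [0, 1].
xp = torch.tensor([[0.0, 0.5, 1.0]])
yp = torch.tensor([[0.0, 1.0, 2.0]])
x = torch.tensor([[0.25], [0.75]])  # batch of N = 2 query points

y = interpolate_fn(x, xp, yp)       # tensor([[0.5000], [1.5000]])
```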
@@ -39,13 +39,17 @@ def load_model_vocoder(
                 vocoder.dimension,
                 args.model.n_layers,
                 args.model.n_chans,
-                args.model.n_hidden)
+                args.model.n_hidden,
+                args.model.timesteps,
+                args.model.k_step_max
+                )

     print(' [Loading] ' + model_path)
     ckpt = torch.load(model_path, map_location=torch.device(device))
     model.to(device)
     model.load_state_dict(ckpt['model'])
     model.eval()
+    print(f'Loaded diffusion model, sampler is {ckpt["infer"]["methold"]}, speedup: {ckpt["infer"]["speedup"]} ')
     return model, vocoder, args
@@ -58,7 +62,10 @@ class Unit2Mel(nn.Module):
                  out_dims=128,
                  n_layers=20,
                  n_chans=384,
-                 n_hidden=256):
+                 n_hidden=256,
+                 timesteps=1000,
+                 k_step_max=1000
+                 ):
         super().__init__()
         self.unit_embed = nn.Linear(input_channel, n_hidden)
         self.f0_embed = nn.Linear(1, n_hidden)
@@ -71,9 +78,12 @@ class Unit2Mel(nn.Module):
         if n_spk is not None and n_spk > 1:
             self.spk_embed = nn.Embedding(n_spk, n_hidden)

+        self.timesteps = timesteps if timesteps is not None else 1000
+        self.k_step_max = k_step_max if k_step_max is not None and k_step_max > 0 and k_step_max < self.timesteps else self.timesteps
+
         self.n_hidden = n_hidden
         # diffusion
-        self.decoder = GaussianDiffusion(WaveNet(out_dims, n_layers, n_chans, n_hidden), out_dims=out_dims)
+        self.decoder = GaussianDiffusion(WaveNet(out_dims, n_layers, n_chans, n_hidden), timesteps=self.timesteps, k_step=self.k_step_max, out_dims=out_dims)
         self.input_channel = input_channel

     def init_spkembed(self, units, f0, volume, spk_id=None, spk_mix_dict=None, aug_shift=None,
@@ -124,6 +134,12 @@ class Unit2Mel(nn.Module):
            dict of B x n_frames x feat
        '''

+        if not self.training and gt_spec is not None and k_step > self.k_step_max:
+            raise Exception("The shallow diffusion k_step is greater than the maximum diffusion k_step(k_step_max)!")
+
+        if not self.training and gt_spec is None and self.k_step_max != self.timesteps:
+            raise Exception("This model can only be used for shallow diffusion and can not infer alone!")
+
         x = self.unit_embed(units) + self.f0_embed((1 + f0 / 700).log()) + self.volume_embed(volume)
         if self.n_spk is not None and self.n_spk > 1:
             if spk_mix_dict is not None:
@@ -64,8 +64,8 @@ def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False)
     y = y.squeeze(1)

     spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device],
-                      center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
-
+                      center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=True)
+    spec = torch.view_as_real(spec)
     spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
     return spec
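The `return_complex=True` change tracks PyTorch's deprecation of real-valued STFT output; pairing it with `torch.view_as_real` restores the old `[..., 2]` layout, so the magnitude computation is unchanged. A minimal self-contained sketch of the same pattern:

```python
import torch

y = torch.randn(1, 4096)           # dummy mono batch
window = torch.hann_window(1024)

spec_c = torch.stft(y, 1024, hop_length=256, win_length=1024, window=window,
                    center=False, normalized=False, onesided=True,
                    return_complex=True)
spec = torch.view_as_real(spec_c)  # [..., 2], same layout as return_complex=False
mag = torch.sqrt(spec.pow(2).sum(-1) + 1e-6)
```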
resample.py (44 lines changed)
@@ -12,18 +12,22 @@ from tqdm import tqdm
 def load_wav(wav_path):
     return librosa.load(wav_path, sr=None)


 def trim_wav(wav, top_db=40):
     return librosa.effects.trim(wav, top_db=top_db)


 def normalize_peak(wav, threshold=1.0):
     peak = np.abs(wav).max()
     if peak > threshold:
         wav = 0.98 * wav / peak
     return wav


 def resample_wav(wav, sr, target_sr):
     return librosa.resample(wav, orig_sr=sr, target_sr=target_sr)


 def save_wav_to_path(wav, save_path, sr):
     wavfile.write(
         save_path,
@@ -31,8 +35,9 @@ def save_wav_to_path(wav, save_path, sr):
         (wav * np.iinfo(np.int16).max).astype(np.int16)
     )


 def process(item):
-    spkdir, wav_name, args = item
+    spkdir, wav_name = item
     speaker = spkdir.replace("\\", "/").split("/")[-1]

     wav_path = os.path.join(args.in_dir, speaker, wav_name)
@@ -45,27 +50,17 @@ def process(item):
         resampled_wav = resample_wav(wav, sr, args.sr2)

         if not args.skip_loudnorm:
-            resampled_wav /= max(resampled_wav.max(), -resampled_wav.min())
+            resampled_wav /= np.max(np.abs(resampled_wav))

         save_path2 = os.path.join(args.out_dir2, speaker, wav_name)
         save_wav_to_path(resampled_wav, save_path2, args.sr2)

-# def process_all_speakers(speakers, args):
-#     process_count = 30 if os.cpu_count() > 60 else (os.cpu_count() - 2 if os.cpu_count() > 4 else 1)
-
-#     with ThreadPoolExecutor(max_workers=process_count) as executor:
-#         for speaker in speakers:
-#             spk_dir = os.path.join(args.in_dir, speaker)
-#             if os.path.isdir(spk_dir):
-#                 print(spk_dir)
-#                 futures = [executor.submit(process, (spk_dir, i, args)) for i in os.listdir(spk_dir) if i.endswith("wav")]
-#                 for _ in tqdm(concurrent.futures.as_completed(futures), total=len(futures)):
-#                     pass

-# multi process
-def process_all_speakers(speakers, args):
+"""
+def process_all_speakers():
     process_count = 30 if os.cpu_count() > 60 else (os.cpu_count() - 2 if os.cpu_count() > 4 else 1)
-    with ProcessPoolExecutor(max_workers=process_count) as executor:
+    with ThreadPoolExecutor(max_workers=process_count) as executor:
         for speaker in speakers:
             spk_dir = os.path.join(args.in_dir, speaker)
             if os.path.isdir(spk_dir):
@@ -73,6 +68,21 @@ def process_all_speakers(speakers, args):
                 futures = [executor.submit(process, (spk_dir, i, args)) for i in os.listdir(spk_dir) if i.endswith("wav")]
                 for _ in tqdm(concurrent.futures.as_completed(futures), total=len(futures)):
                     pass
+"""
+# multi process
+
+
+def process_all_speakers():
+    process_count = 30 if os.cpu_count() > 60 else (os.cpu_count() - 2 if os.cpu_count() > 4 else 1)
+    with ProcessPoolExecutor(max_workers=process_count) as executor:
+        for speaker in speakers:
+            spk_dir = os.path.join(args.in_dir, speaker)
+            if os.path.isdir(spk_dir):
+                print(spk_dir)
+                futures = [executor.submit(process, (spk_dir, i)) for i in os.listdir(spk_dir) if i.endswith("wav")]
+                for _ in tqdm(concurrent.futures.as_completed(futures), total=len(futures)):
+                    pass


 if __name__ == "__main__":
     parser = argparse.ArgumentParser()
@@ -84,4 +94,4 @@ if __name__ == "__main__":

     print(f"CPU count: {cpu_count()}")
     speakers = os.listdir(args.in_dir)
-    process_all_speakers(speakers, args)
+    process_all_speakers()
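The move from `ThreadPoolExecutor` to `ProcessPoolExecutor` matters because the resampling work is CPU-bound and Python's GIL serializes threads, so only processes give real parallelism. The general pattern, as a sketch with a placeholder worker (pool creation belongs under the `__main__` guard so it works on spawn-based platforms):

```python
import concurrent.futures
from concurrent.futures import ProcessPoolExecutor

def work(n):
    return n * n  # stand-in for the CPU-bound resample step

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as executor:
        futures = [executor.submit(work, i) for i in range(16)]
        for f in concurrent.futures.as_completed(futures):
            f.result()
```

Workers submitted to a process pool must be picklable top-level functions, which is also why `process(item)` now takes only picklable arguments and reads the rest from module-level state.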
@@ -41,8 +41,12 @@ if __name__ == '__main__':
                 vocoder.dimension,
                 args.model.n_layers,
                 args.model.n_chans,
-                args.model.n_hidden)
+                args.model.n_hidden,
+                args.model.timesteps,
+                args.model.k_step_max
+                )

+    print(f' > INFO: now model timesteps is {model.timesteps}, and k_step_max is {model.k_step_max}')

     # load parameters
     optimizer = torch.optim.AdamW(model.parameters())
utils.py (2 lines changed)

@@ -534,6 +534,6 @@ class Volume_Extractor:
         n_frames = int(audio.size(-1) // self.hop_size)
         audio2 = audio ** 2
         audio2 = torch.nn.functional.pad(audio2, (int(self.hop_size // 2), int((self.hop_size + 1) // 2)), mode='reflect')
-        volume = torch.FloatTensor([torch.mean(audio2[:, int(n * self.hop_size):int((n + 1) * self.hop_size)]) for n in range(n_frames)])
+        volume = torch.nn.functional.unfold(audio2[:, None, None, :], (1, self.hop_size), stride=self.hop_size)[:, :, :n_frames].mean(dim=1)[0]
         volume = torch.sqrt(volume)
         return volume
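The `unfold` rewrite computes the same per-frame mean energy as the removed Python loop, just vectorized; a quick equivalence check (a sketch with an arbitrary `hop_size`, padding omitted for brevity):

```python
import torch

hop_size = 512
audio2 = torch.randn(1, hop_size * 10) ** 2
n_frames = audio2.size(-1) // hop_size

# Old approach: explicit Python loop over frames.
loop = torch.stack([audio2[:, n * hop_size:(n + 1) * hop_size].mean(dim=-1)
                    for n in range(n_frames)], dim=-1)

# New approach: unfold into (1, hop_size, n_frames) windows, then average.
unfolded = torch.nn.functional.unfold(
    audio2[:, None, None, :], (1, hop_size), stride=hop_size
)[:, :, :n_frames].mean(dim=1)

assert torch.allclose(loop, unfolded, atol=1e-6)
```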
@@ -4,9 +4,10 @@ class SpeechEncoder(object):
         self.hidden_dim = 768
         pass

-    def encoder(self,wav):
-        '''
-        input: wav:[signal_length]
-        output: embedding:[batchsize,hidden_dim,wav_frame]
-        '''
+    def encoder(self, wav):
+        """
+        input: wav:[batchsize,signal_length]
+        output: embedding:[batchsize,hidden_dim,wav_frame]
+        """
         pass
webUI.py (3 lines changed)
@@ -209,7 +209,7 @@ def vc_fn2(sid, input_audio, vc_transform, auto_f0,cluster_ratio, slice_db, nois
     output_file = tts_func(text2tts, tts_rate, tts_voice)

     # adjust the sample rate
-    sr2 = 44100
+    sr2 = model.target_sample
     wav, sr = librosa.load(output_file)
     wav2 = librosa.resample(wav, orig_sr=sr, target_sr=sr2)
     save_path2 = text2tts[0:10] + "_44k" + ".wav"
|
|||
debug_button.change(debug_change,[],[])
|
||||
model_load_button.click(modelAnalysis,[model_path,config_path,cluster_model_path,device,enhance,diff_model_path,diff_config_path,only_diffusion,use_spk_mix],[sid,sid_output])
|
||||
model_unload_button.click(modelUnload,[],[sid,sid_output])
|
||||
os.system("start http://127.0.0.1:7860")
|
||||
app.launch()
|
||||
|
||||
|
||||
|
|