Update inference and readme

This commit is contained in:
ylzz1997 2023-05-30 01:48:41 +08:00
parent 807bb2adfb
commit 10c7c06acb
5 changed files with 75 additions and 15 deletions

View File

@ -309,13 +309,15 @@ Optional parameters: see the next section
- `-eh` | `--enhance`: Whether to use the NSF_HIFIGAN enhancer. It can improve sound quality for models trained on small datasets, but tends to degrade well-trained models, so it is off by default.
- `-shd` | `--shallow_diffusion`: Whether to use shallow diffusion, which can fix some metallic/electronic artifacts. Off by default. When this option is enabled, the NSF_HIFIGAN enhancer is disabled.
- `-usm` | `--use_spk_mix`: Whether to use speaker fusion / dynamic voice blending.
- `-lea` | `--loudness_envelope_adjustment`: Fusion ratio for replacing the output loudness envelope with the input source's loudness envelope. The closer to 1, the more the output loudness envelope is used.
Shallow diffusion settings:
- `-dm` | `--diffusion_model_path`: Diffusion model path
- `-dc` | `--diffusion_config_path`: Diffusion model config file path
- `-ks` | `--k_step`: Number of diffusion steps. The larger it is, the closer the result is to the diffusion model's output. The default is 100.
- `-od` | `--only_diffusion`: Diffusion-only mode, which does not load the sovits model and runs inference with the diffusion model alone.
- `-se` | `--second_encoding`: Secondary encoding. Re-encodes the original audio before shallow diffusion. A black-box option: sometimes it helps, sometimes it hurts.
### Attention
If you run inference with the `whisper-ppg` speech encoder, you need to set `--clip` to 25 and `-lg` to 1, otherwise inference will not work properly.
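For example, a shallow-diffusion run using the default diffusion checkpoint paths might look like `python inference_main.py -m "logs/44k/G_30400.pth" -c "configs/config.json" -n "input.wav" -shd -dm "logs/44k/diffusion/model_0.pt" -dc "logs/44k/diffusion/config.yaml" -ks 100` (illustrative only: the input file name is a placeholder, and you still need your usual speaker and pitch options).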
@ -377,10 +379,17 @@ Introduction: This function can combine multiple sound models into one sound mod
**Refer to the `spkmix.py` file for an introduction to dynamic timbre mixing**
Character mix track writing rules:
Role ID: \[\[Start time 1, end time 1, start value 1, end value 1], [Start time 2, end time 2, start value 2, end value 2]]
The start time of each segment must equal the end time of the previous one. The first start time must be 0 and the last end time must be 1 (time ranges from 0 to 1).
All roles must be filled in. For unused roles, fill in \[\[0., 1., 0., 0.]]
The fusion values can be set freely; within each time segment they change linearly from the start value to the end value. Internally the combination is automatically normalized to sum to 1 (convex combination condition), so any values are safe to use.
Use the `--use_spk_mix` parameter at inference time to enable dynamic timbre mixing; a minimal track definition is sketched below.
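A minimal sketch of a two-speaker crossfade following the rules above (the speaker IDs and values are illustrative; see `spkmix.py` in this repository for the authoritative format):

```python
# Each entry: speaker ID -> [[start_time, end_time, start_value, end_value], ...]
spk_mix_map = {
    0: [[0.0, 0.5, 1.0, 0.5], [0.5, 1.0, 0.5, 0.0]],  # fades out across the clip
    1: [[0.0, 0.5, 0.0, 0.5], [0.5, 1.0, 0.5, 1.0]],  # fades in across the clip
    # Unused speakers still need an entry: [[0., 1., 0., 0.]]
}
```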
## 📤 Exporting to Onnx

View File

@ -311,12 +311,14 @@ python inference_main.py -m "logs/44k/G_30400.pth" -c "configs/config.json" -n "
+ `-eh` | `--enhance`: Whether to use the NSF_HIFIGAN enhancer. It can improve sound quality for models trained on small datasets, but tends to degrade well-trained models, so it is off by default.
+ `-shd` | `--shallow_diffusion`: Whether to use shallow diffusion, which can fix some metallic/electronic artifacts. Off by default. When this option is enabled, the NSF_HIFIGAN enhancer is disabled.
+ `-usm` | `--use_spk_mix`: Whether to use speaker fusion / dynamic voice blending.
+ `-lea` | `--loudness_envelope_adjustment`: Fusion ratio for replacing the output loudness envelope with the input source's loudness envelope. The closer to 1, the more the output loudness envelope is used.
Shallow diffusion settings:
+ `-dm` | `--diffusion_model_path`: Diffusion model path
+ `-dc` | `--diffusion_config_path`: Diffusion model config file path
+ `-ks` | `--k_step`: Number of diffusion steps. The larger it is, the closer the result is to the diffusion model's output. The default is 100.
+ `-od` | `--only_diffusion`: Diffusion-only mode, which does not load the sovits model and runs inference with the diffusion model alone.
+ `-se` | `--second_encoding`: Secondary encoding. Re-encodes the original audio before shallow diffusion. A black-box option: sometimes it helps, sometimes it hurts.
### Attention!
@ -366,24 +368,30 @@ python compress_model.py -c="configs/config.json" -i="logs/44k/G_30400.pth" -o="
Introduction: This function combines multiple voice models into one (as a convex or linear combination of their parameters), creating a voice that does not exist in reality.
**Note:**
1. This function only supports single-speaker models.
2. If you force multi-speaker models through it, make sure all models have the same number of speakers, so that voices under the same Speaker ID can be mixed.
3. Make sure the `model` field in the `config.json` of every model to be mixed is identical.
4. The resulting mixed model can use the `config.json` of any of the source models, but the cluster model will no longer be usable.
5. When batch-uploading models, it is easiest to put them in one folder and select them together.
6. The recommended range for mixing ratios is 0-100; other values also work, but in linear combination mode the result is unpredictable.
7. After mixing, the file is saved in the project root directory as `output.pth`.
8. Convex combination mode applies Softmax to the mixing ratios so that they sum to 1, whereas linear combination mode does not (see the sketch after this list).
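A minimal sketch of the idea, not the repository's `mix_model` implementation (the function name and call shape here are illustrative):

```python
import torch

def mix_state_dicts(state_dicts, mix_rate, mode="convex"):
    # Blend matching parameters from several single-speaker checkpoints.
    rates = torch.tensor(mix_rate, dtype=torch.float32)
    if mode == "convex":
        rates = torch.softmax(rates, dim=0)  # force the mixing ratios to sum to 1
    mixed = {}
    for key in state_dicts[0]:
        mixed[key] = sum(r * sd[key] for r, sd in zip(rates, state_dicts))
    return mixed
```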
### Dynamic voice mixing
**Refer to the `spkmix.py` file for an introduction to dynamic voice mixing**
Character mix track writing rules:
Role ID: \[\[start time 1, end time 1, start value 1, end value 1], [start time 2, end time 2, start value 2, end value 2]]
The start time of each segment must equal the end time of the previous one. The first start time must be 0 and the last end time must be 1 (time ranges from 0 to 1).
All roles must be filled in. For unused roles, just fill in \[\[0., 1., 0., 0.]]
The fusion values can be set freely; within each time segment they change linearly from the start value to the end value. Internally the combination is automatically normalized to sum to 1 (convex combination condition), so any values are safe to use.
Use the `--use_spk_mix` parameter at inference time to enable dynamic voice mixing.
## 📤 Exporting to Onnx

View File

@ -228,7 +228,9 @@ class Svc(object):
cr_threshold = 0.05,
k_step = 100,
frame = 0,
spk_mix = False,
second_encoding = False,
loudness_envelope_adjustment = 1
):
wav, sr = librosa.load(raw_path, sr=self.target_sample)
if spk_mix:
@ -260,6 +262,11 @@ class Svc(object):
audio_mel = None
if self.only_diffusion or self.shallow_diffusion:
vol = self.volume_extractor.extract(audio[None,:])[None,:,None].to(self.dev) if vol==None else vol[:,:,None]
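# Second encoding: resample the audio to 16 kHz and re-extract content features with the hubert encoder before shallow diffusion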
if self.shallow_diffusion and second_encoding:
audio16k = librosa.resample(audio.detach().cpu().numpy(), orig_sr=self.target_sample, target_sr=16000)
audio16k = torch.from_numpy(audio16k).to(self.dev)
c = self.hubert_model.encoder(audio16k)
c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1])
f0 = f0[:,:,None]
c = c.transpose(-1,-2)
audio_mel = self.diffusion_model(
@ -281,6 +288,8 @@ class Svc(object):
f0[:,:,None],
self.hps_ms.data.hop_length,
adaptive_key = enhancer_adaptive_key)
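# Loudness envelope adjustment: when the ratio is not 1, blend the input waveform's RMS envelope into the output via utils.change_rms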
if loudness_envelope_adjustment != 1:
audio = utils.change_rms(wav,self.target_sample,audio,self.target_sample,loudness_envelope_adjustment)
use_time = time.time() - start
print("vits use time:{}".format(use_time))
return audio, audio.shape[-1], n_frames
@ -315,7 +324,9 @@ class Svc(object):
enhancer_adaptive_key = 0,
cr_threshold = 0.05,
k_step = 100,
use_spk_mix = False,
second_encoding = False,
loudness_envelope_adjustment = 1
):
if use_spk_mix:
if len(self.spk2id) == 1:
@ -419,7 +430,9 @@ class Svc(object):
cr_threshold = cr_threshold,
k_step = k_step,
frame = global_frame,
spk_mix = use_spk_mix,
second_encoding = second_encoding,
loudness_envelope_adjustment = loudness_envelope_adjustment
)
global_frame += out_frame
_audio = out_audio.cpu().numpy()

View File

@ -38,12 +38,15 @@ def main():
parser.add_argument('-eh', '--enhance', action='store_true', default=False, help='Whether to use the NSF_HIFIGAN enhancer; it can improve sound quality for models trained on small datasets but tends to degrade well-trained models, off by default')
parser.add_argument('-shd', '--shallow_diffusion', action='store_true', default=False, help='Whether to use shallow diffusion, which can fix some metallic/electronic artifacts, off by default; when enabled, the NSF_HIFIGAN enhancer is disabled')
parser.add_argument('-usm', '--use_spk_mix', action='store_true', default=False, help='Whether to use speaker fusion')
parser.add_argument('-lea', '--loudness_envelope_adjustment', type=float, default=1, help='Fusion ratio for replacing the output loudness envelope with the input source loudness envelope; the closer to 1, the more the output loudness envelope is used')
# Shallow diffusion settings
parser.add_argument('-dm', '--diffusion_model_path', type=str, default="logs/44k/diffusion/model_0.pt", help='Diffusion model path')
parser.add_argument('-dc', '--diffusion_config_path', type=str, default="logs/44k/diffusion/config.yaml", help='Diffusion model config file path')
parser.add_argument('-ks', '--k_step', type=int, default=100, help='Number of diffusion steps; the larger, the closer the result is to the diffusion model output, default 100')
parser.add_argument('-se', '--second_encoding', action='store_true', default=False, help='Secondary encoding: re-encode the original audio before shallow diffusion, a black-box option that sometimes helps and sometimes hurts')
parser.add_argument('-od', '--only_diffusion', action='store_true', default=False, help='Diffusion-only mode: do not load the sovits model, run inference with the diffusion model only')
# Parts that normally need no changes
parser.add_argument('-sd', '--slice_db', type=int, default=-40, help='Default -40; for noisy audio use -30, for dry vocals with breathing kept use -50')
@ -80,8 +83,12 @@ def main():
only_diffusion = args.only_diffusion
shallow_diffusion = args.shallow_diffusion
use_spk_mix = args.use_spk_mix
second_encoding = args.second_encoding
loudness_envelope_adjustment = args.loudness_envelope_adjustment
svc_model = Svc(args.model_path, args.config_path, args.device, args.cluster_model_path,enhance,diffusion_model_path,diffusion_config_path,shallow_diffusion,only_diffusion,use_spk_mix)
infer_tool.mkdir(["raw", "results"])
if len(spk_mix_map)<=1:
use_spk_mix = False
if use_spk_mix:
@ -110,7 +117,9 @@ def main():
"enhancer_adaptive_key" : enhancer_adaptive_key,
"cr_threshold" : cr_threshold,
"k_step":k_step,
"use_spk_mix":use_spk_mix
"use_spk_mix":use_spk_mix,
"second_encoding":second_encoding,
"loudness_envelope_adjustment":loudness_envelope_adjustment
}
audio = svc_model.slice_inference(**kwarg)
key = "auto" if auto_predict_f0 else f"{tran}key"

View File

@ -396,6 +396,27 @@ def mix_model(model_paths,mix_rate,mode):
torch.save(model_tem,os.path.join(os.path.curdir,"output.pth"))
return os.path.join(os.path.curdir,"output.pth")
def change_rms(data1, sr1, data2, sr2, rate):  # data1: input audio, data2: output audio, rate: mixing proportion of data2's envelope (from RVC)
# print(data1.max(),data2.max())
rms1 = librosa.feature.rms(
y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2
) # one RMS point every half second
rms2 = librosa.feature.rms(y=data2.detach().cpu().numpy(), frame_length=sr2 // 2 * 2, hop_length=sr2 // 2)
rms1 = torch.from_numpy(rms1).to(data2.device)
rms1 = F.interpolate(
rms1.unsqueeze(0), size=data2.shape[0], mode="linear"
).squeeze()
rms2 = torch.from_numpy(rms2).to(data2.device)
rms2 = F.interpolate(
rms2.unsqueeze(0), size=data2.shape[0], mode="linear"
).squeeze()
rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6)
data2 *= (
torch.pow(rms1, torch.tensor(1 - rate))
* torch.pow(rms2, torch.tensor(rate - 1))
)
return data2
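# In effect, change_rms rescales data2 by (rms1 / rms2) ** (1 - rate): rate = 1 leaves the
# output envelope untouched, while smaller values pull it toward the input's envelope.
def _change_rms_demo():
    # Illustrative sketch only (not part of the inference path): mirrors the call in
    # Svc.infer, i.e. utils.change_rms(wav, target_sample, audio, target_sample, rate).
    import numpy as np
    sr = 44100
    source = np.random.randn(3 * sr).astype(np.float32)  # stand-in input waveform (numpy)
    converted = torch.randn(3 * sr)                       # stand-in 1-D model output (torch tensor)
    return change_rms(source, sr, converted, sr, rate=0.5)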
class HParams():
def __init__(self, **kwargs):
for k, v in kwargs.items():