diff --git a/README.md b/README.md
index d3ea6ac..e2fbba9 100644
--- a/README.md
+++ b/README.md
@@ -113,7 +113,7 @@ After completing the above steps, the dataset directory will contain the preproc
 
 * `keep_ckpts`: Keep the last `keep_ckpts` models during training. Set to `0` will keep them all. Default is `3`.
 
-* `all_in_mem`: Load all dataset to RAM. It can be enabled when the disk IO of some platforms is too low and the system memory is much larger than your dataset.
+* `all_in_mem`: Load all dataset to RAM. It can be enabled when the disk IO of some platforms is too low and the system memory is **much larger** than your dataset.
 
 ## 🏋️‍♀️ Training
 
diff --git a/README_zh_CN.md b/README_zh_CN.md
index 66738b2..9cbd020 100644
--- a/README_zh_CN.md
+++ b/README_zh_CN.md
@@ -113,7 +113,7 @@ python preprocess_hubert_f0.py
 
 * `keep_ckpts`:训练时保留最后几个模型,`0`为保留所有,默认只保留最后`3`个
 
-* `all_in_mem`:加载所有数据集到内存中,某些平台的硬盘IO过于低下、同时内存容量*远大于*数据集体积时可以启用
+* `all_in_mem`:加载所有数据集到内存中,某些平台的硬盘IO过于低下、同时内存容量 **远大于** 数据集体积时可以启用
 
 ## 🏋️‍♀️ 训练
 
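For context, the two options touched by the hunks above (`keep_ckpts` and `all_in_mem`) are training-configuration settings. The sketch below is a minimal, hypothetical example of setting them before training; it assumes the options live under a `"train"` section of `configs/config.json`, which may differ in your version of the project, so treat the path and section name as assumptions rather than the canonical layout.

```python
# Hypothetical sketch: adjust the two options described in the README hunks.
# Assumption: they sit under the "train" section of configs/config.json.
import json

CONFIG_PATH = "configs/config.json"  # assumed location of the training config

with open(CONFIG_PATH, "r", encoding="utf-8") as f:
    cfg = json.load(f)

train = cfg.get("train", {})
train["keep_ckpts"] = 3     # keep only the last 3 checkpoints; 0 would keep them all
train["all_in_mem"] = False # set True only if RAM is much larger than the dataset
cfg["train"] = train

with open(CONFIG_PATH, "w", encoding="utf-8") as f:
    json.dump(cfg, f, indent=2, ensure_ascii=False)
```

Enabling `all_in_mem` trades memory for IO: the whole dataset is held in RAM, so it only helps when disk throughput is the bottleneck and memory is plentiful.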