upload code

This commit is contained in:
Erythrocyte3803 2023-03-10 20:08:57 +09:00
commit ae4099047c
48 changed files with 8670 additions and 0 deletions

BIN
.DS_Store vendored Normal file

Binary file not shown.

20
.gitignore vendored Normal file

@ -0,0 +1,20 @@
.idea
*.pyc
__pycache__/
*.sh
local_tools/
*.ckpt
*.pth
infer_out/
*.onnx
data/
checkpoints/
processcmd.py
.vscode
WPy64-38100
Winpython64-3.8.10.0dot.exe
*.pkf
*.wav
*.json
*.flac
*.xmp

661
LICENSE.md Normal file

@ -0,0 +1,661 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

194
README.md Normal file

@ -0,0 +1,194 @@
# Diff-SVC
Singing Voice Conversion via diffusion model
## This repository is a refactored fork of diff-svc that adds multi-speaker support, helper scripts, a new hubert and more; evaluate it yourself and use it at your own risk
> The stable version is recommended: [Diff-SVC](https://github.com/prophesier/diff-svc)
>
> The project tutorials are in the doc folder. Please do not ask about this modified version in the original project's channels, Discord, etc.
>
> With the same settings, the Chinese hubert needs roughly 1.5~2x as many training steps as soft hubert; not recommended for beginners
## 更新日志 /Changes Log
> 2023.03.09
>
> Optimized nsf-hifigan speed @diffsinger
>
> 2023.02.18
>
> Updated config parameters; added flask_api and multi-speaker model support; removed the midi-a mode; added diffsinger nesting support @小狼
>
> 2023.01.20
>
> Restructured the directories, simplified the code, removed multi-level inheritance @小狼
>
> 2023.01.18
>
> Config files are now cascaded; only config_nsf or config_ms (pick one) needs editing before preprocessing @小狼
>
> 2023.01.16
>
> Added multi-speaker support (config_ms.yaml); preprocessing code adapted from diffsinger @小狼
>
> 2023.01.09
>
> Added select.py for filtering the dataset's pitch range (with enough data, overlapping pitch ranges are dropped to speed up convergence of the high and low registers)
>
> Removed the 24k pe, hifigan and related dependencies; removed the pitch cwt mode; infer now reuses part of the preprocessing code @小狼
>
> 2023.01.07
>
> Preprocessing now writes an f0_static hyperparameter with pitch-range statistics; added automatic key adjustment (requires f0_static; for older models the hyperparameter can be added to the config with data_static) @小狼
>
> 2023.01.05
>
> Dropped 24k sample-rate and pe support and trimmed some parameters; added a specialization tutorial under doc; batch.py supports export in both specialization and nesting modes;
>
> pre_hubert is a split-step preprocessing script for machines with 4 GB of memory or less; data_static produces pitch-range statistics of the dataset (for reference only); the Chinese hubert depends on fairseq, which you need to install yourself @小狼
>
> 2023.01.01
>
> Updated the slicer to v2 and removed the slice cache, simplifying part of the infer flow; dropped vec support and added the Chinese hubert (base model only, about 1.1 GB) @小狼
>
> 2022.12.17
>
> Added pre_check for checking the environment and data @深夜诗人; improved simplify for trimming models @九尾玄狐; code review @小狼
>
> 2022.12.16
>
> Fixed the hubert model being loaded repeatedly during inference @小狼
>
> 2022.12.04
>
> The 44.1kHz vocoder is open for applications; 44.1kHz is now officially supported
>
> 2022.11.28
>
> Added the no_fs2 option, on by default, which streamlines part of the network, speeds up training and shrinks the model; effective for newly trained models
>
> 2022.11.23
>
> Fixed a major bug that could resample the original gt audio used for inference to 22.05kHz. We apologize for the impact; please double-check your test audio and use the updated code
>
> 2022.11.22
>
> Fixed many bugs, including several that had a major impact on inference quality
>
> 2022.11.20
>
> Added support for most input and output formats at inference time; no manual conversion with other software is needed
>
> 2022.11.13
>
> Fixed the epoch/steps display when resuming a model after an interruption; added a disk cache for f0 processing; added support files for real-time voice-changing inference
>
> 2022.11.11
>
> Fixed slice-duration errors, completed the 44.1khz adaptation, added contentvec support
>
> 2022.11.04
>
> Added mel-spectrogram saving
>
> 2022.11.02
>
> Integrated the new vocoder code and updated the parselmouth algorithm
>
> 2022.10.29
>
> Cleaned up the inference code and added automatic slicing of long audio.
>
> 2022.10.28
> Migrated hubert's onnx inference to torch inference and tidied up the inference logic.
>
> <font color=#FFA500> If you previously downloaded the onnx hubert model, you need to download it again and replace it with the pt model </font>. The config does not need to change; direct GPU inference and preprocessing
> now work on a 1060 with 6 GB of VRAM. See the docs for details.
>
> 2022.10.27
>
> Updated the requirements file and removed redundant dependencies.
>
> 2022.10.27
>
> Fixed a serious bug that made hubert run on the cpu even on gpu servers, a 3-5x slowdown; this affected preprocessing and inference but not training
>
> 2022.10.26
>
> Fixed data preprocessed on windows being unusable on linux; updated parts of the documentation
>
> 2022.10.25
>
> Wrote detailed inference/training documentation, refactored and merged some code, added support for ogg audio (no need to treat it differently from wav, just use it)
>
> 2022.10.24
>
> Added training on custom datasets and simplified the code
>
> 2022.10.22
>
> Finished training on the opencpop dataset and created the repository
## 注意事项 /Notes
> 本项目是基于学术交流目的建立,并非为生产环境准备,不对由此项目模型产生的任何声音的版权问题负责。
>
> 如将本仓库代码二次分发,或将由此项目产出的任何结果公开发表 (包括但不限于视频网站投稿),请注明原作者及代码来源 (此仓库)。
>
> 如果将此项目用于任何其他企划,请提前联系并告知本仓库作者,十分感谢。
> This project is established for academic exchange purposes and is not intended for production environments. We are not responsible for any copyright issues arising from the sound produced by this project's model.
>
> If you redistribute the code in this repository or publicly publish any results produced by this project (including but not limited to video website submissions), please indicate the original author and source code (this repository).
>
> If you use this project for any other plans, please contact and inform the author of this repository in advance. Thank you very much.
## 推理 /Inference
Use `infer.py` as a reference and modify it for your needs.
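A minimal sketch of driving the same entry points from a separate script, based on `infer.py` in this commit; every path, the step count and the output name below are placeholders:
```python
# Minimal sketch based on infer.py from this repo; every path below is a placeholder.
from infer_tools import infer_tool
from infer_tools.infer_tool import Svc
from infer import run_clip

project_name = "my_project"  # the project folder name used during training
model_path = f"./checkpoints/{project_name}/model_ckpt_steps_100000.ckpt"
config_path = f"./checkpoints/{project_name}/config.yaml"

infer_tool.mkdir(["./raw", "./results"])
model = Svc(project_name, config_path, hubert_gpu=True, model_path=model_path, onnx=False)
# key: pitch shift in semitones, acc: diffusion speed-up factor, out_path: where the result is written
run_clip(raw_audio_path="./raw/input.wav", svc_model=model, key=0, acc=20, use_crepe=False,
         spk_id=0, auto_key=False, out_path="./results/input_converted.flac")
```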
## 预处理 /PreProcessing:
```sh
export PYTHONPATH=.
CUDA_VISIBLE_DEVICES=0 python preprocessing/svc_binarizer.py --config configs/config_nsf.yaml
```
## 训练 /Training:
```sh
CUDA_VISIBLE_DEVICES=0 python run.py --config configs/config_nsf.yaml --exp_name <your project name> --reset
```
> Links:
>
> For the detailed training procedure and an introduction to the parameters, see the [training and inference guide](./doc/train_and_inference.markdown)
>
> [Chinese hubert and specialization tutorial](./doc/advanced_skills.markdown)
## 学术 / Acknowledgements
项目基于 [diffsinger](https://github.com/MoonInTheRiver/DiffSinger)、[diffsinger (openvpi 维护版)](https://github.com/openvpi/DiffSinger)、[soft-vc](https://github.com/bshall/soft-vc)
开发.
同时也十分感谢 openvpi 成员在开发训练过程中给予的帮助。
This project is based
on [diffsinger](https://github.com/MoonInTheRiver/DiffSinger), [diffsinger (openvpi maintenance version)](https://github.com/openvpi/DiffSinger),
and [soft-vc](https://github.com/bshall/soft-vc). We would also like to thank the openvpi members for their help during
the development and training process.
> 注意:此项目与同名论文 [DiffSVC](https://arxiv.org/abs/2105.13871) 无任何联系,请勿混淆!
> Note: This project has no connection with the paper of the same name [DiffSVC](https://arxiv.org/abs/2105.13871),
> please
> do not confuse them!
## 工具 / Tools
音频切片参考 [audio-slicer](https://github.com/openvpi/audio-slicer)
Audio Slice Reference [audio-slicer](https://github.com/openvpi/audio-slicer)
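The slicer is also what `infer.py` in this commit uses internally; a small sketch of calling it directly (the wav path is a placeholder):
```python
# Sketch: split a long recording at silences, the same way infer.py does.
from infer_tools import slicer

wav_path = "./raw/long_take.wav"  # placeholder
chunks = slicer.cut(wav_path, db_thresh=-40)  # silence threshold in dB
audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
for slice_tag, data in audio_data:
    # slice_tag is True for a silent chunk, False for a voiced segment
    print("silent" if slice_tag else "voiced", round(len(data) / audio_sr, 3), "s")
```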

58
batch.py Normal file

@ -0,0 +1,58 @@
import io
import os.path
from pathlib import Path
import numpy as np
import soundfile
from infer_tools import infer_tool
from infer_tools.infer_tool import Svc
from utils.hparams import hparams
def run_clip(raw_audio_path, svc_model, key, acc, use_crepe, spk_id=0, auto_key=False, units_mode=False):
    infer_tool.format_wav(raw_audio_path)
    key = svc_model.evaluate_key(raw_audio_path, key, auto_key)
    _f0_tst, _f0_pred, _audio = svc_model.infer(raw_audio_path, key=key, acc=acc, use_crepe=use_crepe, spk_id=spk_id,
                                                singer=not units_mode)
    if units_mode:
        out_path = io.BytesIO()
        soundfile.write(out_path, _audio, hparams["audio_sample_rate"], format='wav')
        out_path.seek(0)
        npy_path = Path(raw_audio_path).with_suffix(".npy")
        np.save(str(npy_path), svc_model.hubert.encode(out_path))
    else:
        out_path = f'./singer_data/{Path(raw_audio_path).name}'
        soundfile.write(out_path, _audio, hparams["audio_sample_rate"], 'PCM_16')
if __name__ == '__main__':
    # project folder name, the one used for training
    project_name = "fox_cn"
    model_path = f'./checkpoints/{project_name}/model_ckpt_steps_370000.ckpt'
    config_path = f'./checkpoints/{project_name}/config.yaml'
    # This script batch-exports short audio (under 30s) and also generates f0/mel for use with diffsinger.
    # Supported wav files go in the batch folder, with their extension
    wav_paths = infer_tool.get_end_file("./batch", "wav")
    trans = -6  # pitch shift in semitones, positive or negative
    spk_id = 0  # leave unchanged for non-multi-speaker models
    # Specialization mode: when enabled, only the timbre-converted units are exported to the batch folder and
    # nothing else is written; when disabled, the script switches to the nesting export mode for diffsinger
    units = True
    # automatic key adjustment; leave it off unless you know what it does
    auto_key = False
    # acceleration factor
    accelerate = 10
    # do not change anything below
    os.makedirs("./singer_data", exist_ok=True)
    model = Svc(project_name, config_path, hubert_gpu=True, model_path=model_path)
    count = 0
    for audio_path in wav_paths:
        count += 1
        if os.path.exists(Path(audio_path).with_suffix(".npy")) and units:
            print(f"{audio_path}: units already exist, skipping")
            continue
        run_clip(audio_path, model, trans, accelerate, spk_id=spk_id, auto_key=auto_key, use_crepe=False,
                 units_mode=units)
        print(f"\r\nnum:{count}\r\ntotal process:{round(count * 100 / len(wav_paths), 2)}%\r\n")

147
configs/base.yaml Normal file

@ -0,0 +1,147 @@
K_step: 1000
accumulate_grad_batches: 1
audio_num_mel_bins: 128
audio_sample_rate: 44100
binarization_args:
  shuffle: false
  with_spk_embed: false
binarizer_cls: preprocessing.svc_binarizer.SvcBinarizer
check_val_every_n_epoch: 10
choose_test_manually: false
clip_grad_norm: 1
content_cond_steps: []
dec_ffn_kernel_size: 9
dec_layers: 4
decoder_type: fft
dict_dir: ''
diff_decoder_type: wavenet
diff_loss_type: l2
dilation_cycle_length: 4
dropout: 0.1
ds_workers: 4
dur_enc_hidden_stride_kernel:
- 0,2,3
- 0,2,3
- 0,1,3
dur_loss: mse
dur_predictor_kernel: 3
dur_predictor_layers: 5
enc_ffn_kernel_size: 9
enc_layers: 4
encoder_K: 8
encoder_type: fft
endless_ds: false
f0_bin: 256
f0_max: 1100.0
f0_min: 40.0
ffn_act: gelu
ffn_padding: SAME
fft_size: 2048
fmax: 16000
fmin: 40
fs2_ckpt: ''
gaussian_start: true
gen_dir_name: ''
gen_tgt_spk_id: -1
hidden_size: 256
hop_size: 512
hubert_gpu: true
infer: false
keep_bins: 128
lambda_commit: 0.25
lambda_energy: 0.0
lambda_f0: 1.0
lambda_ph_dur: 0.3
lambda_sent_dur: 1.0
lambda_uv: 1.0
lambda_word_dur: 1.0
load_ckpt: ''
log_interval: 100
loud_norm: false
max_beta: 0.02
max_epochs: 3000
max_eval_sentences: 1
max_eval_tokens: 60000
max_frames: 42000
max_input_tokens: 60000
max_updates: 1000000
mel_loss: ssim:0.5|l1:0.5
mel_vmax: 1.5
mel_vmin: -6.0
min_level_db: -120
norm_type: gn
num_heads: 2
num_sanity_val_steps: 1
num_spk: 1
num_test_samples: 0
num_valid_plots: 10
optimizer_adam_beta1: 0.9
optimizer_adam_beta2: 0.98
out_wav_norm: false
pe_ckpt: checkpoints/0102_xiaoma_pe/model_ckpt_steps_60000.ckpt
pe_enable: false
perform_enhance: true
pitch_ar: false
pitch_enc_hidden_stride_kernel:
- 0,2,5
- 0,2,5
- 0,2,5
pitch_extractor: parselmouth
pitch_loss: l2
pitch_norm: log
pitch_type: frame
pndm_speedup: 10
predictor_dropout: 0.5
predictor_grad: 0.1
predictor_hidden: -1
predictor_kernel: 5
predictor_layers: 5
prenet_dropout: 0.5
prenet_hidden_size: 256
pretrain_fs_ckpt: ''
processed_data_dir: xxx
profile_infer: false
ref_norm_layer: bn
rel_pos: true
reset_phone_dict: true
save_best: false
save_ckpt: true
save_codes:
- configs
- modules
- src
- utils
save_f0: true
save_gt: false
schedule_type: linear
seed: 1234
sort_by_len: true
spk_cond_steps: []
speaker_id: single
stop_token_weight: 5.0
task_cls: training.svc_task.SvcTask
test_ids: []
test_input_dir: ''
timesteps: 1000
train_set_name: train
test_set_name: test
use_denoise: false
use_energy_embed: false
use_gt_dur: false
use_gt_f0: false
use_nsf: true
use_pitch_embed: true
use_pos_embed: true
use_spk_embed: false
use_spk_id: false
use_split_spk_id: false
use_uv: false
use_var_enc: false
valid_num: 0
valid_set_name: valid
vocoder: modules.vocoders.nsf_hifigan.NsfHifiGAN
vocoder_ckpt: checkpoints/nsf_hifigan/model
warmup_updates: 2000
wav2spec_eps: 1e-6
weight_decay: 0
win_size: 2048

42
configs/config_ms.yaml Normal file

@ -0,0 +1,42 @@
base_config:
- configs/base.yaml
binary_data_dir: data/binary/svc_ms
choose_test_manually: false
config_path: configs/config_ms.yaml
datasets:
- testfox
- jishuang
decay_steps: 40000
hubert_path: checkpoints/hubert/hubert_soft.pt
lr: 0.0005
max_sentences: 32
max_tokens: 80000
num_ckpt_keep: 10
num_spk: 2
raw_data_dir:
- data/raw/testfox
- data/raw/jishuang
residual_channels: 512
residual_layers: 20
speakers:
- testfox
- jishuang
spec_max:
- 0.0
spec_min:
- -5.0
test_prefixes:
- zhibin-3298
- zhibin-2230
- zhibin-3279
- zhibin-3163
- luoxi-283
- luoxi-984
- luoxi-982
use_amp: false
use_cn_hubert: false
use_crepe: false
use_energy_embed: false
use_spk_id: true
val_check_interval: 2000
work_dir: checkpoints/svc_ms

30
configs/config_nsf.yaml Normal file

@ -0,0 +1,30 @@
base_config:
- configs/base.yaml
binary_data_dir: data/binary/testfox
choose_test_manually: false
config_path: configs/config_nsf.yaml
datasets:
- testfox
decay_steps: 40000
hubert_path: checkpoints/hubert/hubert_soft.pt
lr: 0.0005
max_sentences: 32
max_tokens: 80000
num_ckpt_keep: 10
num_spk: 1
num_test_samples: 0
raw_data_dir: data/raw/testfox
residual_channels: 512
residual_layers: 20
spec_max:
- 0.0
spec_min:
- -5.0
test_prefixes:
- test
use_amp: false
use_cn_hubert: false
use_crepe: false
use_energy_embed: false
val_check_interval: 2000
work_dir: checkpoints/testfox

84
doc/advanced_skills.markdown Normal file

@ -0,0 +1,84 @@
# Diff-SVC(advanced skills)
## 0. Background
> svc\
> Singing voice conversion: convert the timbre from the source speaker to the target speaker while keeping the sung content unchanged
> mel\
> The mel spectrogram can be thought of as a numeric format that keeps all of the audio's information; ideally the wav→mel→wav round trip is lossless.
> hubert\
> A speech content encoder that turns a wav into 256-dimensional vectors; audio in any language is encoded by hubert into numeric content ("units") in a uniform format, replacing manual annotation with automatic labeling.
> Timbre and pronunciation habits:\
> Timbre can be understood as the characteristics of the voice; pronunciation habits are a person's particular articulation, pauses and so on. Together they make audio recognizable.\
> Note that an svc model fully preserves articulation habits, so the result may sound indistinguishable from the source audio ("not like the target") even though the actual timbre has been completely replaced.
> Timbre leakage:\
> The model's workflow is wav→units→wav. During speech encoding, the units inevitably carry some of the source audio's timbre, so the output contains part of the source timbre; this is called timbre leakage.
> [soft-vc](https://github.com/bshall/hubert/releases/download/v0.1/hubert-soft-0d54a1f4.pt): \
> An English-trained hubert using the soft-vc technique. It leaks less timbre than a plain hubert, but inevitably loses some content information, which can cause slurred results; since it was trained on English data, the error rate is higher for other languages.\
> This is the default model and the recommended one.
> [cn_hubert](https://github.com/TencentGameMate/chinese_speech_pretrain):\
> A Chinese-trained hubert. Timbre and f0 leakage are more severe (so you cannot transpose directly), but the preserved speech information is also more complete (pronunciation, emotion); the drawbacks are handled through specialization.\
> This model (**only the base model can be used in this project**) has clear pros and cons and **must** be combined with specialization to make up for its weaknesses.
> A rough (not fully accurate) summary of the workflow:\
> The input wav is encoded into units (the speech information) and its f0 (pitch curve) is extracted; the svc model converts units + f0 into a mel in the target timbre, which the vocoder turns into a wav.\
> During preprocessing the dataset's mel, units and f0 are extracted; during training the model learns the units + f0 → mel mapping.\
> At inference time the input wav's units + f0 are extracted; the svc model converts them into a mel in the target timbre, and the vocoder turns that into the desired wav.
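The sketch below restates that workflow as pseudocode. `extract_f0`, `units_and_f0_to_mel` and `vocoder` are illustrative names only, not real APIs of this repo (the actual entry point is `Svc.infer`); `hubert.encode` mirrors its use in `batch.py`.
```
# Pseudocode restating the workflow above; extract_f0, units_and_f0_to_mel and vocoder
# are illustrative names, not real APIs of this repo (the actual entry point is Svc.infer).
def convert(source_wav, svc_model, vocoder, key=0):
    units = svc_model.hubert.encode(source_wav)   # speech content as 256-dim vectors
    f0 = extract_f0(source_wav, transpose=key)    # pitch curve, optionally shifted by `key` semitones
    mel = units_and_f0_to_mel(units, f0)          # learned mapping: units + f0 -> mel in the target timbre
    return vocoder(mel)                           # mel -> waveform
```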
## 1. Specialization
> [cn_hubert download](https://github.com/TencentGameMate/chinese_speech_pretrain)\
> Model path:\
> "checkpoints/cn_hubert/chinese-hubert-base-fairseq-ckpt.pt"\
> The fairseq dependency has to be installed yourself
> Theory:\
> Ideally, units would contain only the speech content; in practice they also carry timbre and f0 information, so what the model actually learns during training is the mapping units (dataset timbre, encoded content) + f0 → mel (dataset timbre, original content).
> Process:\
> At inference time the mapping being performed is units (arbitrary timbre, any content covered by the dataset) + f0 → mel (dataset timbre, corresponding content); since this deviates from the training conditions, manual intervention, i.e. specialization, is needed.
> Pros and cons:\
> Specialization turns an any2one model into a one2one model: at the cost of fixing the input timbre, the speech information is preserved better, i.e. any→A becomes B→A.
> When to apply:\
> The soft-vc model already includes a remedy for timbre leakage, so normal training is enough; for cn_hubert, specialization training is recommended.
> Procedure:\
> The target timbre is A and the input timbre is B; every model below is the Chinese hubert base model\
> 1. Train an svc model on B as the dataset (e.g. opencpop) using the normal procedure\
> 2. Run target timbre A's dataset through the B model once, giving a wav dataset with A's content and B's timbre\
> 3. Pre-extract the units of this dataset, giving units with A's content and B's timbre\
> Note: use batch.py with units set to True and the B model loaded, and put dataset A's wav files into the batch folder; the units needed for specialization are then exported to the batch folder automatically, covering steps 2 and 3 (see the sketch at the end of this section)\
> 4. Put the units from the previous step into the same folder as target timbre A's dataset and train with the normal procedure\
> 5. When the program finds npy-format units in the dataset, it loads them automatically instead of extracting them on the fly\
> 6. Specialization training therefore learns the mapping units (B's timbre, A's encoded content) + f0 → mel (A's timbre, A's original content); the resulting model only accepts audio in B's timbre as input, and conversion quality is higher
> Inference:\
> Specialized inference performs the mapping units (B's timbre, any content covered by the dataset) + f0 → mel (A's timbre, corresponding content), which is much closer to the training conditions (both are B→A),\
> so a specialized model resolves timbre leakage and related problems and yields higher conversion quality
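A sketch of steps 2-3, mirroring `batch.py` from this repo; the project name and checkpoint step count are placeholders:
```
# Sketch of steps 2-3: export specialization units for dataset A through the model trained on B.
# Mirrors batch.py in this repo; names and step counts are placeholders.
from infer_tools import infer_tool
from infer_tools.infer_tool import Svc
from batch import run_clip

project_name = "speaker_B"   # model trained on the input timbre B
model_path = f"./checkpoints/{project_name}/model_ckpt_steps_300000.ckpt"
config_path = f"./checkpoints/{project_name}/config.yaml"

model = Svc(project_name, config_path, hubert_gpu=True, model_path=model_path)
for wav in infer_tool.get_end_file("./batch", "wav"):   # dataset A's wav clips, placed in ./batch
    # units_mode=True converts the clip to B's timbre and saves its hubert units as a .npy next to it
    run_clip(wav, model, key=0, acc=10, use_crepe=False, spk_id=0, units_mode=True)
```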
## 2. Advanced specialization
> Loop specialization:\
> Specialization training gives a high-quality B→A conversion model; by the same logic, using that model as the pre-trained model to export units gives an A→B specialized model\
> After repeating this several times, the conversion loss between timbres A and B keeps shrinking and the results improve
> Multi-way specialization:\
> Pre-trained models can be built separately with timbres B, C, D and so on\
> Split dataset A into several parts and export datasets with A's content and B/C/D's timbres for specialization training\
> This gives a specialized model with three fixed inputs (B, C, D) and one fixed output (A's timbre); because the input conditions are more complex, more data is required
> Timbre resistance:\
> If dataset A is split into several parts and specialized with units of different timbres, the model can be seen as adapting to the three timbres B, C, D\
> If dataset A is instead copied several times and specialized with units of different timbres, the following special situation appears:\
> 1. X units, B's timbre → X mel, A's timbre\
> 2. X units, C's timbre → X mel, A's timbre\
> 3. X units, D's timbre → X mel, A's timbre\
> The same content X now has units in several timbres all pointing to the same X mel; the model then finds it easier to extract what they share (the semantic information) and to ignore the leaked timbre information\
> With enough timbre variety, this behaves less like adaptation to a few specific timbres and more like resistance training against unseen timbres, and generalizes better

214
doc/train_and_inference.markdown Normal file

@ -0,0 +1,214 @@
# Diff-SVC(train/inference by yourself)
## Adapted from the original tutorial
## 0. Environment setup
```
pip install -r requirements.txt
```
## 1. Inference
> Use infer.py in the repository root\
> Modify the following parameters in the first block:
```
config_path='path to the config.yaml from the checkpoints archive'
e.g. './checkpoints/nyaru/config.yaml'
config and checkpoint correspond one-to-one; do not use another config
project_name='name of this project'
e.g. 'nyaru'
model_path='full path to the ckpt file'
e.g. './checkpoints/nyaru/model_ckpt_steps_112000.ckpt'
hubert_gpu=True
whether to run hubert (one module of the model) on the gpu during inference; does not affect the rest of the model
the current version has greatly reduced hubert's gpu memory usage, so full inference works on a 1060 with 6 GB of VRAM and this no longer needs to be turned off.
long audio is now sliced automatically at silences (both in the ipynb and in infer.py) when it exceeds 30s; thanks to @小狼 for the code
```
### Adjustable parameters:
```
file_names=["逍遥仙","xxx"]#传入音频的路径默认在文件夹raw中
use_crepe=True
#crepe是一个F0算法效果好但速度慢改成False会使用效果稍逊于crepe但较快的parselmouth算法
thre=0.05
#crepe的噪声过滤阈值源音频干净可适当调大噪音多就保持这个数值或者调小前面改成False后这个参数不起作用
pndm_speedup=20
#推理加速算法倍数默认是1000步这里填成10就是只使用100步合成是一个中规中矩的数值这个数值可以高到50倍(20步合成)没有明显质量损失,再大可能会有可观的质量损失,注意如果下方开启了use_gt_mel, 应保证这个数值小于add_noise_step并尽量让其能够整除
key=0
#变调参数默认为0(不是1!!)将源音频的音高升高key个半音后合成如男声转女生可填入8或者12等(12就是升高一整个8度)
wav_gen='yyy.wav'#输出音频的路径,默认在项目根目录中,可通过改变扩展名更改保存文件类型
```
## 2. Data preprocessing and training
### 2.1 Preparing the data
> wav and ogg audio are currently supported; the sample rate should preferably be above 24kHz, and the program handles sample-rate and channel conversion automatically. The sample rate must not be below 16kHz (it usually is not).\
> The audio needs to be sliced into short clips of ideally 5-15s; there is no strict length requirement, but avoid clips that are too long or too short. The audio must be a clean dry vocal of the target speaker only, with no background music or other voices, and preferably without heavy reverb. If the audio went through vocal extraction or similar processing, keep the processed quality as high as possible.\
> For single-speaker training, copy config_nsf.yaml and modify it; try to keep the total duration at 3 hours or more; no extra annotation is needed.
### 2.2 Modifying the hyperparameter configuration
> First make a backup of config_nsf.yaml (in the configs folder), then modify it\
> For multi-speaker training, copy config_ms.yaml and modify it \
> The parameters you are likely to touch are listed below (using nyaru as the project name):
```
K_step: 1000
#total number of steps in the diffusion process; changing it is not recommended
binary_data_dir: data/binary/nyaru
where the preprocessed data is stored; change the suffix to your project name
config_path: configs/config_nsf.yaml
the path of this yaml file itself; preprocessing writes data into it, so this must be set to the full path where the file will actually live
choose_test_manually: false
manually choose the test set; off by default, 5 audio clips are picked at random as the test set.
if set to true, fill test_prefixes: with filename prefixes of the test data; files starting with those prefixes are used as the test set
this is a list and can hold several prefixes, e.g.:
test_prefixes:
- test
- aaaa
- 5012
- speaker1024
important: the test set must *not* be empty; to avoid surprises, it is best not to pick the test set manually
endless_ds: false
if your dataset is small and every epoch is very short, turn this on; 1000 normal epochs are then counted as one epoch
hubert_path: checkpoints/hubert/hubert_soft.pt
where the hubert model is stored; make sure the path is correct (after unpacking the checkpoints archive it usually already is and does not need changing); the torch version is now used for inference
hubert_gpu: true
whether to run hubert (one module of the model) on the gpu during preprocessing; if off, the cpu is used and preprocessing takes much longer. Whether hubert uses the gpu at inference time is controlled separately in the inference settings and is not affected by this. With the torch version of hubert, preprocessing and direct inference of audio under 1 minute fit within 6 GB of VRAM on a 1060, so this usually does not need to be turned off.
lr: 0.0008
#initial learning rate: this value corresponds to a batch size of 88 (80 GB of VRAM); lower it somewhat if your batch size is smaller
decay_steps: 20000
the learning rate is halved every 20000 steps; increase this value if your batch size is small
#for a batch size around 30-40 (30 GB of VRAM), lr=0.0004 and decay_steps=40000 are recommended
max_frames: 42000
max_input_tokens: 6000
max_sentences: 88
max_tokens: 128000
#the batch size is computed dynamically from these parameters; if you are unsure what they mean, change only max_sentences and set it to the largest batch size you can afford, to avoid running out of VRAM
raw_data_dir: data/raw/nyaru
#where the raw data lives before preprocessing; put the raw wav files in this directory; the internal folder structure does not matter, it is flattened automatically
residual_channels: 384
residual_layers: 20
#a pair of parameters controlling the size of the core network; larger values mean more parameters and slower training, and not necessarily better results. For a somewhat larger dataset you can raise the first one to 512. Feel free to experiment, but leave them alone if you are unsure.
use_crepe: true
#use crepe to extract F0 during preprocessing; turn it on for quality, off for speed
val_check_interval: 2000
#run inference on the test set and save a ckpt every 2000 steps
vocoder_ckpt: checkpoints/nsf_hifigan/model
#the checkpoint of the corresponding vocoder; be careful not to get it wrong
work_dir: checkpoints/nyaru
#change the suffix to your project name
```
> Do not modify the other parameters if you do not know what they do, even if the name makes you think you do.
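> To double-check what will actually be used, you can load the config the same way this repo's scripts do (a sketch; the set_hparams call mirrors infer_tools/f0_static.py):
```
# Sketch: load the merged config (base.yaml plus your file) and print a few values before training.
from utils.hparams import set_hparams

hparams = set_hparams(config="configs/config_nsf.yaml", exp_name='', infer=True,
                      reset=True, hparams_str='', print_hparams=False)
for k in ("raw_data_dir", "binary_data_dir", "hubert_path", "lr", "decay_steps",
          "max_sentences", "use_crepe", "vocoder_ckpt", "work_dir"):
    print(k, "=", hparams[k])
```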
### 2.3 Data preprocessing
Run the following commands in the diff-svc directory\
#windows
```
set PYTHONPATH=.
set CUDA_VISIBLE_DEVICES=0
python preprocessing\svc_binarizer.py --config configs/config_nsf.yaml
```
#linux
```
export PYTHONPATH=.
CUDA_VISIBLE_DEVICES=0 python preprocessing/svc_binarizer.py --config configs/config_nsf.yaml
```
For preprocessing, @小狼 provides a script that handles hubert and the other features in separate passes; if normal preprocessing runs out of VRAM, adapt and use \
pre_hubert.py, then run the normal command; hubert features that were processed ahead of time are picked up automatically
### 2.4 Training
#windows
```
set CUDA_VISIBLE_DEVICES=0
python run.py --config configs/config_nsf.yaml --exp_name nyaru --reset
```
#linux
```
CUDA_VISIBLE_DEVICES=0 python run.py --config configs/config_nsf.yaml --exp_name nyaru --reset
```
> Change exp_name to your project name and adjust the config path; make sure it is the same config file that was used for preprocessing\
*Important*
>
After training, if preprocessing was not done locally, download not only the ckpt file but also the config file and use that one for inference; do not reuse the local copy you uploaded earlier, because preprocessing writes data into the config. The config used for inference must be the same one used for preprocessing.
### 2.5 Possible problems:
> 2.5.1 'Upsample' object has no attribute 'recompute_scale_factor'\
> Seen with the torch build for cuda 11.3. If you run into it, locate (for example via your IDE's go-to-definition)
> the torch.nn.modules.upsampling.py file among your python packages (
> in a conda environment: conda_dir\envs\env_name\Lib\site-packages\torch\nn\modules\upsampling.py) and change lines 153-154 from
```
return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners,recompute_scale_factor=self.recompute_scale_factor)
```
> to
```
return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners)
# recompute_scale_factor=self.recompute_scale_factor)
```
> 2.5.2 no module named 'utils'\
> In your runtime environment (e.g. a colab notebook), set things up as follows:
```
import os
os.environ['PYTHONPATH']='.'
!CUDA_VISIBLE_DEVICES=0 python preprocessing/svc_binarizer.py --config configs/config_nsf.yaml
```
Note: this must be run from the root directory of the project folder
> 2.5.3 cannot load library 'libsndfile.so'\
> An error you may hit on linux; run the following command
```
apt-get install libsndfile1 -y
```
> 2.5.4 cannot load import 'consume_prefix_in_state_dict_if_present'\
> Your torch version is too old; switch to a newer torch
> 2.5.5 Preprocessing is too slow\
> Check whether use_crepe is enabled in the config; turning it off speeds preprocessing up significantly.\
> Check whether hubert_gpu is enabled in the config.

62
flask_api.py Normal file

@ -0,0 +1,62 @@
import io
import logging
import librosa
import soundfile
from flask import Flask, request, send_file
from flask_cors import CORS
from infer_tools.infer_tool import Svc
from utils.hparams import hparams
app = Flask(__name__)
CORS(app)
logging.getLogger('numba').setLevel(logging.WARNING)
@app.route("/voiceChangeModel", methods=["POST"])
def voice_change_model():
    request_form = request.form
    wave_file = request.files.get("sample", None)
    # pitch shift in semitones
    f_pitch_change = float(request_form.get("fPitchChange", 0))
    # speaker id
    speak_id = int(request_form.get("sSpeakId", 0))
    if enable_spk_id_cover:
        speak_id = spk_id
    print("speaker id: " + str(speak_id))
    # sample rate expected by the DAW
    daw_sample = int(float(request_form.get("sampleRate", 0)))
    # read the wav file received over http
    input_wav_path = io.BytesIO(wave_file.read())
    # model inference
    _f0_tst, _f0_pred, _audio = svc_model.infer(input_wav_path, spk_id=speak_id, key=f_pitch_change, acc=accelerate,
                                                use_crepe=False)
    tar_audio = librosa.resample(_audio, hparams["audio_sample_rate"], daw_sample)
    # return the audio
    out_wav_path = io.BytesIO()
    soundfile.write(out_wav_path, tar_audio, daw_sample, format="wav")
    out_wav_path.seek(0)
    return send_file(out_wav_path, download_name="temp.wav", as_attachment=True)
if __name__ == '__main__':
    # project folder name, the one used for training
    project_name = "fox_cn"
    model_path = f'./checkpoints/{project_name}/clean_model_ckpt_steps_120000.ckpt'
    config_path = f'./checkpoints/{project_name}/config.yaml'
    # Default speaker id, and whether it should take precedence over the speaker id passed in by the vst plugin.
    spk_id = 0
    enable_spk_id_cover = False
    # acceleration factor
    accelerate = 50
    hubert_gpu = True
    svc_model = Svc(project_name, config_path, hubert_gpu, model_path)
    # this must match the vst plugin; changing it is not recommended
    app.run(port=6842, host="0.0.0.0", debug=False, threaded=False)
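Not part of the committed file: a minimal client sketch for exercising this endpoint, assuming the server above is running locally; the form field names match what voice_change_model reads, and requests is an extra dependency.
```python
# Hypothetical client for the /voiceChangeModel endpoint defined above.
import requests

with open("test.wav", "rb") as f:
    resp = requests.post(
        "http://127.0.0.1:6842/voiceChangeModel",
        files={"sample": ("test.wav", f, "audio/wav")},
        data={"fPitchChange": 0, "sSpeakId": 0, "sampleRate": 44100},
    )
resp.raise_for_status()
with open("converted.wav", "wb") as out:
    out.write(resp.content)
```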

81
infer.py Normal file

@ -0,0 +1,81 @@
import io
from pathlib import Path
import numpy as np
import soundfile
from infer_tools import infer_tool
from infer_tools import slicer
from infer_tools.infer_tool import Svc
from utils.hparams import hparams
def run_clip(raw_audio_path, svc_model, key, acc, use_crepe, spk_id=0, auto_key=False, out_path=None, slice_db=-40,
             **kwargs):
    print(f'code version:2023-02-18')
    clean_name = Path(raw_audio_path).name.split(".")[0]
    infer_tool.format_wav(raw_audio_path)
    wav_path = Path(raw_audio_path).with_suffix('.wav')
    key = svc_model.evaluate_key(wav_path, key, auto_key)
    chunks = slicer.cut(wav_path, db_thresh=slice_db)
    audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks)
    count = 0
    f0_tst, f0_pred, audio = [], [], []
    for (slice_tag, data) in audio_data:
        print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======')
        length = int(np.ceil(len(data) / audio_sr * hparams['audio_sample_rate']))
        raw_path = io.BytesIO()
        soundfile.write(raw_path, data, audio_sr, format="wav")
        raw_path.seek(0)
        if slice_tag:
            print('jump empty segment')
            _f0_tst, _f0_pred, _audio = (
                np.zeros(int(np.ceil(length / hparams['hop_size']))),
                np.zeros(int(np.ceil(length / hparams['hop_size']))),
                np.zeros(length))
        else:
            _f0_tst, _f0_pred, _audio = svc_model.infer(raw_path, spk_id=spk_id, key=key, acc=acc, use_crepe=use_crepe)
        fix_audio = np.zeros(length)
        fix_audio[:] = np.mean(_audio)
        fix_audio[:len(_audio)] = _audio[0 if len(_audio) < len(fix_audio) else len(_audio) - len(fix_audio):]
        f0_tst.extend(_f0_tst)
        f0_pred.extend(_f0_pred)
        audio.extend(list(fix_audio))
        count += 1
    if out_path is None:
        out_path = f'./results/{clean_name}_{key}key_{project_name}_{hparams["residual_channels"]}_{hparams["residual_layers"]}_{int(step / 1000)}k_{accelerate}x.{kwargs["format"]}'
    soundfile.write(out_path, audio, hparams["audio_sample_rate"], 'PCM_16', format=out_path.split('.')[-1])
    return np.array(f0_tst), np.array(f0_pred), audio
if __name__ == '__main__':
# 工程文件夹名,训练时用的那个
project_name = "fox_cn"
model_path = f'./checkpoints/{project_name}/model_ckpt_steps_370000.ckpt'
config_path = f'./checkpoints/{project_name}/config.yaml'
# 支持多个wav/ogg文件放在raw文件夹下带扩展名
file_names = ["逍遥仙"]
spk_id = 0
# 自适应变调(仅支持单人模型)
auto_key = False
trans = [0] # 音高调整,支持正负(半音),数量与上一行对应,不足的自动按第一个移调参数补齐
# 加速倍数
accelerate = 20
hubert_gpu = True
wav_format = 'flac'
step = int(model_path.split("_")[-1].split(".")[0])
# 下面不动
infer_tool.mkdir(["./raw", "./results"])
infer_tool.fill_a_to_b(trans, file_names)
model = Svc(project_name, config_path, hubert_gpu, model_path, onnx=False)
for f_name, tran in zip(file_names, trans):
if "." not in f_name:
f_name += ".wav"
audio_path = f"./raw/{f_name}"
run_clip(raw_audio_path=audio_path, svc_model=model, key=tran, acc=accelerate, use_crepe=False,
spk_id=spk_id, auto_key=auto_key, project_name=project_name, format=wav_format)
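A minimal usage sketch for calling run_clip from another script (illustrative only; the checkpoint and audio paths below are placeholders, not files shipped with this commit). Passing an explicit out_path avoids the module-level globals (project_name, step, accelerate) that the default output name depends on:

from infer import run_clip
from infer_tools import infer_tool
from infer_tools.infer_tool import Svc

# Placeholder project/checkpoint paths.
model = Svc("fox_cn", "./checkpoints/fox_cn/config.yaml", True,
            "./checkpoints/fox_cn/model_ckpt_steps_370000.ckpt")
infer_tool.mkdir(["./results"])
f0_gt, f0_pred, audio = run_clip("./raw/example.wav", svc_model=model, key=0, acc=20,
                                 use_crepe=False, out_path="./results/example_out.flac")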

116
infer_tools/f0_static.py Normal file
View File

@ -0,0 +1,116 @@
import json
import os
import shutil
from functools import reduce
from pathlib import Path
import matplotlib
import matplotlib.pyplot as plt
import yaml
from pylab import xticks, np
from tqdm import tqdm
from modules.vocoders.nsf_hifigan import NsfHifiGAN
from preprocessing.process_pipeline import get_pitch_parselmouth, get_pitch_crepe
from utils.hparams import set_hparams, hparams
head_list = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
def compare_pitch(f0_static_dict, pitch_time_temp, trans_key=0):
return sum({k: v * f0_static_dict[str(k + trans_key)] for k, v in pitch_time_temp.items() if
str(k + trans_key) in f0_static_dict}.values())
def f0_to_pitch(ff):
f0_pitch = 69 + 12 * np.log2(ff / 440)
return round(f0_pitch, 0)
def pitch_to_name(pitch):
return f"{head_list[int(pitch % 12)]}{int(pitch / 12) - 1}"
def get_f0(audio_path, crepe=False):
wav, mel = NsfHifiGAN.wav2spec(audio_path)
if crepe:
f0, pitch_coarse = get_pitch_crepe(wav, mel, hparams)
else:
f0, pitch_coarse = get_pitch_parselmouth(wav, mel, hparams)
return f0
def merge_f0_dict(dict_list):
def sum_dict(a, b):
temp = dict()
for key in a.keys() | b.keys():
temp[key] = sum([d.get(key, 0) for d in (a, b)])
return temp
return reduce(sum_dict, dict_list)
def collect_f0(f0):
pitch_num = {}
pitch_list = [f0_to_pitch(x) for x in f0[f0 > 0]]
for key in pitch_list:
pitch_num[key] = pitch_num.get(key, 0) + 1
return pitch_num
def static_f0_time(f0):
if isinstance(f0, dict):
pitch_num = merge_f0_dict({k: collect_f0(v) for k, v in f0.items()}.values())
else:
pitch_num = collect_f0(f0)
static_pitch_time = {}
sort_key = sorted(pitch_num.keys())
for key in sort_key:
static_pitch_time[key] = round(pitch_num[key] * hparams['hop_size'] / hparams['audio_sample_rate'], 2)
return static_pitch_time
def get_end_file(dir_path, end):
file_lists = []
for root, dirs, files in os.walk(dir_path):
files = [f for f in files if f[0] != '.']
dirs[:] = [d for d in dirs if d[0] != '.']
for f_file in files:
if f_file.endswith(end):
file_lists.append(os.path.join(root, f_file).replace("\\", "/"))
return file_lists
if __name__ == "__main__":
# Add f0_static (vocal range statistics) to the config file
config_path = "../training/config_nsf.yaml"
hparams = set_hparams(config=config_path, exp_name='', infer=True, reset=True, hparams_str='', print_hparams=False)
f0_dict = {}
# Collect all wav files under the batch folder
wav_paths = get_end_file("../batch", "wav")
# Extract f0 with parselmouth
with tqdm(total=len(wav_paths)) as p_bar:
p_bar.set_description('Processing')
for wav_path in wav_paths:
f0_dict[wav_path] = get_f0(wav_path, crepe=False)
p_bar.update(1)
pitch_time = static_f0_time(f0_dict)
total_time = round(sum(pitch_time.values()), 2)
pitch_time["total_time"] = total_time
print(f"total time: {total_time}s")
shutil.copy(config_path, f"{Path(config_path).parent}/back_{Path(config_path).name}")
with open(config_path, encoding='utf-8') as f:
_hparams = yaml.safe_load(f)
_hparams['f0_static'] = json.dumps(pitch_time)
with open(config_path, 'w', encoding='utf-8') as f:
yaml.safe_dump(_hparams, f)
print("原config文件已在原目录建立备份back_config.yaml")
print("音域统计已保存至config文件此模型可使用自动变调功能")
matplotlib.use('TkAgg')
plt.title("数据集音域统计", fontproperties='SimHei')
plt.xlabel("音高", fontproperties='SimHei')
plt.ylabel("时长(s)", fontproperties='SimHei')
xticks_labels = [pitch_to_name(i) for i in range(36, 96)]
xticks(np.linspace(36, 96, 60, endpoint=True), xticks_labels)
plt.plot(pitch_time.keys(), pitch_time.values(), color='dodgerblue')
plt.show()
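A quick sanity check of the pitch helpers above (a sketch, assuming the repo's dependencies are installed so infer_tools.f0_static is importable): 440 Hz is MIDI pitch 69, i.e. A4, and 261.63 Hz rounds to MIDI 60, i.e. C4.

from infer_tools.f0_static import f0_to_pitch, pitch_to_name

print(pitch_to_name(f0_to_pitch(440.0)))   # A4
print(pitch_to_name(f0_to_pitch(261.63)))  # C4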

202
infer_tools/infer_tool.py Normal file
View File

@ -0,0 +1,202 @@
import json
import os
import pathlib
import time
from io import BytesIO
from pathlib import Path
import librosa
import numpy as np
import soundfile
import torch
import utils
from infer_tools.f0_static import compare_pitch, static_f0_time
from modules.diff.diffusion import GaussianDiffusion
from modules.diff.net import DiffNet
from modules.vocoders.nsf_hifigan import NsfHifiGAN
from preprocessing.hubertinfer import HubertEncoder
from preprocessing.process_pipeline import File2Batch, get_pitch_parselmouth
from utils.hparams import hparams, set_hparams
from utils.pitch_utils import denorm_f0, norm_interp_f0
def timeit(func):
def run(*args, **kwargs):
t = time.time()
res = func(*args, **kwargs)
print("executing '%s' took %.3fs" % (func.__name__, time.time() - t))
return res
return run
def format_wav(audio_path):
if Path(audio_path).suffix == '.wav':
return
raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None)
soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate)
def fill_a_to_b(a, b):
if len(a) < len(b):
for _ in range(0, len(b) - len(a)):
a.append(a[0])
def get_end_file(dir_path, end):
file_lists = []
for root, dirs, files in os.walk(dir_path):
files = [f for f in files if f[0] != '.']
dirs[:] = [d for d in dirs if d[0] != '.']
for f_file in files:
if f_file.endswith(end):
file_lists.append(os.path.join(root, f_file).replace("\\", "/"))
return file_lists
def mkdir(paths: list):
for path in paths:
if not os.path.exists(path):
os.mkdir(path)
class Svc:
def __init__(self, project_name, config_name, hubert_gpu, model_path, onnx=False):
self.project_name = project_name
self.DIFF_DECODERS = {
'wavenet': lambda hp: DiffNet(hp['audio_num_mel_bins']),
}
self.model_path = model_path
self.dev = torch.device("cuda")
self._ = set_hparams(config=config_name, exp_name=self.project_name, infer=True,
reset=True, hparams_str='', print_hparams=False)
hparams['hubert_gpu'] = hubert_gpu
self.hubert = HubertEncoder(hparams['hubert_path'], onnx=onnx)
self.model = GaussianDiffusion(
phone_encoder=self.hubert,
out_dims=hparams['audio_num_mel_bins'],
denoise_fn=self.DIFF_DECODERS[hparams['diff_decoder_type']](hparams),
timesteps=hparams['timesteps'],
K_step=hparams['K_step'],
loss_type=hparams['diff_loss_type'],
spec_min=hparams['spec_min'], spec_max=hparams['spec_max'],
)
utils.load_ckpt(self.model, self.model_path, 'model', force=True, strict=True)
self.model.cuda()
self.vocoder = NsfHifiGAN()
def infer(self, in_path, key, acc, spk_id=0, use_crepe=True, singer=False):
batch = self.pre(in_path, acc, spk_id, use_crepe)
batch['f0'] = batch['f0'] + (key / 12)
batch['f0'][batch['f0'] > np.log2(hparams['f0_max'])] = 0
@timeit
def diff_infer():
spk_embed = batch.get('spk_embed') if not hparams['use_spk_id'] else batch.get('spk_ids')
energy = batch.get('energy').cuda() if batch.get('energy') is not None else None
if spk_embed is None:
spk_embed = torch.LongTensor([0])
diff_outputs = self.model(
hubert=batch['hubert'].cuda(), spk_embed_id=spk_embed.cuda(), mel2ph=batch['mel2ph'].cuda(),
f0=batch['f0'].cuda(), energy=energy, ref_mels=batch["mels"].cuda(), infer=True)
return diff_outputs
outputs = diff_infer()
batch['outputs'] = outputs['mel_out']
batch['mel2ph_pred'] = outputs['mel2ph']
batch['f0_gt'] = denorm_f0(batch['f0'], batch['uv'], hparams)
batch['f0_pred'] = outputs.get('f0_denorm')
return self.after_infer(batch, singer, in_path)
@timeit
def after_infer(self, prediction, singer, in_path):
for k, v in prediction.items():
if type(v) is torch.Tensor:
prediction[k] = v.cpu().numpy()
# remove paddings
mel_gt = prediction["mels"]
mel_gt_mask = np.abs(mel_gt).sum(-1) > 0
mel_pred = prediction["outputs"]
mel_pred_mask = np.abs(mel_pred).sum(-1) > 0
mel_pred = mel_pred[mel_pred_mask]
mel_pred = np.clip(mel_pred, hparams['mel_vmin'], hparams['mel_vmax'])
f0_gt = prediction.get("f0_gt")
f0_pred = prediction.get("f0_pred")
if f0_pred is not None:
f0_gt = f0_gt[mel_gt_mask]
if len(f0_pred) > len(mel_pred_mask):
f0_pred = f0_pred[:len(mel_pred_mask)]
f0_pred = f0_pred[mel_pred_mask]
torch.cuda.is_available() and torch.cuda.empty_cache()
if singer:
data_path = in_path.replace("batch", "singer_data")
mel_path = data_path[:-4] + "_mel.npy"
f0_path = data_path[:-4] + "_f0.npy"
np.save(mel_path, mel_pred)
np.save(f0_path, f0_pred)
wav_pred = self.vocoder.spec2wav(mel_pred, f0=f0_pred)
return f0_gt, f0_pred, wav_pred
def pre(self, wav_fn, accelerate, spk_id=0, use_crepe=True):
if isinstance(wav_fn, BytesIO):
item_name = self.project_name
else:
song_info = wav_fn.split('/')
item_name = song_info[-1].split('.')[-2]
temp_dict = {'wav_fn': wav_fn, 'spk_id': spk_id, 'id': 0}
temp_dict = File2Batch.temporary_dict2processed_input(item_name, temp_dict, self.hubert, infer=True,
use_crepe=use_crepe)
hparams['pndm_speedup'] = accelerate
batch = File2Batch.processed_input2batch([getitem(temp_dict)])
return batch
def evaluate_key(self, wav_path, key, auto_key):
if "f0_static" in hparams.keys():
f0_static = json.loads(hparams['f0_static'])
wav, mel = self.vocoder.wav2spec(wav_path)
input_f0 = get_pitch_parselmouth(wav, mel, hparams)[0]
pitch_time_temp = static_f0_time(input_f0)
eval_dict = {}
for trans_key in range(-12, 12):
eval_dict[trans_key] = compare_pitch(f0_static, pitch_time_temp, trans_key=trans_key)
sort_key = sorted(eval_dict, key=eval_dict.get, reverse=True)[:5]
print(f"推荐移调:{sort_key}")
if auto_key:
print(f"自动变调已启用您的输入key被{sort_key[0]}key覆盖控制参数为auto_key")
return sort_key[0]
elif not os.path.exists(f"{pathlib.Path(self.model_path).parent}/spk_map.json"):
print("config缺少f0_staic无法使用自动变调可通过infer_tools/data_static添加仅单人模型支持")
return key
def getitem(item):
max_frames = hparams['max_frames']
spec = torch.Tensor(item['mel'])[:max_frames]
mel2ph = torch.LongTensor(item['mel2ph'])[:max_frames] if 'mel2ph' in item else None
f0, uv = norm_interp_f0(item["f0"][:max_frames], hparams)
hubert = torch.Tensor(item['hubert'][:hparams['max_input_tokens']])
pitch = torch.LongTensor(item.get("pitch"))[:max_frames]
sample = {
"id": item['id'],
"spk_id": item['spk_id'],
"item_name": item['item_name'],
"hubert": hubert,
"mel": spec,
"pitch": pitch,
"f0": f0,
"uv": uv,
"mel2ph": mel2ph,
"mel_nonpadding": spec.abs().sum(-1) > 0,
}
if hparams['use_energy_embed']:
sample['energy'] = item['energy']
return sample
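A small sketch of the standalone helpers above: fill_a_to_b pads the shorter list with its first element, and format_wav writes a .wav copy next to a non-wav input (the .flac path below is a placeholder).

from infer_tools.infer_tool import fill_a_to_b, format_wav

trans = [0]
names = ["a.wav", "b.wav", "c.wav"]
fill_a_to_b(trans, names)
print(trans)                      # [0, 0, 0]
format_wav("./raw/example.flac")  # writes ./raw/example.wav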

142
infer_tools/slicer.py Normal file
View File

@ -0,0 +1,142 @@
import librosa
import torch
import torchaudio
class Slicer:
def __init__(self,
sr: int,
threshold: float = -40.,
min_length: int = 5000,
min_interval: int = 300,
hop_size: int = 20,
max_sil_kept: int = 5000):
if not min_length >= min_interval >= hop_size:
raise ValueError('The following condition must be satisfied: min_length >= min_interval >= hop_size')
if not max_sil_kept >= hop_size:
raise ValueError('The following condition must be satisfied: max_sil_kept >= hop_size')
min_interval = sr * min_interval / 1000
self.threshold = 10 ** (threshold / 20.)
self.hop_size = round(sr * hop_size / 1000)
self.win_size = min(round(min_interval), 4 * self.hop_size)
self.min_length = round(sr * min_length / 1000 / self.hop_size)
self.min_interval = round(min_interval / self.hop_size)
self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size)
def _apply_slice(self, waveform, begin, end):
if len(waveform.shape) > 1:
return waveform[:, begin * self.hop_size: min(waveform.shape[1], end * self.hop_size)]
else:
return waveform[begin * self.hop_size: min(waveform.shape[0], end * self.hop_size)]
# @timeit
def slice(self, waveform):
if len(waveform.shape) > 1:
samples = librosa.to_mono(waveform)
else:
samples = waveform
if samples.shape[0] <= self.min_length:
return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}}
rms_list = librosa.feature.rms(y=samples, frame_length=self.win_size, hop_length=self.hop_size).squeeze(0)
sil_tags = []
silence_start = None
clip_start = 0
for i, rms in enumerate(rms_list):
# Keep looping while frame is silent.
if rms < self.threshold:
# Record start of silent frames.
if silence_start is None:
silence_start = i
continue
# Keep looping while frame is not silent and silence start has not been recorded.
if silence_start is None:
continue
# Clear recorded silence start if interval is not enough or clip is too short
is_leading_silence = silence_start == 0 and i > self.max_sil_kept
need_slice_middle = i - silence_start >= self.min_interval and i - clip_start >= self.min_length
if not is_leading_silence and not need_slice_middle:
silence_start = None
continue
# Need slicing. Record the range of silent frames to be removed.
if i - silence_start <= self.max_sil_kept:
pos = rms_list[silence_start: i + 1].argmin() + silence_start
if silence_start == 0:
sil_tags.append((0, pos))
else:
sil_tags.append((pos, pos))
clip_start = pos
elif i - silence_start <= self.max_sil_kept * 2:
pos = rms_list[i - self.max_sil_kept: silence_start + self.max_sil_kept + 1].argmin()
pos += i - self.max_sil_kept
pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start
pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept
if silence_start == 0:
sil_tags.append((0, pos_r))
clip_start = pos_r
else:
sil_tags.append((min(pos_l, pos), max(pos_r, pos)))
clip_start = max(pos_r, pos)
else:
pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start
pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept
if silence_start == 0:
sil_tags.append((0, pos_r))
else:
sil_tags.append((pos_l, pos_r))
clip_start = pos_r
silence_start = None
# Deal with trailing silence.
total_frames = rms_list.shape[0]
if silence_start is not None and total_frames - silence_start >= self.min_interval:
silence_end = min(total_frames, silence_start + self.max_sil_kept)
pos = rms_list[silence_start: silence_end + 1].argmin() + silence_start
sil_tags.append((pos, total_frames + 1))
# Apply and return slices.
if len(sil_tags) == 0:
return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}}
else:
chunks = []
# The first silent segment does not start at the beginning, so prepend the leading voiced chunk
if sil_tags[0][0]:
chunks.append(
{"slice": False, "split_time": f"0,{min(waveform.shape[0], sil_tags[0][0] * self.hop_size)}"})
for i in range(0, len(sil_tags)):
# Mark voiced chunks (skipped for the first segment)
if i:
chunks.append({"slice": False,
"split_time": f"{sil_tags[i - 1][1] * self.hop_size},{min(waveform.shape[0], sil_tags[i][0] * self.hop_size)}"})
# Mark every silent chunk
chunks.append({"slice": True,
"split_time": f"{sil_tags[i][0] * self.hop_size},{min(waveform.shape[0], sil_tags[i][1] * self.hop_size)}"})
# The last silent segment does not reach the end, so append the trailing chunk
if sil_tags[-1][1] * self.hop_size < len(waveform):
chunks.append({"slice": False, "split_time": f"{sil_tags[-1][1] * self.hop_size},{len(waveform)}"})
chunk_dict = {}
for i in range(len(chunks)):
chunk_dict[str(i)] = chunks[i]
return chunk_dict
def cut(audio_path, db_thresh=-30, min_len=5000):
audio, sr = librosa.load(audio_path, sr=None)
slicer = Slicer(
sr=sr,
threshold=db_thresh,
min_length=min_len
)
chunks = slicer.slice(audio)
return chunks
def chunks2audio(audio_path, chunks):
chunks = dict(chunks)
audio, sr = torchaudio.load(audio_path)
if len(audio.shape) == 2 and audio.shape[1] >= 2:
audio = torch.mean(audio, dim=0).unsqueeze(0)
audio = audio.cpu().numpy()[0]
result = []
for k, v in chunks.items():
tag = v["split_time"].split(",")
if tag[0] != tag[1]:
result.append((v["slice"], audio[int(tag[0]):int(tag[1])]))
return result, sr
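A minimal usage sketch for the module-level helpers (the input path is a placeholder): cut() returns a dict of voiced/silent chunks and chunks2audio() turns them back into numpy segments, mirroring how infer.py consumes them.

from infer_tools import slicer

chunks = slicer.cut("./raw/example.wav", db_thresh=-40)
audio_data, sr = slicer.chunks2audio("./raw/example.wav", chunks)
for is_silence, segment in audio_data:
    print(is_silence, round(len(segment) / sr, 3), "s")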

675
modules/commons/common_layers.py Normal file
View File

@ -0,0 +1,675 @@
import math
import torch
import torch.nn.functional as F
import torch.onnx.operators
from torch import nn
from torch.nn import Parameter
import utils
class Reshape(nn.Module):
def __init__(self, *args):
super(Reshape, self).__init__()
self.shape = args
def forward(self, x):
return x.view(self.shape)
class Permute(nn.Module):
def __init__(self, *args):
super(Permute, self).__init__()
self.args = args
def forward(self, x):
return x.permute(self.args)
class LinearNorm(torch.nn.Module):
def __init__(self, in_dim, out_dim, bias=True, w_init_gain='linear'):
super(LinearNorm, self).__init__()
self.linear_layer = torch.nn.Linear(in_dim, out_dim, bias=bias)
torch.nn.init.xavier_uniform_(
self.linear_layer.weight,
gain=torch.nn.init.calculate_gain(w_init_gain))
def forward(self, x):
return self.linear_layer(x)
class ConvNorm(torch.nn.Module):
def __init__(self, in_channels, out_channels, kernel_size=1, stride=1,
padding=None, dilation=1, bias=True, w_init_gain='linear'):
super(ConvNorm, self).__init__()
if padding is None:
assert (kernel_size % 2 == 1)
padding = int(dilation * (kernel_size - 1) / 2)
self.conv = torch.nn.Conv1d(in_channels, out_channels,
kernel_size=kernel_size, stride=stride,
padding=padding, dilation=dilation,
bias=bias)
torch.nn.init.xavier_uniform_(
self.conv.weight, gain=torch.nn.init.calculate_gain(w_init_gain))
def forward(self, signal):
conv_signal = self.conv(signal)
return conv_signal
def Embedding(num_embeddings, embedding_dim, padding_idx=None):
m = nn.Embedding(num_embeddings, embedding_dim, padding_idx=padding_idx)
nn.init.normal_(m.weight, mean=0, std=embedding_dim ** -0.5)
if padding_idx is not None:
nn.init.constant_(m.weight[padding_idx], 0)
return m
def LayerNorm(normalized_shape, eps=1e-5, elementwise_affine=True, export=False):
if not export and torch.cuda.is_available():
try:
from apex.normalization import FusedLayerNorm
return FusedLayerNorm(normalized_shape, eps, elementwise_affine)
except ImportError:
pass
return torch.nn.LayerNorm(normalized_shape, eps, elementwise_affine)
def Linear(in_features, out_features, bias=True):
m = nn.Linear(in_features, out_features, bias)
nn.init.xavier_uniform_(m.weight)
if bias:
nn.init.constant_(m.bias, 0.)
return m
class SinusoidalPositionalEmbedding(nn.Module):
"""This module produces sinusoidal positional embeddings of any length.
Padding symbols are ignored.
"""
def __init__(self, embedding_dim, padding_idx, init_size=1024):
super().__init__()
self.embedding_dim = embedding_dim
self.padding_idx = padding_idx
self.weights = SinusoidalPositionalEmbedding.get_embedding(
init_size,
embedding_dim,
padding_idx,
)
self.register_buffer('_float_tensor', torch.FloatTensor(1))
@staticmethod
def get_embedding(num_embeddings, embedding_dim, padding_idx=None):
"""Build sinusoidal embeddings.
This matches the implementation in tensor2tensor, but differs slightly
from the description in Section 3.5 of "Attention Is All You Need".
"""
half_dim = embedding_dim // 2
emb = math.log(10000) / (half_dim - 1)
emb = torch.exp(torch.arange(half_dim, dtype=torch.float) * -emb)
emb = torch.arange(num_embeddings, dtype=torch.float).unsqueeze(1) * emb.unsqueeze(0)
emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1).view(num_embeddings, -1)
if embedding_dim % 2 == 1:
# zero pad
emb = torch.cat([emb, torch.zeros(num_embeddings, 1)], dim=1)
if padding_idx is not None:
emb[padding_idx, :] = 0
return emb
def forward(self, input, incremental_state=None, timestep=None, positions=None, **kwargs):
"""Input is expected to be of size [bsz x seqlen]."""
bsz, seq_len = input.shape[:2]
max_pos = self.padding_idx + 1 + seq_len
if self.weights is None or max_pos > self.weights.size(0):
# recompute/expand embeddings if needed
self.weights = SinusoidalPositionalEmbedding.get_embedding(
max_pos,
self.embedding_dim,
self.padding_idx,
)
self.weights = self.weights.to(self._float_tensor)
if incremental_state is not None:
# positions is the same for every token when decoding a single step
pos = timestep.view(-1)[0] + 1 if timestep is not None else seq_len
return self.weights[self.padding_idx + pos, :].expand(bsz, 1, -1)
positions = utils.make_positions(input, self.padding_idx) if positions is None else positions
return self.weights.index_select(0, positions.view(-1)).view(bsz, seq_len, -1).detach()
def max_positions(self):
"""Maximum number of supported positions."""
return int(1e5) # an arbitrary large number
class ConvTBC(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, padding=0):
super(ConvTBC, self).__init__()
self.in_channels = in_channels
self.out_channels = out_channels
self.kernel_size = kernel_size
self.padding = padding
self.weight = torch.nn.Parameter(torch.Tensor(
self.kernel_size, in_channels, out_channels))
self.bias = torch.nn.Parameter(torch.Tensor(out_channels))
def forward(self, input):
return torch.conv_tbc(input.contiguous(), self.weight, self.bias, self.padding)
class MultiheadAttention(nn.Module):
def __init__(self, embed_dim, num_heads, kdim=None, vdim=None, dropout=0., bias=True,
add_bias_kv=False, add_zero_attn=False, self_attention=False,
encoder_decoder_attention=False):
super().__init__()
self.embed_dim = embed_dim
self.kdim = kdim if kdim is not None else embed_dim
self.vdim = vdim if vdim is not None else embed_dim
self.qkv_same_dim = self.kdim == embed_dim and self.vdim == embed_dim
self.num_heads = num_heads
self.dropout = dropout
self.head_dim = embed_dim // num_heads
assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads"
self.scaling = self.head_dim ** -0.5
self.self_attention = self_attention
self.encoder_decoder_attention = encoder_decoder_attention
assert not self.self_attention or self.qkv_same_dim, 'Self-attention requires query, key and ' \
'value to be of the same size'
if self.qkv_same_dim:
self.in_proj_weight = Parameter(torch.Tensor(3 * embed_dim, embed_dim))
else:
self.k_proj_weight = Parameter(torch.Tensor(embed_dim, self.kdim))
self.v_proj_weight = Parameter(torch.Tensor(embed_dim, self.vdim))
self.q_proj_weight = Parameter(torch.Tensor(embed_dim, embed_dim))
if bias:
self.in_proj_bias = Parameter(torch.Tensor(3 * embed_dim))
else:
self.register_parameter('in_proj_bias', None)
self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
if add_bias_kv:
self.bias_k = Parameter(torch.Tensor(1, 1, embed_dim))
self.bias_v = Parameter(torch.Tensor(1, 1, embed_dim))
else:
self.bias_k = self.bias_v = None
self.add_zero_attn = add_zero_attn
self.reset_parameters()
self.enable_torch_version = False
if hasattr(F, "multi_head_attention_forward"):
self.enable_torch_version = True
else:
self.enable_torch_version = False
self.last_attn_probs = None
def reset_parameters(self):
if self.qkv_same_dim:
nn.init.xavier_uniform_(self.in_proj_weight)
else:
nn.init.xavier_uniform_(self.k_proj_weight)
nn.init.xavier_uniform_(self.v_proj_weight)
nn.init.xavier_uniform_(self.q_proj_weight)
nn.init.xavier_uniform_(self.out_proj.weight)
if self.in_proj_bias is not None:
nn.init.constant_(self.in_proj_bias, 0.)
nn.init.constant_(self.out_proj.bias, 0.)
if self.bias_k is not None:
nn.init.xavier_normal_(self.bias_k)
if self.bias_v is not None:
nn.init.xavier_normal_(self.bias_v)
def forward(
self,
query, key, value,
key_padding_mask=None,
incremental_state=None,
need_weights=True,
static_kv=False,
attn_mask=None,
before_softmax=False,
need_head_weights=False,
enc_dec_attn_constraint_mask=None,
reset_attn_weight=None
):
"""Input shape: Time x Batch x Channel
Args:
key_padding_mask (ByteTensor, optional): mask to exclude
keys that are pads, of shape `(batch, src_len)`, where
padding elements are indicated by 1s.
need_weights (bool, optional): return the attention weights,
averaged over heads (default: False).
attn_mask (ByteTensor, optional): typically used to
implement causal attention, where the mask prevents the
attention from looking forward in time (default: None).
before_softmax (bool, optional): return the raw attention
weights and values before the attention softmax.
need_head_weights (bool, optional): return the attention
weights for each head. Implies *need_weights*. Default:
return the average attention weights over all heads.
"""
if need_head_weights:
need_weights = True
tgt_len, bsz, embed_dim = query.size()
assert embed_dim == self.embed_dim
assert list(query.size()) == [tgt_len, bsz, embed_dim]
if self.enable_torch_version and incremental_state is None and not static_kv and reset_attn_weight is None:
if self.qkv_same_dim:
return F.multi_head_attention_forward(query, key, value,
self.embed_dim, self.num_heads,
self.in_proj_weight,
self.in_proj_bias, self.bias_k, self.bias_v,
self.add_zero_attn, self.dropout,
self.out_proj.weight, self.out_proj.bias,
self.training, key_padding_mask, need_weights,
attn_mask)
else:
return F.multi_head_attention_forward(query, key, value,
self.embed_dim, self.num_heads,
torch.empty([0]),
self.in_proj_bias, self.bias_k, self.bias_v,
self.add_zero_attn, self.dropout,
self.out_proj.weight, self.out_proj.bias,
self.training, key_padding_mask, need_weights,
attn_mask, use_separate_proj_weight=True,
q_proj_weight=self.q_proj_weight,
k_proj_weight=self.k_proj_weight,
v_proj_weight=self.v_proj_weight)
if incremental_state is not None:
print('Not implemented error.')
exit()
else:
saved_state = None
if self.self_attention:
# self-attention
q, k, v = self.in_proj_qkv(query)
elif self.encoder_decoder_attention:
# encoder-decoder attention
q = self.in_proj_q(query)
if key is None:
assert value is None
k = v = None
else:
k = self.in_proj_k(key)
v = self.in_proj_v(key)
else:
q = self.in_proj_q(query)
k = self.in_proj_k(key)
v = self.in_proj_v(value)
q *= self.scaling
if self.bias_k is not None:
assert self.bias_v is not None
k = torch.cat([k, self.bias_k.repeat(1, bsz, 1)])
v = torch.cat([v, self.bias_v.repeat(1, bsz, 1)])
if attn_mask is not None:
attn_mask = torch.cat([attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1)
if key_padding_mask is not None:
key_padding_mask = torch.cat(
[key_padding_mask, key_padding_mask.new_zeros(key_padding_mask.size(0), 1)], dim=1)
q = q.contiguous().view(tgt_len, bsz * self.num_heads, self.head_dim).transpose(0, 1)
if k is not None:
k = k.contiguous().view(-1, bsz * self.num_heads, self.head_dim).transpose(0, 1)
if v is not None:
v = v.contiguous().view(-1, bsz * self.num_heads, self.head_dim).transpose(0, 1)
if saved_state is not None:
print('Not implemented error.')
exit()
src_len = k.size(1)
# This is part of a workaround to get around fork/join parallelism
# not supporting Optional types.
if key_padding_mask is not None and key_padding_mask.shape == torch.Size([]):
key_padding_mask = None
if key_padding_mask is not None:
assert key_padding_mask.size(0) == bsz
assert key_padding_mask.size(1) == src_len
if self.add_zero_attn:
src_len += 1
k = torch.cat([k, k.new_zeros((k.size(0), 1) + k.size()[2:])], dim=1)
v = torch.cat([v, v.new_zeros((v.size(0), 1) + v.size()[2:])], dim=1)
if attn_mask is not None:
attn_mask = torch.cat([attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1)
if key_padding_mask is not None:
key_padding_mask = torch.cat(
[key_padding_mask, torch.zeros(key_padding_mask.size(0), 1).type_as(key_padding_mask)], dim=1)
attn_weights = torch.bmm(q, k.transpose(1, 2))
attn_weights = self.apply_sparse_mask(attn_weights, tgt_len, src_len, bsz)
assert list(attn_weights.size()) == [bsz * self.num_heads, tgt_len, src_len]
if attn_mask is not None:
if len(attn_mask.shape) == 2:
attn_mask = attn_mask.unsqueeze(0)
elif len(attn_mask.shape) == 3:
attn_mask = attn_mask[:, None].repeat([1, self.num_heads, 1, 1]).reshape(
bsz * self.num_heads, tgt_len, src_len)
attn_weights = attn_weights + attn_mask
if enc_dec_attn_constraint_mask is not None: # bs x head x L_kv
attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
attn_weights = attn_weights.masked_fill(
enc_dec_attn_constraint_mask.unsqueeze(2).bool(),
-1e9,
)
attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
if key_padding_mask is not None:
# don't attend to padding symbols
attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
attn_weights = attn_weights.masked_fill(
key_padding_mask.unsqueeze(1).unsqueeze(2),
-1e9,
)
attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
attn_logits = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
if before_softmax:
return attn_weights, v
attn_weights_float = utils.softmax(attn_weights, dim=-1)
attn_weights = attn_weights_float.type_as(attn_weights)
attn_probs = F.dropout(attn_weights_float.type_as(attn_weights), p=self.dropout, training=self.training)
if reset_attn_weight is not None:
if reset_attn_weight:
self.last_attn_probs = attn_probs.detach()
else:
assert self.last_attn_probs is not None
attn_probs = self.last_attn_probs
attn = torch.bmm(attn_probs, v)
assert list(attn.size()) == [bsz * self.num_heads, tgt_len, self.head_dim]
attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim)
attn = self.out_proj(attn)
if need_weights:
attn_weights = attn_weights_float.view(bsz, self.num_heads, tgt_len, src_len).transpose(1, 0)
if not need_head_weights:
# average attention weights over heads
attn_weights = attn_weights.mean(dim=0)
else:
attn_weights = None
return attn, (attn_weights, attn_logits)
def in_proj_qkv(self, query):
return self._in_proj(query).chunk(3, dim=-1)
def in_proj_q(self, query):
if self.qkv_same_dim:
return self._in_proj(query, end=self.embed_dim)
else:
bias = self.in_proj_bias
if bias is not None:
bias = bias[:self.embed_dim]
return F.linear(query, self.q_proj_weight, bias)
def in_proj_k(self, key):
if self.qkv_same_dim:
return self._in_proj(key, start=self.embed_dim, end=2 * self.embed_dim)
else:
weight = self.k_proj_weight
bias = self.in_proj_bias
if bias is not None:
bias = bias[self.embed_dim:2 * self.embed_dim]
return F.linear(key, weight, bias)
def in_proj_v(self, value):
if self.qkv_same_dim:
return self._in_proj(value, start=2 * self.embed_dim)
else:
weight = self.v_proj_weight
bias = self.in_proj_bias
if bias is not None:
bias = bias[2 * self.embed_dim:]
return F.linear(value, weight, bias)
def _in_proj(self, input, start=0, end=None):
weight = self.in_proj_weight
bias = self.in_proj_bias
weight = weight[start:end, :]
if bias is not None:
bias = bias[start:end]
return F.linear(input, weight, bias)
def apply_sparse_mask(self, attn_weights, tgt_len, src_len, bsz):
return attn_weights
class Swish(torch.autograd.Function):
@staticmethod
def forward(ctx, i):
result = i * torch.sigmoid(i)
ctx.save_for_backward(i)
return result
@staticmethod
def backward(ctx, grad_output):
i = ctx.saved_variables[0]
sigmoid_i = torch.sigmoid(i)
return grad_output * (sigmoid_i * (1 + i * (1 - sigmoid_i)))
class CustomSwish(nn.Module):
def forward(self, input_tensor):
return Swish.apply(input_tensor)
class Mish(nn.Module):
def forward(self, x):
return x * torch.tanh(F.softplus(x))
class TransformerFFNLayer(nn.Module):
def __init__(self, hidden_size, filter_size, padding="SAME", kernel_size=1, dropout=0., act='gelu'):
super().__init__()
self.kernel_size = kernel_size
self.dropout = dropout
self.act = act
if padding == 'SAME':
self.ffn_1 = nn.Conv1d(hidden_size, filter_size, kernel_size, padding=kernel_size // 2)
elif padding == 'LEFT':
self.ffn_1 = nn.Sequential(
nn.ConstantPad1d((kernel_size - 1, 0), 0.0),
nn.Conv1d(hidden_size, filter_size, kernel_size)
)
self.ffn_2 = Linear(filter_size, hidden_size)
if self.act == 'swish':
self.swish_fn = CustomSwish()
def forward(self, x, incremental_state=None):
# x: T x B x C
if incremental_state is not None:
assert incremental_state is None, 'Nar-generation does not allow this.'
exit(1)
x = self.ffn_1(x.permute(1, 2, 0)).permute(2, 0, 1)
x = x * self.kernel_size ** -0.5
if incremental_state is not None:
x = x[-1:]
if self.act == 'gelu':
x = F.gelu(x)
if self.act == 'relu':
x = F.relu(x)
if self.act == 'swish':
x = self.swish_fn(x)
x = F.dropout(x, self.dropout, training=self.training)
x = self.ffn_2(x)
return x
class BatchNorm1dTBC(nn.Module):
def __init__(self, c):
super(BatchNorm1dTBC, self).__init__()
self.bn = nn.BatchNorm1d(c)
def forward(self, x):
"""
:param x: [T, B, C]
:return: [T, B, C]
"""
x = x.permute(1, 2, 0) # [B, C, T]
x = self.bn(x) # [B, C, T]
x = x.permute(2, 0, 1) # [T, B, C]
return x
class EncSALayer(nn.Module):
def __init__(self, c, num_heads, dropout, attention_dropout=0.1,
relu_dropout=0.1, kernel_size=9, padding='SAME', norm='ln', act='gelu'):
super().__init__()
self.c = c
self.dropout = dropout
self.num_heads = num_heads
if num_heads > 0:
if norm == 'ln':
self.layer_norm1 = LayerNorm(c)
elif norm == 'bn':
self.layer_norm1 = BatchNorm1dTBC(c)
self.self_attn = MultiheadAttention(
self.c, num_heads, self_attention=True, dropout=attention_dropout, bias=False,
)
if norm == 'ln':
self.layer_norm2 = LayerNorm(c)
elif norm == 'bn':
self.layer_norm2 = BatchNorm1dTBC(c)
self.ffn = TransformerFFNLayer(
c, 4 * c, kernel_size=kernel_size, dropout=relu_dropout, padding=padding, act=act)
def forward(self, x, encoder_padding_mask=None, **kwargs):
layer_norm_training = kwargs.get('layer_norm_training', None)
if layer_norm_training is not None:
self.layer_norm1.training = layer_norm_training
self.layer_norm2.training = layer_norm_training
if self.num_heads > 0:
residual = x
x = self.layer_norm1(x)
x, _, = self.self_attn(
query=x,
key=x,
value=x,
key_padding_mask=encoder_padding_mask
)
x = F.dropout(x, self.dropout, training=self.training)
x = residual + x
x = x * (1 - encoder_padding_mask.float()).transpose(0, 1)[..., None]
residual = x
x = self.layer_norm2(x)
x = self.ffn(x)
x = F.dropout(x, self.dropout, training=self.training)
x = residual + x
x = x * (1 - encoder_padding_mask.float()).transpose(0, 1)[..., None]
return x
class DecSALayer(nn.Module):
def __init__(self, c, num_heads, dropout, attention_dropout=0.1, relu_dropout=0.1, kernel_size=9, act='gelu'):
super().__init__()
self.c = c
self.dropout = dropout
self.layer_norm1 = LayerNorm(c)
self.self_attn = MultiheadAttention(
c, num_heads, self_attention=True, dropout=attention_dropout, bias=False
)
self.layer_norm2 = LayerNorm(c)
self.encoder_attn = MultiheadAttention(
c, num_heads, encoder_decoder_attention=True, dropout=attention_dropout, bias=False,
)
self.layer_norm3 = LayerNorm(c)
self.ffn = TransformerFFNLayer(
c, 4 * c, padding='LEFT', kernel_size=kernel_size, dropout=relu_dropout, act=act)
def forward(
self,
x,
encoder_out=None,
encoder_padding_mask=None,
incremental_state=None,
self_attn_mask=None,
self_attn_padding_mask=None,
attn_out=None,
reset_attn_weight=None,
**kwargs,
):
layer_norm_training = kwargs.get('layer_norm_training', None)
if layer_norm_training is not None:
self.layer_norm1.training = layer_norm_training
self.layer_norm2.training = layer_norm_training
self.layer_norm3.training = layer_norm_training
residual = x
x = self.layer_norm1(x)
x, _ = self.self_attn(
query=x,
key=x,
value=x,
key_padding_mask=self_attn_padding_mask,
incremental_state=incremental_state,
attn_mask=self_attn_mask
)
x = F.dropout(x, self.dropout, training=self.training)
x = residual + x
residual = x
x = self.layer_norm2(x)
if encoder_out is not None:
x, attn = self.encoder_attn(
query=x,
key=encoder_out,
value=encoder_out,
key_padding_mask=encoder_padding_mask,
incremental_state=incremental_state,
static_kv=True,
enc_dec_attn_constraint_mask=None,
# utils.get_incremental_state(self, incremental_state, 'enc_dec_attn_constraint_mask'),
reset_attn_weight=reset_attn_weight
)
attn_logits = attn[1]
else:
assert attn_out is not None
x = self.encoder_attn.in_proj_v(attn_out.transpose(0, 1))
attn_logits = None
x = F.dropout(x, self.dropout, training=self.training)
x = residual + x
residual = x
x = self.layer_norm3(x)
x = self.ffn(x, incremental_state=incremental_state)
x = F.dropout(x, self.dropout, training=self.training)
x = residual + x
# if len(attn_logits.size()) > 3:
# indices = attn_logits.softmax(-1).max(-1).values.sum(-1).argmax(-1)
# attn_logits = attn_logits.gather(1,
# indices[:, None, None, None].repeat(1, 1, attn_logits.size(-2), attn_logits.size(-1))).squeeze(1)
return x, attn_logits
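A shape sketch for the sinusoidal table built above (assuming SinusoidalPositionalEmbedding from this file is in scope): get_embedding returns one row per position, and the padding row is zeroed out when padding_idx is given.

import torch

emb = SinusoidalPositionalEmbedding.get_embedding(8, 16, padding_idx=0)
print(emb.shape)                  # torch.Size([8, 16])
print(float(emb[0].abs().sum()))  # 0.0 -- the padding row is zeroed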

84
modules/commons/ssim.py Normal file
View File

@ -0,0 +1,84 @@
"""
Adapted from https://github.com/Po-Hsun-Su/pytorch-ssim
"""
from math import exp
import torch
import torch.nn.functional as F
from torch.autograd import Variable
def gaussian(window_size, sigma):
gauss = torch.Tensor([exp(-(x - window_size // 2) ** 2 / float(2 * sigma ** 2)) for x in range(window_size)])
return gauss / gauss.sum()
def create_window(window_size, channel):
_1D_window = gaussian(window_size, 1.5).unsqueeze(1)
_2D_window = _1D_window.mm(_1D_window.t()).float().unsqueeze(0).unsqueeze(0)
window = Variable(_2D_window.expand(channel, 1, window_size, window_size).contiguous())
return window
def _ssim(img1, img2, window, window_size, channel, size_average=True):
mu1 = F.conv2d(img1, window, padding=window_size // 2, groups=channel)
mu2 = F.conv2d(img2, window, padding=window_size // 2, groups=channel)
mu1_sq = mu1.pow(2)
mu2_sq = mu2.pow(2)
mu1_mu2 = mu1 * mu2
sigma1_sq = F.conv2d(img1 * img1, window, padding=window_size // 2, groups=channel) - mu1_sq
sigma2_sq = F.conv2d(img2 * img2, window, padding=window_size // 2, groups=channel) - mu2_sq
sigma12 = F.conv2d(img1 * img2, window, padding=window_size // 2, groups=channel) - mu1_mu2
C1 = 0.01 ** 2
C2 = 0.03 ** 2
ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) * (sigma1_sq + sigma2_sq + C2))
if size_average:
return ssim_map.mean()
else:
return ssim_map.mean(1)
class SSIM(torch.nn.Module):
def __init__(self, window_size=11, size_average=True):
super(SSIM, self).__init__()
self.window_size = window_size
self.size_average = size_average
self.channel = 1
self.window = create_window(window_size, self.channel)
def forward(self, img1, img2):
(_, channel, _, _) = img1.size()
if channel == self.channel and self.window.data.type() == img1.data.type():
window = self.window
else:
window = create_window(self.window_size, channel)
if img1.is_cuda:
window = window.cuda(img1.get_device())
window = window.type_as(img1)
self.window = window
self.channel = channel
return _ssim(img1, img2, window, self.window_size, channel, self.size_average)
window = None
def ssim(img1, img2, window_size=11, size_average=True):
(_, channel, _, _) = img1.size()
global window
if window is None:
window = create_window(window_size, channel)
if img1.is_cuda:
window = window.cuda(img1.get_device())
window = window.type_as(img1)
return _ssim(img1, img2, window, window_size, channel, size_average)
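A quick check of the functional form above (a sketch; inputs are [B, C, H, W] tensors and ssim from this file is assumed to be in scope): SSIM of an image with itself is 1, and any mismatch pushes the score below 1.

import torch

img = torch.rand(1, 1, 64, 64)
print(round(float(ssim(img, img)), 4))                # 1.0
print(float(ssim(img, torch.zeros_like(img))) < 1.0)  # True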

257
modules/diff/diffusion.py Normal file
View File

@ -0,0 +1,257 @@
from collections import deque
from functools import partial
from inspect import isfunction
import numpy as np
import torch
import torch.nn.functional as F
from torch import nn
from tqdm import tqdm
from modules.encoder import SvcEncoder
from training.train_pipeline import Batch2Loss
from utils.hparams import hparams
def exists(x):
return x is not None
def default(val, d):
if exists(val):
return val
return d() if isfunction(d) else d
# gaussian diffusion trainer class
def extract(a, t, x_shape):
b, *_ = t.shape
out = a.gather(-1, t)
return out.reshape(b, *((1,) * (len(x_shape) - 1)))
def noise_like(shape, device, repeat=False):
repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
noise = lambda: torch.randn(shape, device=device)
return repeat_noise() if repeat else noise()
def linear_beta_schedule(timesteps, max_beta=hparams.get('max_beta', 0.01)):
"""
linear schedule
"""
betas = np.linspace(1e-4, max_beta, timesteps)
return betas
def cosine_beta_schedule(timesteps, s=0.008):
"""
cosine schedule
as proposed in https://openreview.net/forum?id=-NEXDKk8gZ
"""
steps = timesteps + 1
x = np.linspace(0, steps, steps)
alphas_cumprod = np.cos(((x / steps) + s) / (1 + s) * np.pi * 0.5) ** 2
alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
return np.clip(betas, a_min=0, a_max=0.999)
beta_schedule = {
"cosine": cosine_beta_schedule,
"linear": linear_beta_schedule,
}
class GaussianDiffusion(nn.Module):
def __init__(self, phone_encoder, out_dims, denoise_fn,
timesteps=1000, K_step=1000, loss_type=hparams.get('diff_loss_type', 'l1'), betas=None, spec_min=None,
spec_max=None):
super().__init__()
self.denoise_fn = denoise_fn
self.fs2 = SvcEncoder(phone_encoder, out_dims)
self.mel_bins = out_dims
if exists(betas):
betas = betas.detach().cpu().numpy() if isinstance(betas, torch.Tensor) else betas
else:
if 'schedule_type' in hparams.keys():
betas = beta_schedule[hparams['schedule_type']](timesteps)
else:
betas = cosine_beta_schedule(timesteps)
alphas = 1. - betas
alphas_cumprod = np.cumprod(alphas, axis=0)
alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1])
timesteps, = betas.shape
self.num_timesteps = int(timesteps)
self.K_step = K_step
self.loss_type = loss_type
self.noise_list = deque(maxlen=4)
to_torch = partial(torch.tensor, dtype=torch.float32)
self.register_buffer('betas', to_torch(betas))
self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod))
self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev))
# calculations for diffusion q(x_t | x_{t-1}) and others
self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod)))
self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod)))
self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod)))
self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod)))
self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1)))
# calculations for posterior q(x_{t-1} | x_t, x_0)
posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod)
# above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t)
self.register_buffer('posterior_variance', to_torch(posterior_variance))
# below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain
self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20))))
self.register_buffer('posterior_mean_coef1', to_torch(
betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod)))
self.register_buffer('posterior_mean_coef2', to_torch(
(1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod)))
self.register_buffer('spec_min', torch.FloatTensor(spec_min)[None, None, :hparams['keep_bins']])
self.register_buffer('spec_max', torch.FloatTensor(spec_max)[None, None, :hparams['keep_bins']])
def predict_start_from_noise(self, x_t, t, noise):
return (
extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t -
extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise
)
def q_posterior(self, x_start, x_t, t):
posterior_mean = (
extract(self.posterior_mean_coef1, t, x_t.shape) * x_start +
extract(self.posterior_mean_coef2, t, x_t.shape) * x_t
)
posterior_variance = extract(self.posterior_variance, t, x_t.shape)
posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape)
return posterior_mean, posterior_variance, posterior_log_variance_clipped
def p_mean_variance(self, x, t, cond, clip_denoised: bool):
noise_pred = self.denoise_fn(x, t, cond=cond)
x_recon = self.predict_start_from_noise(x, t=t, noise=noise_pred)
if clip_denoised:
x_recon.clamp_(-1., 1.)
model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t)
return model_mean, posterior_variance, posterior_log_variance
@torch.no_grad()
def p_sample(self, x, t, cond, clip_denoised=True, repeat_noise=False):
b, *_, device = *x.shape, x.device
model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, cond=cond, clip_denoised=clip_denoised)
noise = noise_like(x.shape, device, repeat_noise)
# no noise when t == 0
nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1)))
return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise
@torch.no_grad()
def p_sample_plms(self, x, t, interval, cond):
"""
Use the PLMS method from [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778).
"""
def get_x_pred(x, noise_t, t):
a_t = extract(self.alphas_cumprod, t, x.shape)
a_prev = extract(self.alphas_cumprod, torch.max(t - interval, torch.zeros_like(t)), x.shape)
a_t_sq, a_prev_sq = a_t.sqrt(), a_prev.sqrt()
x_delta = (a_prev - a_t) * ((1 / (a_t_sq * (a_t_sq + a_prev_sq))) * x - 1 / (
a_t_sq * (((1 - a_prev) * a_t).sqrt() + ((1 - a_t) * a_prev).sqrt())) * noise_t)
x_pred = x + x_delta
return x_pred
noise_list = self.noise_list
noise_pred = self.denoise_fn(x, t, cond=cond)
if len(noise_list) == 0:
x_pred = get_x_pred(x, noise_pred, t)
noise_pred_prev = self.denoise_fn(x_pred, max(t - interval, 0), cond=cond)
noise_pred_prime = (noise_pred + noise_pred_prev) / 2
elif len(noise_list) == 1:
noise_pred_prime = (3 * noise_pred - noise_list[-1]) / 2
elif len(noise_list) == 2:
noise_pred_prime = (23 * noise_pred - 16 * noise_list[-1] + 5 * noise_list[-2]) / 12
elif len(noise_list) >= 3:
noise_pred_prime = (55 * noise_pred - 59 * noise_list[-1] + 37 * noise_list[-2] - 9 * noise_list[-3]) / 24
x_prev = get_x_pred(x, noise_pred_prime, t)
noise_list.append(noise_pred)
return x_prev
def q_sample(self, x_start, t, noise=None):
noise = default(noise, lambda: torch.randn_like(x_start))
return (
extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start +
extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise
)
def p_losses(self, x_start, t, cond, noise=None, nonpadding=None):
noise = default(noise, lambda: torch.randn_like(x_start))
x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise)
x_recon = self.denoise_fn(x_noisy, t, cond)
if self.loss_type == 'l1':
if nonpadding is not None:
loss = ((noise - x_recon).abs() * nonpadding.unsqueeze(1)).mean()
else:
# print('are you sure w/o nonpadding?')
loss = (noise - x_recon).abs().mean()
elif self.loss_type == 'l2':
loss = F.mse_loss(noise, x_recon)
else:
raise NotImplementedError()
return loss
def forward(self, hubert, mel2ph=None, spk_embed_id=None, ref_mels=None, f0=None, energy=None, infer=False):
'''
Conditional diffusion: use the FastSpeech2-style encoder output as the condition.
'''
ret = self.fs2(hubert, mel2ph, spk_embed_id, f0, energy)
cond = ret['decoder_inp'].transpose(1, 2)
b, *_, device = *hubert.shape, hubert.device
if not infer:
Batch2Loss.module4(
self.p_losses,
self.norm_spec(ref_mels), cond, ret, self.K_step, b, device
)
else:
t = self.K_step
shape = (cond.shape[0], 1, self.mel_bins, cond.shape[2])
x = torch.randn(shape, device=device)
if hparams.get('pndm_speedup') and hparams['pndm_speedup'] > 1:
self.noise_list = deque(maxlen=4)
iteration_interval = hparams['pndm_speedup']
for i in tqdm(reversed(range(0, t, iteration_interval)), desc='sample time step',
total=t // iteration_interval):
x = self.p_sample_plms(x, torch.full((b,), i, device=device, dtype=torch.long), iteration_interval,
cond)
else:
for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t):
x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond)
x = x[:, 0].transpose(1, 2)
if mel2ph is not None: # for singing
ret['mel_out'] = self.denorm_spec(x) * ((mel2ph > 0).float()[:, :, None])
else:
ret['mel_out'] = self.denorm_spec(x)
return ret
def norm_spec(self, x):
return (x - self.spec_min) / (self.spec_max - self.spec_min) * 2 - 1
def denorm_spec(self, x):
return (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min
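A small comparison sketch for the two beta schedules defined above (max_beta is passed explicitly so the example does not depend on hparams; the functions are assumed to be in scope): both return `timesteps` betas, and the cosine schedule starts smaller near t = 0.

lin = linear_beta_schedule(1000, max_beta=0.01)
cos = cosine_beta_schedule(1000)
print(lin.shape, cos.shape)  # (1000,) (1000,)
print(lin[0], cos[0])        # 1e-4 vs. a smaller cosine-schedule value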

135
modules/diff/net.py Normal file
View File

@ -0,0 +1,135 @@
import math
from math import sqrt
import torch
import torch.nn as nn
import torch.nn.functional as F
from modules.commons.common_layers import Mish
from utils.hparams import hparams
Linear = nn.Linear
ConvTranspose2d = nn.ConvTranspose2d
class AttrDict(dict):
def __init__(self, *args, **kwargs):
super(AttrDict, self).__init__(*args, **kwargs)
self.__dict__ = self
def override(self, attrs):
if isinstance(attrs, dict):
self.__dict__.update(**attrs)
elif isinstance(attrs, (list, tuple, set)):
for attr in attrs:
self.override(attr)
elif attrs is not None:
raise NotImplementedError
return self
class SinusoidalPosEmb(nn.Module):
def __init__(self, dim):
super().__init__()
self.dim = dim
def forward(self, x):
device = x.device
half_dim = self.dim // 2
emb = math.log(10000) / (half_dim - 1)
emb = torch.exp(torch.arange(half_dim, device=device) * -emb)
emb = x[:, None] * emb[None, :]
emb = torch.cat((emb.sin(), emb.cos()), dim=-1)
return emb
def Conv1d(*args, **kwargs):
layer = nn.Conv1d(*args, **kwargs)
nn.init.kaiming_normal_(layer.weight)
return layer
@torch.jit.script
def silu(x):
return x * torch.sigmoid(x)
class ResidualBlock(nn.Module):
def __init__(self, encoder_hidden, residual_channels, dilation):
super().__init__()
self.dilated_conv = Conv1d(residual_channels, 2 * residual_channels, 3, padding=dilation, dilation=dilation)
self.diffusion_projection = Linear(residual_channels, residual_channels)
self.conditioner_projection = Conv1d(encoder_hidden, 2 * residual_channels, 1)
self.output_projection = Conv1d(residual_channels, 2 * residual_channels, 1)
def forward(self, x, conditioner, diffusion_step):
diffusion_step = self.diffusion_projection(diffusion_step).unsqueeze(-1)
conditioner = self.conditioner_projection(conditioner)
y = x + diffusion_step
y = self.dilated_conv(y) + conditioner
gate, filter = torch.chunk(y, 2, dim=1)
# Using torch.split instead of torch.chunk to avoid using onnx::Slice
# gate, filter = torch.split(y, torch.div(y.shape[1], 2), dim=1)
y = torch.sigmoid(gate) * torch.tanh(filter)
y = self.output_projection(y)
residual, skip = torch.chunk(y, 2, dim=1)
# Using torch.split instead of torch.chunk to avoid using onnx::Slice
# residual, skip = torch.split(y, torch.div(y.shape[1], 2), dim=1)
return (x + residual) / sqrt(2.0), skip
class DiffNet(nn.Module):
def __init__(self, in_dims=80):
super().__init__()
self.params = params = AttrDict(
# Model params
encoder_hidden=hparams['hidden_size'],
residual_layers=hparams['residual_layers'],
residual_channels=hparams['residual_channels'],
dilation_cycle_length=hparams['dilation_cycle_length'],
)
self.input_projection = Conv1d(in_dims, params.residual_channels, 1)
self.diffusion_embedding = SinusoidalPosEmb(params.residual_channels)
dim = params.residual_channels
self.mlp = nn.Sequential(
nn.Linear(dim, dim * 4),
Mish(),
nn.Linear(dim * 4, dim)
)
self.residual_layers = nn.ModuleList([
ResidualBlock(params.encoder_hidden, params.residual_channels, 2 ** (i % params.dilation_cycle_length))
for i in range(params.residual_layers)
])
self.skip_projection = Conv1d(params.residual_channels, params.residual_channels, 1)
self.output_projection = Conv1d(params.residual_channels, in_dims, 1)
nn.init.zeros_(self.output_projection.weight)
def forward(self, spec, diffusion_step, cond):
"""
:param spec: [B, 1, M, T]
:param diffusion_step: [B, 1]
:param cond: [B, M, T]
:return:
"""
x = spec[:, 0]
x = self.input_projection(x) # x [B, residual_channel, T]
x = F.relu(x)
diffusion_step = self.diffusion_embedding(diffusion_step)
diffusion_step = self.mlp(diffusion_step)
skip = []
for layer_id, layer in enumerate(self.residual_layers):
x, skip_connection = layer(x, cond, diffusion_step)
skip.append(skip_connection)
x = torch.sum(torch.stack(skip), dim=0) / sqrt(len(self.residual_layers))
x = self.skip_projection(x)
x = F.relu(x)
x = self.output_projection(x) # [B, 80, T]
return x[:, None, :, :]
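A tensor-shape sketch for DiffNet's forward pass. DiffNet reads its sizes from hparams at construction time, so this sketch fills in assumed values by hand (the numbers are illustrative, not taken from this commit's configs) and relies on utils.hparams.hparams being a plain dict, as its usage elsewhere in this commit suggests.

import torch
from utils.hparams import hparams

# Assumed sizes, purely for the shape demo; DiffNet from this file is in scope.
hparams.update({'hidden_size': 256, 'residual_layers': 20,
                'residual_channels': 384, 'dilation_cycle_length': 4})
net = DiffNet(in_dims=80)
spec = torch.randn(2, 1, 80, 100)    # [B, 1, M, T] noisy mel
step = torch.randint(0, 1000, (2,))  # one diffusion step index per batch item
cond = torch.randn(2, 256, 100)      # [B, hidden_size, T] encoder condition
print(net(spec, step, cond).shape)   # torch.Size([2, 1, 80, 100])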

157
modules/encoder.py Normal file
View File

@ -0,0 +1,157 @@
import torch
from modules.commons.common_layers import *
from modules.commons.common_layers import Embedding
from modules.commons.common_layers import SinusoidalPositionalEmbedding
from utils.hparams import hparams
from utils.pitch_utils import f0_to_coarse, denorm_f0
class LayerNorm(torch.nn.LayerNorm):
"""Layer normalization module.
:param int nout: output dim size
:param int dim: dimension to be normalized
"""
def __init__(self, nout, dim=-1):
"""Construct an LayerNorm object."""
super(LayerNorm, self).__init__(nout, eps=1e-12)
self.dim = dim
def forward(self, x):
"""Apply layer normalization.
:param torch.Tensor x: input tensor
:return: layer normalized tensor
:rtype torch.Tensor
"""
if self.dim == -1:
return super(LayerNorm, self).forward(x)
return super(LayerNorm, self).forward(x.transpose(1, -1)).transpose(1, -1)
class PitchPredictor(torch.nn.Module):
def __init__(self, idim, n_layers=5, n_chans=384, odim=2, kernel_size=5,
dropout_rate=0.1, padding='SAME'):
"""Initilize pitch predictor module.
Args:
idim (int): Input dimension.
n_layers (int, optional): Number of convolutional layers.
n_chans (int, optional): Number of channels of convolutional layers.
kernel_size (int, optional): Kernel size of convolutional layers.
dropout_rate (float, optional): Dropout rate.
"""
super(PitchPredictor, self).__init__()
self.conv = torch.nn.ModuleList()
self.kernel_size = kernel_size
self.padding = padding
for idx in range(n_layers):
in_chans = idim if idx == 0 else n_chans
self.conv += [torch.nn.Sequential(
torch.nn.ConstantPad1d(((kernel_size - 1) // 2, (kernel_size - 1) // 2)
if padding == 'SAME'
else (kernel_size - 1, 0), 0),
torch.nn.Conv1d(in_chans, n_chans, kernel_size, stride=1, padding=0),
torch.nn.ReLU(),
LayerNorm(n_chans, dim=1),
torch.nn.Dropout(dropout_rate)
)]
self.linear = torch.nn.Linear(n_chans, odim)
self.embed_positions = SinusoidalPositionalEmbedding(idim, 0, init_size=4096)
self.pos_embed_alpha = nn.Parameter(torch.Tensor([1]))
def forward(self, xs):
"""
:param xs: [B, T, H]
:return: [B, T, H]
"""
positions = self.pos_embed_alpha * self.embed_positions(xs[..., 0])
xs = xs + positions
xs = xs.transpose(1, -1) # (B, idim, Tmax)
for f in self.conv:
xs = f(xs) # (B, C, Tmax)
# NOTE: calculate in log domain
xs = self.linear(xs.transpose(1, -1)) # (B, Tmax, H)
return xs
class SvcEncoder(nn.Module):
def __init__(self, dictionary, out_dims=None):
super().__init__()
# self.dictionary = dictionary
self.padding_idx = 0
self.hidden_size = hparams['hidden_size']
self.out_dims = out_dims
if out_dims is None:
self.out_dims = hparams['audio_num_mel_bins']
self.mel_out = Linear(self.hidden_size, self.out_dims, bias=True)
predictor_hidden = hparams['predictor_hidden'] if hparams['predictor_hidden'] > 0 else self.hidden_size
if hparams['use_pitch_embed']:
self.pitch_embed = Embedding(300, self.hidden_size, self.padding_idx)
self.pitch_predictor = PitchPredictor(
self.hidden_size,
n_chans=predictor_hidden,
n_layers=hparams['predictor_layers'],
dropout_rate=hparams['predictor_dropout'],
odim=2 if hparams['pitch_type'] == 'frame' else 1,
padding=hparams['ffn_padding'], kernel_size=hparams['predictor_kernel'])
if hparams['use_energy_embed']:
self.energy_embed = Embedding(256, self.hidden_size, self.padding_idx)
if hparams['use_spk_id']:
self.spk_embed_proj = Embedding(hparams['num_spk'], self.hidden_size)
elif hparams['use_spk_embed']:
self.spk_embed_proj = Linear(256, self.hidden_size, bias=True)
def forward(self, hubert, mel2ph=None, spk_embed_id=None, f0=None, energy=None):
ret = {}
encoder_out = hubert
var_embed = 0
# encoder_out_dur denotes the encoder outputs fed to the duration predictor
# in speech adaptation, the duration predictor uses the old speaker embedding
if hparams['use_spk_id']:
spk_embed_0 = self.spk_embed_proj(spk_embed_id.to(hubert.device))[:, None, :]
spk_embed_1 = self.spk_embed_proj(torch.LongTensor([0]).to(hubert.device))[:, None, :]
spk_embed_2 = self.spk_embed_proj(torch.LongTensor([0]).to(hubert.device))[:, None, :]
spk_embed = 1 * spk_embed_0 + 0 * spk_embed_1 + 0 * spk_embed_2
spk_embed_f0 = spk_embed
else:
spk_embed_f0 = spk_embed = 0
ret['mel2ph'] = mel2ph
decoder_inp = F.pad(encoder_out, [0, 0, 1, 0])
mel2ph_ = mel2ph[..., None].repeat([1, 1, encoder_out.shape[-1]])
decoder_inp_origin = decoder_inp = torch.gather(decoder_inp, 1, mel2ph_) # [B, T, H]
tgt_nonpadding = (mel2ph > 0).float()[:, :, None]
# add pitch and energy embed
pitch_inp = (decoder_inp_origin + var_embed + spk_embed_f0) * tgt_nonpadding
if hparams['use_pitch_embed']:
decoder_inp = decoder_inp + self.add_pitch(pitch_inp, f0, mel2ph, ret)
if hparams['use_energy_embed']:
decoder_inp = decoder_inp + self.add_energy(pitch_inp, energy, ret)
ret['decoder_inp'] = (decoder_inp + spk_embed) * tgt_nonpadding
return ret
def add_pitch(self, decoder_inp, f0, mel2ph, ret):
decoder_inp = decoder_inp.detach() + hparams['predictor_grad'] * (decoder_inp - decoder_inp.detach())
pitch_padding = (mel2ph == 0)
ret['f0_denorm'] = f0_denorm = denorm_f0(f0, False, hparams, pitch_padding=pitch_padding)
if pitch_padding is not None:
f0[pitch_padding] = 0
pitch = f0_to_coarse(f0_denorm, hparams) # start from 0
ret['pitch_pred'] = pitch.unsqueeze(-1)
pitch_embedding = self.pitch_embed(pitch)
return pitch_embedding
def add_energy(self, decoder_inp, energy, ret):
decoder_inp = decoder_inp.detach() + hparams['predictor_grad'] * (decoder_inp - decoder_inp.detach())
ret['energy_pred'] = energy # energy_pred = self.energy_predictor(decoder_inp)[:, :, 0]
energy = torch.clamp(energy * 256 // 4, max=255).long() # energy_to_coarse
energy_embedding = self.energy_embed(energy)
return energy_embedding
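A shape sketch for PitchPredictor (assuming the class above is in scope; its positional embedding relies on utils.make_positions from this repo): a [B, T, H] input yields a [B, T, odim] output.

import torch

pp = PitchPredictor(idim=256, n_layers=2, n_chans=128, odim=2, kernel_size=5)
xs = torch.randn(2, 50, 256)  # [B, T, H]
print(pp(xs).shape)           # torch.Size([2, 50, 2])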

View File

@ -0,0 +1,40 @@
import librosa
import torch
import torch.nn as nn
def load_cn_model(ch_hubert_path):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
from fairseq import checkpoint_utils
models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task(
[ch_hubert_path],
suffix="",
)
model = models[0]
model = model.to(device)
model.eval()
return model
def get_cn_hubert_units(con_model, audio_path, dev):
audio, sampling_rate = librosa.load(audio_path)
if len(audio.shape) > 1:
audio = librosa.to_mono(audio.transpose(1, 0))
if sampling_rate != 16000:
audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
feats = torch.from_numpy(audio).float()
if feats.dim() == 2: # double channels
feats = feats.mean(-1)
assert feats.dim() == 1, feats.dim()
feats = feats.view(1, -1)
padding_mask = torch.BoolTensor(feats.shape).fill_(False)
inputs = {
"source": feats.to(dev),
"padding_mask": padding_mask.to(dev),
"output_layer": 9, # layer 9
}
with torch.no_grad():
logits = con_model.extract_features(**inputs)
feats = con_model.final_proj(logits[0])
return feats
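For reference, the two helpers above are typically chained as below; the checkpoint path is the one referenced by preprocessing/hubertinfer.py and must exist locally, and the wav file name is a placeholder:
import torch
from modules.hubert.cn_hubert import load_cn_model, get_cn_hubert_units

ckpt = "checkpoints/cn_hubert/chinese-hubert-base-fairseq-ckpt.pt"  # path used elsewhere in this repo
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = load_cn_model(ckpt)
units = get_cn_hubert_units(model, "example.wav", device)  # frame-level units, [1, T, 256] per hubertinfer.py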

View File

@ -0,0 +1,243 @@
import copy
import random
from typing import Optional, Tuple
import librosa
import torch
import torch.nn as nn
import torch.nn.functional as t_func
from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present
class Hubert(nn.Module):
def __init__(self, num_label_embeddings: int = 100, mask: bool = True):
super().__init__()
self._mask = mask
self.feature_extractor = FeatureExtractor()
self.feature_projection = FeatureProjection()
self.positional_embedding = PositionalConvEmbedding()
self.norm = nn.LayerNorm(768)
self.dropout = nn.Dropout(0.1)
self.encoder = TransformerEncoder(
nn.TransformerEncoderLayer(
768, 12, 3072, activation="gelu", batch_first=True
),
12,
)
self.proj = nn.Linear(768, 256)
self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_())
self.label_embedding = nn.Embedding(num_label_embeddings, 256)
def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
mask = None
if self.training and self._mask:
mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2)
x[mask] = self.masked_spec_embed.to(x.dtype)
return x, mask
def encode(
self, x: torch.Tensor, layer: Optional[int] = None
) -> Tuple[torch.Tensor, torch.Tensor]:
x = self.feature_extractor(x)
x = self.feature_projection(x.transpose(1, 2))
x, mask = self.mask(x)
x = x + self.positional_embedding(x)
x = self.dropout(self.norm(x))
x = self.encoder(x, output_layer=layer)
return x, mask
def logits(self, x: torch.Tensor) -> torch.Tensor:
logits = torch.cosine_similarity(
x.unsqueeze(2),
self.label_embedding.weight.unsqueeze(0).unsqueeze(0),
dim=-1,
)
return logits / 0.1
def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
x, mask = self.encode(x)
x = self.proj(x)
logits = self.logits(x)
return logits, mask
class HubertSoft(Hubert):
def __init__(self):
super().__init__()
# @torch.inference_mode()
def units(self, wav: torch.Tensor) -> torch.Tensor:
wav = torch.nn.functional.pad(wav, ((400 - 320) // 2, (400 - 320) // 2))
x, _ = self.encode(wav)
return self.proj(x)
def forward(self, wav: torch.Tensor):
return self.units(wav)
class FeatureExtractor(nn.Module):
def __init__(self):
super().__init__()
self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False)
self.norm0 = nn.GroupNorm(512, 512)
self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False)
self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False)
self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False)
self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False)
self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False)
self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = t_func.gelu(self.norm0(self.conv0(x)))
x = t_func.gelu(self.conv1(x))
x = t_func.gelu(self.conv2(x))
x = t_func.gelu(self.conv3(x))
x = t_func.gelu(self.conv4(x))
x = t_func.gelu(self.conv5(x))
x = t_func.gelu(self.conv6(x))
return x
class FeatureProjection(nn.Module):
def __init__(self):
super().__init__()
self.norm = nn.LayerNorm(512)
self.projection = nn.Linear(512, 768)
self.dropout = nn.Dropout(0.1)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.norm(x)
x = self.projection(x)
x = self.dropout(x)
return x
class PositionalConvEmbedding(nn.Module):
def __init__(self):
super().__init__()
self.conv = nn.Conv1d(
768,
768,
kernel_size=128,
padding=128 // 2,
groups=16,
)
self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.conv(x.transpose(1, 2))
x = t_func.gelu(x[:, :, :-1])
return x.transpose(1, 2)
class TransformerEncoder(nn.Module):
def __init__(
self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int
) -> None:
super(TransformerEncoder, self).__init__()
self.layers = nn.ModuleList(
[copy.deepcopy(encoder_layer) for _ in range(num_layers)]
)
self.num_layers = num_layers
def forward(
self,
src: torch.Tensor,
mask: torch.Tensor = None,
src_key_padding_mask: torch.Tensor = None,
output_layer: Optional[int] = None,
) -> torch.Tensor:
output = src
for layer in self.layers[:output_layer]:
output = layer(
output, src_mask=mask, src_key_padding_mask=src_key_padding_mask
)
return output
def _compute_mask(
shape: Tuple[int, int],
mask_prob: float,
mask_length: int,
device: torch.device,
min_masks: int = 0,
) -> torch.Tensor:
batch_size, sequence_length = shape
if mask_length < 1:
raise ValueError("`mask_length` has to be bigger than 0.")
if mask_length > sequence_length:
raise ValueError(
f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`"
)
# compute number of masked spans in batch
num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random())
num_masked_spans = max(num_masked_spans, min_masks)
# make sure num masked indices <= sequence_length
if num_masked_spans * mask_length > sequence_length:
num_masked_spans = sequence_length // mask_length
# SpecAugment mask to fill
mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool)
# uniform distribution to sample from, make sure that offset samples are < sequence_length
uniform_dist = torch.ones(
(batch_size, sequence_length - (mask_length - 1)), device=device
)
# get random indices to mask
mask_indices = torch.multinomial(uniform_dist, num_masked_spans)
# expand masked indices to masked spans
mask_indices = (
mask_indices.unsqueeze(dim=-1)
.expand((batch_size, num_masked_spans, mask_length))
.reshape(batch_size, num_masked_spans * mask_length)
)
offsets = (
torch.arange(mask_length, device=device)[None, None, :]
.expand((batch_size, num_masked_spans, mask_length))
.reshape(batch_size, num_masked_spans * mask_length)
)
mask_idxs = mask_indices + offsets
# scatter indices to mask
mask = mask.scatter(1, mask_idxs, True)
return mask
def hubert_soft(
path: str
) -> HubertSoft:
r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`.
Args:
path (str): path of a pretrained model
"""
dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
hubert = HubertSoft()
checkpoint = torch.load(path)
consume_prefix_in_state_dict_if_present(checkpoint, "module.")
hubert.load_state_dict(checkpoint)
hubert.eval().to(dev)
return hubert
def get_units(hbt_soft, raw_wav_path, dev=torch.device('cuda')):
wav, sr = librosa.load(raw_wav_path, sr=None)
assert (sr >= 16000)
if len(wav.shape) > 1:
wav = librosa.to_mono(wav)
if sr != 16000:
wav16 = librosa.resample(wav, sr, 16000)
else:
wav16 = wav
dev = torch.device("cuda" if (dev == torch.device('cuda') and torch.cuda.is_available()) else "cpu")
torch.cuda.is_available() and torch.cuda.empty_cache()
with torch.inference_mode():
units = hbt_soft.units(torch.FloatTensor(wav16.astype(float)).unsqueeze(0).unsqueeze(0).to(dev))
return units
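A short usage sketch for the soft-HuBERT path defined above; the checkpoint path matches the default in preprocessing/hubertinfer.py and the wav file is a placeholder:
import torch
from modules.hubert.hubert_model import hubert_soft, get_units

hbt = hubert_soft("checkpoints/hubert/hubert_soft.pt")      # loads HubertSoft and moves it to cuda/cpu
dev = torch.device("cuda" if torch.cuda.is_available() else "cpu")
units = get_units(hbt, "example.wav", dev=dev)              # soft units, [1, T, 256] (proj is Linear(768, 256))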

View File

@ -0,0 +1,19 @@
import time
import torch
import torchaudio
def get_onnx_units(hbt_soft, raw_wav_path):
source, sr = torchaudio.load(raw_wav_path)
source = torchaudio.functional.resample(source, sr, 16000)
if len(source.shape) == 2 and source.shape[0] >= 2:  # torchaudio.load returns [channels, samples]; downmix multi-channel audio
source = torch.mean(source, dim=0).unsqueeze(0)
source = source.unsqueeze(0)
# run inference with ONNX Runtime
start = time.time()
units = hbt_soft.run(output_names=["units"],
input_feed={"wav": source.numpy()})[0]
use_time = time.time() - start
print("hubert_onnx_session.run time:{}".format(use_time))
return units
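Here hbt_soft is an onnxruntime InferenceSession rather than a torch module; preprocessing/hubertinfer.py constructs it as sketched below (model path as used there, wav file is a placeholder):
import onnxruntime as ort
from modules.hubert.hubert_onnx import get_onnx_units

session = ort.InferenceSession("onnx/hubert_soft.onnx",
providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
units = get_onnx_units(session, "example.wav")  # numpy array of soft units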

View File

@ -0,0 +1,18 @@
import os
import shutil
class AttrDict(dict):
def __init__(self, *args, **kwargs):
super(AttrDict, self).__init__(*args, **kwargs)
self.__dict__ = self
def __getattr__(self, item):
return self[item]
def build_env(config, config_name, path):
t_path = os.path.join(path, config_name)
if config != t_path:
os.makedirs(path, exist_ok=True)
shutil.copyfile(config, os.path.join(path, config_name))
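AttrDict simply exposes dictionary keys as attributes, which is how the vocoder's config.json is read as h.sampling_rate, h.num_mels, and so on. A tiny sketch; the import path is inferred from the relative import in models.py and the key/value pairs are illustrative:
from modules.nsf_hifigan.env import AttrDict

h = AttrDict({"sampling_rate": 44100, "num_mels": 128})
print(h.sampling_rate, h["num_mels"])  # 44100 128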

View File

@ -0,0 +1,437 @@
import json
import os
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
from .env import AttrDict
from .utils import init_weights, get_padding
LRELU_SLOPE = 0.1
def load_model(model_path, device='cuda'):
config_file = os.path.join(os.path.split(model_path)[0], 'config.json')
with open(config_file) as f:
data = f.read()
json_config = json.loads(data)
h = AttrDict(json_config)
generator = Generator(h).to(device)
cp_dict = torch.load(model_path, map_location=device)
generator.load_state_dict(cp_dict['generator'])
generator.eval()
generator.remove_weight_norm()
del cp_dict
return generator, h
class ResBlock1(torch.nn.Module):
def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)):
super(ResBlock1, self).__init__()
self.h = h
self.convs1 = nn.ModuleList([
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
padding=get_padding(kernel_size, dilation[0]))),
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
padding=get_padding(kernel_size, dilation[1]))),
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
padding=get_padding(kernel_size, dilation[2])))
])
self.convs1.apply(init_weights)
self.convs2 = nn.ModuleList([
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
padding=get_padding(kernel_size, 1))),
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
padding=get_padding(kernel_size, 1))),
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
padding=get_padding(kernel_size, 1)))
])
self.convs2.apply(init_weights)
def forward(self, x):
for c1, c2 in zip(self.convs1, self.convs2):
xt = F.leaky_relu(x, LRELU_SLOPE)
xt = c1(xt)
xt = F.leaky_relu(xt, LRELU_SLOPE)
xt = c2(xt)
x = xt + x
return x
def remove_weight_norm(self):
for l in self.convs1:
remove_weight_norm(l)
for l in self.convs2:
remove_weight_norm(l)
class ResBlock2(torch.nn.Module):
def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)):
super(ResBlock2, self).__init__()
self.h = h
self.convs = nn.ModuleList([
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
padding=get_padding(kernel_size, dilation[0]))),
weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
padding=get_padding(kernel_size, dilation[1])))
])
self.convs.apply(init_weights)
def forward(self, x):
for c in self.convs:
xt = F.leaky_relu(x, LRELU_SLOPE)
xt = c(xt)
x = xt + x
return x
def remove_weight_norm(self):
for l in self.convs:
remove_weight_norm(l)
class SineGen(torch.nn.Module):
""" Definition of sine generator
SineGen(samp_rate, harmonic_num = 0,
sine_amp = 0.1, noise_std = 0.003,
voiced_threshold = 0,
flag_for_pulse=False)
samp_rate: sampling rate in Hz
harmonic_num: number of harmonic overtones (default 0)
sine_amp: amplitude of the sine waveform (default 0.1)
noise_std: std of Gaussian noise (default 0.003)
voiced_threshold: F0 threshold for U/V classification (default 0)
flag_for_pulse: whether this SineGen is used inside PulseGen (default False)
Note: when flag_for_pulse is True, the first time step of a voiced
segment is always sin(np.pi) or cos(0)
"""
def __init__(self, samp_rate, harmonic_num=0,
sine_amp=0.1, noise_std=0.003,
voiced_threshold=0):
super(SineGen, self).__init__()
self.sine_amp = sine_amp
self.noise_std = noise_std
self.harmonic_num = harmonic_num
self.dim = self.harmonic_num + 1
self.sampling_rate = samp_rate
self.voiced_threshold = voiced_threshold
def _f02uv(self, f0):
# generate uv signal
uv = torch.ones_like(f0)
uv = uv * (f0 > self.voiced_threshold)
return uv
@torch.no_grad()
def forward(self, f0, upp):
""" sine_tensor, uv = forward(f0)
input F0: tensor(batchsize=1, length, dim=1)
f0 for unvoiced steps should be 0
output sine_tensor: tensor(batchsize=1, length, dim)
output uv: tensor(batchsize=1, length, 1)
"""
f0 = f0.unsqueeze(-1)
fn = torch.multiply(f0, torch.arange(1, self.dim + 1, device=f0.device).reshape((1, 1, -1)))
rad_values = (fn / self.sampling_rate) % 1  # taking % 1 here means the products over harmonics cannot be optimized away in post-processing
rand_ini = torch.rand(fn.shape[0], fn.shape[2], device=fn.device)
rand_ini[:, 0] = 0
rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
is_half = rad_values.dtype is not torch.float32
tmp_over_one = torch.cumsum(rad_values.double(), 1)  # applying % 1 here would prevent the following cumsum from being optimized
if is_half:
tmp_over_one = tmp_over_one.half()
else:
tmp_over_one = tmp_over_one.float()
tmp_over_one *= upp
tmp_over_one = F.interpolate(
tmp_over_one.transpose(2, 1), scale_factor=upp,
mode='linear', align_corners=True
).transpose(2, 1)
rad_values = F.interpolate(rad_values.transpose(2, 1), scale_factor=upp, mode='nearest').transpose(2, 1)
tmp_over_one %= 1
tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
cumsum_shift = torch.zeros_like(rad_values)
cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
rad_values = rad_values.double()
cumsum_shift = cumsum_shift.double()
sine_waves = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi)
if is_half:
sine_waves = sine_waves.half()
else:
sine_waves = sine_waves.float()
sine_waves = sine_waves * self.sine_amp
uv = self._f02uv(f0)
uv = F.interpolate(uv.transpose(2, 1), scale_factor=upp, mode='nearest').transpose(2, 1)
noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
noise = noise_amp * torch.randn_like(sine_waves)
sine_waves = sine_waves * uv + noise
return sine_waves, uv, noise
class SourceModuleHnNSF(torch.nn.Module):
""" SourceModule for hn-nsf
SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
add_noise_std=0.003, voiced_threshod=0)
sampling_rate: sampling_rate in Hz
harmonic_num: number of harmonic above F0 (default: 0)
sine_amp: amplitude of sine source signal (default: 0.1)
add_noise_std: std of additive Gaussian noise (default: 0.003)
note that amplitude of noise in unvoiced is decided
by sine_amp
voiced_threshold: threshold to set U/V given F0 (default: 0)
Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
F0_sampled (batchsize, length, 1)
Sine_source (batchsize, length, 1)
noise_source (batchsize, length, 1)
uv (batchsize, length, 1)
"""
def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1,
add_noise_std=0.003, voiced_threshod=0):
super(SourceModuleHnNSF, self).__init__()
self.sine_amp = sine_amp
self.noise_std = add_noise_std
# to produce sine waveforms
self.l_sin_gen = SineGen(sampling_rate, harmonic_num,
sine_amp, add_noise_std, voiced_threshod)
# to merge source harmonics into a single excitation
self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
self.l_tanh = torch.nn.Tanh()
def forward(self, x, upp):
sine_wavs, uv, _ = self.l_sin_gen(x, upp)
sine_merge = self.l_tanh(self.l_linear(sine_wavs))
return sine_merge
class Generator(torch.nn.Module):
def __init__(self, h):
super(Generator, self).__init__()
self.h = h
self.num_kernels = len(h.resblock_kernel_sizes)
self.num_upsamples = len(h.upsample_rates)
self.m_source = SourceModuleHnNSF(
sampling_rate=h.sampling_rate,
harmonic_num=8
)
self.noise_convs = nn.ModuleList()
self.conv_pre = weight_norm(Conv1d(h.num_mels, h.upsample_initial_channel, 7, 1, padding=3))
resblock = ResBlock1 if h.resblock == '1' else ResBlock2
self.ups = nn.ModuleList()
for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)):
c_cur = h.upsample_initial_channel // (2 ** (i + 1))
self.ups.append(weight_norm(
ConvTranspose1d(h.upsample_initial_channel // (2 ** i), h.upsample_initial_channel // (2 ** (i + 1)),
k, u, padding=(k - u) // 2)))
if i + 1 < len(h.upsample_rates): #
stride_f0 = int(np.prod(h.upsample_rates[i + 1:]))
self.noise_convs.append(Conv1d(
1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2))
else:
self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
self.resblocks = nn.ModuleList()
ch = h.upsample_initial_channel
for i in range(len(self.ups)):
ch //= 2
for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)):
self.resblocks.append(resblock(h, ch, k, d))
self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3))
self.ups.apply(init_weights)
self.conv_post.apply(init_weights)
self.upp = int(np.prod(h.upsample_rates))
def forward(self, x, f0):
har_source = self.m_source(f0, self.upp).transpose(1, 2)
x = self.conv_pre(x)
for i in range(self.num_upsamples):
x = F.leaky_relu(x, LRELU_SLOPE)
x = self.ups[i](x)
x_source = self.noise_convs[i](har_source)
x = x + x_source
xs = None
for j in range(self.num_kernels):
if xs is None:
xs = self.resblocks[i * self.num_kernels + j](x)
else:
xs += self.resblocks[i * self.num_kernels + j](x)
x = xs / self.num_kernels
x = F.leaky_relu(x)
x = self.conv_post(x)
x = torch.tanh(x)
return x
def remove_weight_norm(self):
print('Removing weight norm...')
for l in self.ups:
remove_weight_norm(l)
for l in self.resblocks:
l.remove_weight_norm()
remove_weight_norm(self.conv_pre)
remove_weight_norm(self.conv_post)
class DiscriminatorP(torch.nn.Module):
def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
super(DiscriminatorP, self).__init__()
self.period = period
norm_f = weight_norm if use_spectral_norm == False else spectral_norm
self.convs = nn.ModuleList([
norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))),
norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))),
])
self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
def forward(self, x):
fmap = []
# 1d to 2d
b, c, t = x.shape
if t % self.period != 0: # pad first
n_pad = self.period - (t % self.period)
x = F.pad(x, (0, n_pad), "reflect")
t = t + n_pad
x = x.view(b, c, t // self.period, self.period)
for l in self.convs:
x = l(x)
x = F.leaky_relu(x, LRELU_SLOPE)
fmap.append(x)
x = self.conv_post(x)
fmap.append(x)
x = torch.flatten(x, 1, -1)
return x, fmap
class MultiPeriodDiscriminator(torch.nn.Module):
def __init__(self, periods=None):
super(MultiPeriodDiscriminator, self).__init__()
self.periods = periods if periods is not None else [2, 3, 5, 7, 11]
self.discriminators = nn.ModuleList()
for period in self.periods:
self.discriminators.append(DiscriminatorP(period))
def forward(self, y, y_hat):
y_d_rs = []
y_d_gs = []
fmap_rs = []
fmap_gs = []
for i, d in enumerate(self.discriminators):
y_d_r, fmap_r = d(y)
y_d_g, fmap_g = d(y_hat)
y_d_rs.append(y_d_r)
fmap_rs.append(fmap_r)
y_d_gs.append(y_d_g)
fmap_gs.append(fmap_g)
return y_d_rs, y_d_gs, fmap_rs, fmap_gs
class DiscriminatorS(torch.nn.Module):
def __init__(self, use_spectral_norm=False):
super(DiscriminatorS, self).__init__()
norm_f = weight_norm if use_spectral_norm == False else spectral_norm
self.convs = nn.ModuleList([
norm_f(Conv1d(1, 128, 15, 1, padding=7)),
norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)),
norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)),
norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)),
norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)),
norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)),
norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
])
self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
def forward(self, x):
fmap = []
for l in self.convs:
x = l(x)
x = F.leaky_relu(x, LRELU_SLOPE)
fmap.append(x)
x = self.conv_post(x)
fmap.append(x)
x = torch.flatten(x, 1, -1)
return x, fmap
class MultiScaleDiscriminator(torch.nn.Module):
def __init__(self):
super(MultiScaleDiscriminator, self).__init__()
self.discriminators = nn.ModuleList([
DiscriminatorS(use_spectral_norm=True),
DiscriminatorS(),
DiscriminatorS(),
])
self.meanpools = nn.ModuleList([
AvgPool1d(4, 2, padding=2),
AvgPool1d(4, 2, padding=2)
])
def forward(self, y, y_hat):
y_d_rs = []
y_d_gs = []
fmap_rs = []
fmap_gs = []
for i, d in enumerate(self.discriminators):
if i != 0:
y = self.meanpools[i - 1](y)
y_hat = self.meanpools[i - 1](y_hat)
y_d_r, fmap_r = d(y)
y_d_g, fmap_g = d(y_hat)
y_d_rs.append(y_d_r)
fmap_rs.append(fmap_r)
y_d_gs.append(y_d_g)
fmap_gs.append(fmap_g)
return y_d_rs, y_d_gs, fmap_rs, fmap_gs
def feature_loss(fmap_r, fmap_g):
loss = 0
for dr, dg in zip(fmap_r, fmap_g):
for rl, gl in zip(dr, dg):
loss += torch.mean(torch.abs(rl - gl))
return loss * 2
def discriminator_loss(disc_real_outputs, disc_generated_outputs):
loss = 0
r_losses = []
g_losses = []
for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
r_loss = torch.mean((1 - dr) ** 2)
g_loss = torch.mean(dg ** 2)
loss += (r_loss + g_loss)
r_losses.append(r_loss.item())
g_losses.append(g_loss.item())
return loss, r_losses, g_losses
def generator_loss(disc_outputs):
loss = 0
gen_losses = []
for dg in disc_outputs:
l = torch.mean((1 - dg) ** 2)
gen_losses.append(l)
loss += l
return loss, gen_losses
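load_model above is the entry point used by the vocoder wrapper: it reads config.json next to the checkpoint, builds the Generator, loads the weights, strips weight norm and returns (generator, h). A hedged usage sketch; the checkpoint path is a placeholder, mel is expected as a natural-log mel spectrogram of shape [B, num_mels, T], and f0 is one value in Hz per mel frame:
import torch
from modules.nsf_hifigan.models import load_model

device = 'cuda' if torch.cuda.is_available() else 'cpu'
generator, h = load_model("checkpoints/nsf_hifigan/model", device=device)  # placeholder checkpoint path
mel = torch.randn(1, h.num_mels, 100, device=device)   # ln-mel, [B, num_mels, T]
f0 = torch.full((1, 100), 220.0, device=device)        # f0 in Hz, one value per mel frame
with torch.no_grad():
    wav = generator(mel, f0).view(-1)                  # waveform of length T * prod(h.upsample_rates)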

View File

@ -0,0 +1,140 @@
import os
import librosa
import numpy as np
import soundfile as sf
import torch
import torch.nn.functional as F
import torch.utils.data
from librosa.filters import mel as librosa_mel_fn
os.environ["LRU_CACHE_CAPACITY"] = "3"
def load_wav_to_torch(full_path, target_sr=None, return_empty_on_exception=False):
sampling_rate = None
try:
data, sampling_rate = sf.read(full_path, always_2d=True)  # read with soundfile, always as a 2-D array
except Exception as ex:
print(f"'{full_path}' failed to load.\nException:")
print(ex)
if return_empty_on_exception:
return [], sampling_rate or target_sr or 48000
else:
raise Exception(ex)
if len(data.shape) > 1:
data = data[:, 0]
assert len(
data) > 2 # check duration of audio file is > 2 samples (because otherwise the slice operation was on the wrong dimension)
if np.issubdtype(data.dtype, np.integer): # if audio data is type int
max_mag = -np.iinfo(data.dtype).min # maximum magnitude = min possible value of intXX
else: # if audio data is type fp32
max_mag = max(np.amax(data), -np.amin(data))
max_mag = (2 ** 31) + 1 if max_mag > (2 ** 15) else ((
2 ** 15) + 1 if max_mag > 1.01 else 1.0) # data should be either 16-bit INT, 32-bit INT or [-1 to 1] float32
data = torch.FloatTensor(data.astype(np.float32)) / max_mag
if (torch.isinf(data) | torch.isnan(
data)).any() and return_empty_on_exception: # resample will crash with inf/NaN inputs. return_empty_on_exception will return empty arr instead of except
return [], sampling_rate or target_sr or 48000
if target_sr is not None and sampling_rate != target_sr:
data = torch.from_numpy(librosa.core.resample(data.numpy(), orig_sr=sampling_rate, target_sr=target_sr))
sampling_rate = target_sr
return data, sampling_rate
def dynamic_range_compression(x, C=1, clip_val=1e-5):
return np.log(np.clip(x, a_min=clip_val, a_max=None) * C)
def dynamic_range_decompression(x, C=1):
return np.exp(x) / C
def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
return torch.log(torch.clamp(x, min=clip_val) * C)
def dynamic_range_decompression_torch(x, C=1):
return torch.exp(x) / C
class STFT():
def __init__(self, sr=22050, n_mels=80, n_fft=1024, win_size=1024, hop_length=256, fmin=20, fmax=11025,
clip_val=1e-5):
self.target_sr = sr
self.n_mels = n_mels
self.n_fft = n_fft
self.win_size = win_size
self.hop_length = hop_length
self.fmin = fmin
self.fmax = fmax
self.clip_val = clip_val
self.mel_basis = {}
self.hann_window = {}
def get_mel(self, y, keyshift=0, speed=1, center=False):
sampling_rate = self.target_sr
n_mels = self.n_mels
n_fft = self.n_fft
win_size = self.win_size
hop_length = self.hop_length
fmin = self.fmin
fmax = self.fmax
clip_val = self.clip_val
factor = 2 ** (keyshift / 12)
n_fft_new = int(np.round(n_fft * factor))
win_size_new = int(np.round(win_size * factor))
hop_length_new = int(np.round(hop_length * speed))
if torch.min(y) < -1.:
print('min value is ', torch.min(y))
if torch.max(y) > 1.:
print('max value is ', torch.max(y))
mel_basis_key = str(fmax) + '_' + str(y.device)
if mel_basis_key not in self.mel_basis:
mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax)
self.mel_basis[mel_basis_key] = torch.from_numpy(mel).float().to(y.device)
keyshift_key = str(keyshift) + '_' + str(y.device)
if keyshift_key not in self.hann_window:
self.hann_window[keyshift_key] = torch.hann_window(win_size_new).to(y.device)
y = torch.nn.functional.pad(y.unsqueeze(1),
((win_size_new - hop_length_new) // 2, (win_size_new - hop_length_new + 1) // 2),
mode='reflect')
y = y.squeeze(1)
spec = torch.stft(y, n_fft_new, hop_length=hop_length_new, win_length=win_size_new,
window=self.hann_window[keyshift_key],
center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False)
# print(111,spec)
spec = torch.sqrt(spec.pow(2).sum(-1) + (1e-9))
if keyshift != 0:
size = n_fft // 2 + 1
resize = spec.size(1)
if resize < size:
spec = F.pad(spec, (0, 0, 0, size - resize))
spec = spec[:, :size, :] * win_size / win_size_new
# print(222,spec)
spec = torch.matmul(self.mel_basis[mel_basis_key], spec)
# print(333,spec)
spec = dynamic_range_compression_torch(spec, clip_val=clip_val)
# print(444,spec)
return spec
def __call__(self, audiopath):
audio, sr = load_wav_to_torch(audiopath, target_sr=self.target_sr)
spect = self.get_mel(audio.unsqueeze(0)).squeeze(0)
return spect
stft = STFT()
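The STFT class above is a self-contained wav-to-log-mel front end: load_wav_to_torch resamples to target_sr, and get_mel applies a padded STFT followed by the mel filterbank and dynamic-range compression (natural log). A small usage sketch; the parameter values below are illustrative, not the repo's configured ones:
from modules.nsf_hifigan.nvSTFT import STFT

stft_441 = STFT(sr=44100, n_mels=128, n_fft=2048, win_size=2048, hop_length=512, fmin=40, fmax=16000)
mel = stft_441("example.wav")  # loads and resamples the file, returns an ln-mel of shape [n_mels, frames]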

View File

@ -0,0 +1,69 @@
import glob
import os
import matplotlib
import matplotlib.pylab as plt
import torch
from torch.nn.utils import weight_norm
matplotlib.use("Agg")
def plot_spectrogram(spectrogram):
fig, ax = plt.subplots(figsize=(10, 2))
im = ax.imshow(spectrogram, aspect="auto", origin="lower",
interpolation='none')
plt.colorbar(im, ax=ax)
fig.canvas.draw()
plt.close()
return fig
def init_weights(m, mean=0.0, std=0.01):
classname = m.__class__.__name__
if classname.find("Conv") != -1:
m.weight.data.normal_(mean, std)
def apply_weight_norm(m):
classname = m.__class__.__name__
if classname.find("Conv") != -1:
weight_norm(m)
def get_padding(kernel_size, dilation=1):
return int((kernel_size * dilation - dilation) / 2)
def load_checkpoint(filepath, device):
assert os.path.isfile(filepath)
print("Loading '{}'".format(filepath))
checkpoint_dict = torch.load(filepath, map_location=device)
print("Complete.")
return checkpoint_dict
def save_checkpoint(filepath, obj):
print("Saving checkpoint to {}".format(filepath))
torch.save(obj, filepath)
print("Complete.")
def del_old_checkpoints(cp_dir, prefix, n_models=2):
pattern = os.path.join(cp_dir, prefix + '????????')
cp_list = glob.glob(pattern) # get checkpoint paths
cp_list = sorted(cp_list) # sort by iter
if len(cp_list) > n_models: # if more than n_models models are found
for cp in cp_list[:-n_models]:  # delete the oldest models other than the latest n_models
open(cp, 'w').close() # empty file contents
os.unlink(cp) # delete file (move to trash when using Colab)
def scan_checkpoint(cp_dir, prefix):
pattern = os.path.join(cp_dir, prefix + '????????')
cp_list = glob.glob(pattern)
if len(cp_list) == 0:
return None
return sorted(cp_list)[-1]

View File

@ -0,0 +1 @@
from modules.vocoders import nsf_hifigan

View File

@ -0,0 +1,77 @@
import os
import torch
from modules.nsf_hifigan.models import load_model
from modules.nsf_hifigan.nvSTFT import load_wav_to_torch, STFT
from utils.hparams import hparams
nsf_hifigan = None
def register_vocoder(cls):
global nsf_hifigan
nsf_hifigan = cls
return cls
@register_vocoder
class NsfHifiGAN():
def __init__(self, device=None):
if device is None:
device = 'cuda' if torch.cuda.is_available() else 'cpu'
self.device = device
model_path = hparams['vocoder_ckpt']
if os.path.exists(model_path):
print('| Load HifiGAN: ', model_path)
self.model, self.h = load_model(model_path, device=self.device)
else:
print('Error: HifiGAN model file is not found!')
def spec2wav(self, mel, **kwargs):
if self.h.sampling_rate != hparams['audio_sample_rate']:
print('Mismatch parameters: hparams[\'audio_sample_rate\']=', hparams['audio_sample_rate'], '!=',
self.h.sampling_rate, '(vocoder)')
if self.h.num_mels != hparams['audio_num_mel_bins']:
print('Mismatch parameters: hparams[\'audio_num_mel_bins\']=', hparams['audio_num_mel_bins'], '!=',
self.h.num_mels, '(vocoder)')
if self.h.n_fft != hparams['fft_size']:
print('Mismatch parameters: hparams[\'fft_size\']=', hparams['fft_size'], '!=', self.h.n_fft, '(vocoder)')
if self.h.win_size != hparams['win_size']:
print('Mismatch parameters: hparams[\'win_size\']=', hparams['win_size'], '!=', self.h.win_size,
'(vocoder)')
if self.h.hop_size != hparams['hop_size']:
print('Mismatch parameters: hparams[\'hop_size\']=', hparams['hop_size'], '!=', self.h.hop_size,
'(vocoder)')
if self.h.fmin != hparams['fmin']:
print('Mismatch parameters: hparams[\'fmin\']=', hparams['fmin'], '!=', self.h.fmin, '(vocoder)')
if self.h.fmax != hparams['fmax']:
print('Mismatch parameters: hparams[\'fmax\']=', hparams['fmax'], '!=', self.h.fmax, '(vocoder)')
with torch.no_grad():
c = torch.FloatTensor(mel).unsqueeze(0).transpose(2, 1).to(self.device)
# log10 to log mel
c = 2.30259 * c
f0 = kwargs.get('f0')
f0 = torch.FloatTensor(f0[None, :]).to(self.device)
y = self.model(c, f0).view(-1)
wav_out = y.cpu().numpy()
return wav_out
@staticmethod
def wav2spec(inp_path, device=None):
if device is None:
device = 'cuda' if torch.cuda.is_available() else 'cpu'
sampling_rate = hparams['audio_sample_rate']
num_mels = hparams['audio_num_mel_bins']
n_fft = hparams['fft_size']
win_size = hparams['win_size']
hop_size = hparams['hop_size']
fmin = hparams['fmin']
fmax = hparams['fmax']
stft = STFT(sampling_rate, num_mels, n_fft, win_size, hop_size, fmin, fmax)
with torch.no_grad():
wav_torch, _ = load_wav_to_torch(inp_path, target_sr=stft.target_sr)
mel_torch = stft.get_mel(wav_torch.unsqueeze(0).to(device)).squeeze(0).T
# log mel to log10 mel
mel_torch = 0.434294 * mel_torch
return wav_torch.cpu().numpy(), mel_torch.cpu().numpy()
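Note the scaling on both sides: wav2spec returns a log10 mel (0.434294 * ln-mel), while spec2wav multiplies by 2.30259 to go back to natural log before calling the generator, so the two methods form a consistent round trip. A hedged sketch, assuming hparams has already been populated from a config via set_hparams() and using a constant placeholder f0 curve:
import numpy as np
from modules.vocoders.nsf_hifigan import NsfHifiGAN
from utils.hparams import set_hparams

set_hparams()  # the constructor and wav2spec both read hparams (vocoder_ckpt, sample rate, mel bins, ...)
vocoder = NsfHifiGAN()
wav, mel = vocoder.wav2spec("example.wav")            # mel: [T, num_mels], log10 scale
f0 = np.full(mel.shape[0], 220.0, dtype=np.float32)   # placeholder f0, one value per mel frame
wav_out = vocoder.spec2wav(mel, f0=f0)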

155
pre_check.py Normal file
View File

@ -0,0 +1,155 @@
import os
import re
import yaml
solutions = {'yaml': 'The yaml path is incorrect; this may corrupt the yaml or cause preprocessing to fail.\r\n',
'hubert': 'hubert is missing. Download hubert_torch.zip from the group files and extract it into the checkpoints folder.\r\n',
'raw_data_dir': 'The dataset directory does not match the yaml. Check that the dataset directory matches the "raw_data_dir:" field in the yaml.\r\n',
'vocoder': "The vocoder matching the yaml does not exist. For the 24k vocoder, download basics.zip from the group files and extract it into the checkpoints folder.\r\nFor the 44.1k vocoder, download it from the GitHub release page https://github.com/openvpi/vocoders and extract it into checkpoints.\r\nIf it is already downloaded and extracted, verify that the vocoder file name in the 'vocoder_ckpt:' field of the yaml matches the vocoder file inside the vocoder folder under checkpoints.\r\n",
'torch': 'The torch trio (torch/torchvision/torchaudio) is not installed or is broken; see the Yuque notes on installing torch.\r\nCopy a link into your browser to open the page directly:\r\nInstall torch with the generic command: https://www.yuque.com/jiuwei-nui3d/qng6eg/sc8ivoge8vww4lu6#9mQgt\r\nManual torch install on Windows: https://www.yuque.com/jiuwei-nui3d/qng6eg/ea0ntd\r\n',
'urllib.parse': 'Failed to import urllib.parse; see common error 1 in the Yuque notes.\r\nCopy the link into your browser to open the page directly: https://www.yuque.com/jiuwei-nui3d/qng6eg/gdpi5orf3niv9mwb#SyTom\r\n',
'utils.hparams': 'Failed to import utils.hparams; see common error 3 in the Yuque notes.\r\nCopy the link into your browser to open the page directly: https://www.yuque.com/jiuwei-nui3d/qng6eg/abaxpwozc2h5yltt#MOddD\r\n',
'config_path': 'The config_path (i.e. the yaml path) has an incorrect format; it should look like training/xxxx.yaml\r\n'}
def get_end_file(dir_path, end):
file_lists = []
for root, dirs, files in os.walk(dir_path):
files = [f for f in files if f[0] != '.']
dirs[:] = [d for d in dirs if d[0] != '.']
for f_file in files:
if f_file.endswith(end):
file_lists.append(os.path.join(root, f_file).replace("\\", "/"))
return file_lists
def scan(path):
model_str = ""
path_lists = get_end_file(path, "yaml")
for i in range(0, len(path_lists)):
if re.search(u'[\u4e00-\u9fa5]', path_lists[i]):
print(f'{path_lists[i]}: path contains Chinese characters! Skipping this entry')
continue
model_str += f"{i}:{path_lists[i]}\r\n"
if (i + 1) % 5 == 0:
print(f"{model_str}")
model_str = ""
if len(path_lists) % 5 != 0:
print(model_str)
return path_lists
# helper that measures the size of a folder
def get_dir_size(path, size=0):
for root, dirs, files in os.walk(path):
for f in files:
size += os.path.getsize(os.path.join(root, f))
return size
# try_except of unclear purpose (checks that the environment imports cleanly)
def try_except():
print("Please wait about 10 seconds")
res_str = ""
try:
import torch
import torchvision
import torchaudio
print('torch loaded successfully')
cuda = f"cuda version: {torch.cuda_version}" if torch.cuda.is_available() else "CUDA is unavailable or the version does not match; please consult the relevant docs and install it yourself"
print(cuda)
except Exception as e:
res_str += solutions['torch']
try:
from urllib.parse import quote
print('urllib.parse imported successfully')
except Exception as e:
res_str += solutions['urllib.parse']
try:
from utils.hparams import set_hparams, hparams
print('utils.hparams imported successfully')
except Exception as e:
res_str += solutions['utils.hparams']
if res_str:
print("\r\n*=====================\r\n", "Errors and solutions:\r\n", res_str)
def test_part(test):
res_str = ""
print("\r\n*=====================")
for k, v in test.items():
if isinstance(v, list):
for i in v:
if os.path.exists(i):
print(f"{k}-{i}: 通过" + (
",绝对路径只能在当前平台运行,更换平台训练请使用相对路径" if os.path.isabs(i) else ""))
elif os.path.exists(v):
print(
f"{k}: 通过" + (",绝对路径只能在当前平台运行,更换平台训练请使用相对路径" if os.path.isabs(v) else ""))
else:
print(f"{k}: 不通过")
res_str += f"{k}:{solutions[k]}\r\n"
if res_str:
print("\r\n解决方法:\r\n", res_str)
else:
return True
if __name__ == '__main__':
print("选择:")
print("0.环境检测")
print("1.配置文件检测")
f = int(input("请输入选项:"))
if f == 0:
# 调用try函数
try_except()
elif f == 1:
path_list = scan("./configs")
a = input("请输入选项:")
project_path = path_list[int(a)]
with open(project_path, "r") as f:
data = yaml.safe_load(f)
with open("./configs/base.yaml", "r") as f:
base = yaml.safe_load(f)
test_model = {'yaml': data["config_path"], 'hubert': data["hubert_path"],
'raw_data_dir': data["raw_data_dir"], 'vocoder': base["vocoder_ckpt"],
'config_path': data["config_path"]}
try_except()
yaml_path = data["config_path"]
model_name = data["binary_data_dir"].split("/")[-1]
if test_part(test_model):
if get_dir_size(data["binary_data_dir"]) > 100 * 1024 ** 2:
print("\r\ntrain.data通过初步检测不排除数据集制作时的失误")
print("\r\n*====================="
"\r\n### 训练"
"\r\ncd进入diff-svc的目录下执行以下命令"
"\r\n*====================="
"\r\n# windows**使用cmd窗口**"
"\r\nset CUDA_VISIBLE_DEVICES=0"
f"\r\npython run.py --config {yaml_path} --exp_name {model_name} --reset"
"\r\n*====================="
"\r\n# linux"
f"\r\nCUDA_VISIBLE_DEVICES=0 python run.py --config {yaml_path} --exp_name {model_name} --reset"
"\r\n*=====================")
else:
print("\r\n未进行预处理或预处理错误请参考语雀教程https://www.yuque.com/jiuwei-nui3d/qng6eg")
print("\r\n*====================="
"\r\n### 数据预处理"
"\r\ncd进入diff-svc的目录下执行以下命令"
"\r\n*====================="
"\r\n# windows**使用cmd窗口**"
"\r\nset PYTHONPATH=."
"\r\nset CUDA_VISIBLE_DEVICES=0"
f"\r\npython preprocessing/svc_binarizer.py --config {yaml_path}"
"\r\n*====================="
"\r\n# linux"
"\r\nexport PYTHONPATH=."
f"\r\nCUDA_VISIBLE_DEVICES=0 python preprocessing/svc_binarizer.py --config {yaml_path}"
"\r\n*=====================")
print("预处理完请重新运行此脚本选项1届时提供训练命令")
else:
print("请依据以上提示解决问题后,重新运行此脚本")
exit()

20
pre_hubert.py Normal file
View File

@ -0,0 +1,20 @@
import os
from pathlib import Path
import numpy as np
from tqdm import tqdm
from infer_tools import infer_tool
from preprocessing.hubertinfer import HubertEncoder
# hubert_mode options: "soft_hubert", "cn_hubert"
hubert_model = HubertEncoder(hubert_mode='soft_hubert')
# automatically collects all wav files under the batch folder; change the path as needed
wav_paths = infer_tool.get_end_file("./batch", "wav")
with tqdm(total=len(wav_paths)) as p_bar:
p_bar.set_description('Processing')
for wav_path in wav_paths:
npy_path = Path(wav_path).with_suffix(".npy")
if not os.path.exists(npy_path):
np.save(str(npy_path), hubert_model.encode(wav_path))
p_bar.update(1)

View File

@ -0,0 +1,53 @@
import os.path
from io import BytesIO
from pathlib import Path
import numpy as np
import onnxruntime as ort
import torch
from modules.hubert.cn_hubert import load_cn_model, get_cn_hubert_units
from modules.hubert.hubert_model import hubert_soft, get_units
from modules.hubert.hubert_onnx import get_onnx_units
from utils.hparams import hparams
class HubertEncoder:
def __init__(self, pt_path='checkpoints/hubert/hubert_soft.pt', hubert_mode='', onnx=False):
self.hubert_mode = hubert_mode
self.onnx = onnx
if 'use_cn_hubert' not in hparams.keys():
hparams['use_cn_hubert'] = False
if hparams['use_cn_hubert'] or self.hubert_mode == 'cn_hubert':
pt_path = "checkpoints/cn_hubert/chinese-hubert-base-fairseq-ckpt.pt"
self.dev = torch.device("cuda")
self.hbt_model = load_cn_model(pt_path)
else:
if onnx:
self.hbt_model = ort.InferenceSession("onnx/hubert_soft.onnx",
providers=['CUDAExecutionProvider', 'CPUExecutionProvider', ])
else:
pt_path = list(Path(pt_path).parent.rglob('*.pt'))[0]
if 'hubert_gpu' in hparams.keys():
self.use_gpu = hparams['hubert_gpu']
else:
self.use_gpu = True
self.dev = torch.device("cuda" if self.use_gpu and torch.cuda.is_available() else "cpu")
self.hbt_model = hubert_soft(str(pt_path)).to(self.dev)
print(f"| load 'model' from '{pt_path}'")
def encode(self, wav_path):
if isinstance(wav_path, BytesIO):
npy_path = ""
wav_path.seek(0)
else:
npy_path = Path(wav_path).with_suffix('.npy')
if os.path.exists(npy_path):
units = np.load(str(npy_path))
elif self.onnx:
units = get_onnx_units(self.hbt_model, wav_path).squeeze(0)
elif hparams['use_cn_hubert'] or self.hubert_mode == 'cn_hubert':
units = get_cn_hubert_units(self.hbt_model, wav_path, self.dev).cpu().numpy()[0]
else:
units = get_units(self.hbt_model, wav_path, self.dev).cpu().numpy()[0]
return units # [T,256]

View File

@ -0,0 +1,247 @@
import hashlib
import json
import os
import time
import traceback
import warnings
from pathlib import Path
import numpy as np
import parselmouth
import resampy
import torch
import torchcrepe
import utils
from modules.vocoders.nsf_hifigan import nsf_hifigan
from utils.hparams import hparams
from utils.pitch_utils import f0_to_coarse
warnings.filterwarnings("ignore")
class BinarizationError(Exception):
pass
def get_md5(content):
return hashlib.new("md5", content).hexdigest()
def read_temp(file_name):
if not os.path.exists(file_name):
with open(file_name, "w") as f:
f.write(json.dumps({"info": "temp_dict"}))
return {}
else:
try:
with open(file_name, "r") as f:
data = f.read()
data_dict = json.loads(data)
if os.path.getsize(file_name) > 50 * 1024 * 1024:
f_name = file_name.split("/")[-1]
print(f"clean {f_name}")
for wav_hash in list(data_dict.keys()):
if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600:
del data_dict[wav_hash]
except Exception as e:
print(e)
print(f"{file_name} error,auto rebuild file")
data_dict = {"info": "temp_dict"}
return data_dict
def write_temp(file_name, data):
with open(file_name, "w") as f:
f.write(json.dumps(data))
f0_dict = read_temp("./infer_tools/f0_temp.json")
def get_pitch_parselmouth(wav_data, mel, hparams):
"""
:param wav_data: [T]
:param mel: [T, 80]
:param hparams:
:return:
"""
time_step = hparams['hop_size'] / hparams['audio_sample_rate']
f0_min = hparams['f0_min']
f0_max = hparams['f0_max']
f0 = parselmouth.Sound(wav_data, hparams['audio_sample_rate']).to_pitch_ac(
time_step=time_step, voicing_threshold=0.6,
pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency']
pad_size = (int(len(wav_data) // hparams['hop_size']) - len(f0) + 1) // 2
f0 = np.pad(f0, [[pad_size, len(mel) - len(f0) - pad_size]], mode='constant')
pitch_coarse = f0_to_coarse(f0, hparams)
return f0, pitch_coarse
def get_pitch_crepe(wav_data, mel, hparams, threshold=0.05):
# device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device = torch.device("cuda")
# crepe only supports a 16 kHz sample rate, so resample first
wav16k = resampy.resample(wav_data, hparams['audio_sample_rate'], 16000)
wav16k_torch = torch.FloatTensor(wav16k).unsqueeze(0).to(device)
# frequency range
f0_min = hparams['f0_min']
f0_max = hparams['f0_max']
# after resampling, analyse f0 with hop size 80, i.e. one frame every 5 ms
f0, pd = torchcrepe.predict(wav16k_torch, 16000, 80, f0_min, f0_max, pad=True, model='full', batch_size=1024,
device=device, return_periodicity=True)
# filter out silence and set the uv threshold; see the original repo's README
pd = torchcrepe.filter.median(pd, 3)
pd = torchcrepe.threshold.Silence(-60.)(pd, wav16k_torch, 16000, 80)
f0 = torchcrepe.threshold.At(threshold)(f0, pd)
f0 = torchcrepe.filter.mean(f0, 3)
# convert NaN frequencies (unvoiced parts) to 0 Hz
f0 = torch.where(torch.isnan(f0), torch.full_like(f0, 0), f0)
# drop zero frequencies and linearly interpolate over them
nzindex = torch.nonzero(f0[0]).squeeze()
f0 = torch.index_select(f0[0], dim=0, index=nzindex).cpu().numpy()
time_org = 0.005 * nzindex.cpu().numpy()
time_frame = np.arange(len(mel)) * hparams['hop_size'] / hparams['audio_sample_rate']
if f0.shape[0] == 0:
f0 = torch.FloatTensor(time_frame.shape[0]).fill_(0)
print('f0 all zero!')
else:
f0 = np.interp(time_frame, time_org, f0, left=f0[0], right=f0[-1])
pitch_coarse = f0_to_coarse(f0, hparams)
return f0, pitch_coarse
class File2Batch:
'''
pipeline: file -> temporary_dict -> processed_input -> batch
'''
@staticmethod
def file2temporary_dict(raw_data_dir, ds_id):
'''
read from file, store data in temporary dicts
'''
raw_data_dir = Path(raw_data_dir)
utterance_labels = []
utterance_labels.extend(list(raw_data_dir.rglob(f"*.wav")))
utterance_labels.extend(list(raw_data_dir.rglob(f"*.ogg")))
all_temp_dict = {}
for utterance_label in utterance_labels:
item_name = str(utterance_label)
temp_dict = {'wav_fn': str(utterance_label), 'spk_id': ds_id}
all_temp_dict[item_name] = temp_dict
return all_temp_dict
@staticmethod
def temporary_dict2processed_input(item_name, temp_dict, encoder, infer=False, **kwargs):
'''
process data in temporary_dicts
'''
def get_pitch(wav, mel):
# get ground truth f0 by self.get_pitch_algorithm
global f0_dict
use_crepe = hparams['use_crepe'] if not infer else kwargs['use_crepe']
if use_crepe:
md5 = get_md5(wav)
if infer and md5 in f0_dict.keys():
print("load temp crepe f0")
gt_f0 = np.array(f0_dict[md5]["f0"])
coarse_f0 = np.array(f0_dict[md5]["coarse"])
else:
torch.cuda.is_available() and torch.cuda.empty_cache()
gt_f0, coarse_f0 = get_pitch_crepe(wav, mel, hparams, threshold=0.05)
if infer:
f0_dict[md5] = {"f0": gt_f0.tolist(), "coarse": coarse_f0.tolist(), "time": int(time.time())}
write_temp("./infer_tools/f0_temp.json", f0_dict)
else:
gt_f0, coarse_f0 = get_pitch_parselmouth(wav, mel, hparams)
if sum(gt_f0) == 0:
raise BinarizationError("Empty **gt** f0")
processed_input['f0'] = gt_f0
processed_input['pitch'] = coarse_f0
def get_align(mel, phone_encoded):
mel2ph = np.zeros([mel.shape[0]], int)
start_frame = 0
ph_durs = mel.shape[0] / phone_encoded.shape[0]
for i_ph in range(phone_encoded.shape[0]):
end_frame = int(i_ph * ph_durs + ph_durs + 0.5)
mel2ph[start_frame:end_frame + 1] = i_ph + 1
start_frame = end_frame + 1
processed_input['mel2ph'] = mel2ph
wav, mel = nsf_hifigan.wav2spec(temp_dict['wav_fn'])
processed_input = {
'item_name': item_name, 'mel': mel,
'sec': len(wav) / hparams['audio_sample_rate'], 'len': mel.shape[0]
}
processed_input = {**temp_dict, **processed_input,
'spec_min': np.min(mel, axis=0),
'spec_max': np.max(mel, axis=0)} # merge two dicts
try:
get_pitch(wav, mel)
try:
hubert_encoded = processed_input['hubert'] = encoder.encode(temp_dict['wav_fn'])
except:
traceback.print_exc()
raise Exception(f"hubert encode error")
get_align(mel, hubert_encoded)
except Exception as e:
print(f"| Skip item ({e}). item_name: {item_name}, wav_fn: {temp_dict['wav_fn']}")
return None
if hparams['use_energy_embed']:
max_frames = hparams['max_frames']
spec = torch.Tensor(processed_input['mel'])[:max_frames]
processed_input['energy'] = (spec.exp() ** 2).sum(-1).sqrt()
return processed_input
@staticmethod
def processed_input2batch(samples):
'''
Args:
samples: one batch of processed_input
NOTE:
the batch size is controlled by hparams['max_sentences']
'''
if len(samples) == 0:
return {}
id = torch.LongTensor([s['id'] for s in samples])
item_names = [s['item_name'] for s in samples]
hubert = utils.collate_2d([s['hubert'] for s in samples], 0.0)
f0 = utils.collate_1d([s['f0'] for s in samples], 0.0)
pitch = utils.collate_1d([s['pitch'] for s in samples])
uv = utils.collate_1d([s['uv'] for s in samples])
mel2ph = utils.collate_1d([s['mel2ph'] for s in samples], 0.0) \
if samples[0]['mel2ph'] is not None else None
mels = utils.collate_2d([s['mel'] for s in samples], 0.0)
mel_lengths = torch.LongTensor([s['mel'].shape[0] for s in samples])
batch = {
'id': id,
'item_name': item_names,
'nsamples': len(samples),
'hubert': hubert,
'mels': mels,
'mel_lengths': mel_lengths,
'mel2ph': mel2ph,
'pitch': pitch,
'f0': f0,
'uv': uv,
}
if hparams['use_energy_embed']:
batch['energy'] = utils.collate_1d([s['energy'] for s in samples], 0.0)
if hparams['use_spk_id']:
spk_ids = torch.LongTensor([s['spk_id'] for s in samples])
batch['spk_ids'] = spk_ids
return batch
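Both pitch extractors above share one contract: given a waveform and its mel spectrogram, return a frame-aligned f0 array and its coarse bin indices. A minimal sketch of the parselmouth path, assuming hparams is already populated with the keys read above (audio_sample_rate, hop_size, f0_min, f0_max) and using a placeholder wav file:
from modules.vocoders.nsf_hifigan import NsfHifiGAN
from preprocessing.process_pipeline import get_pitch_parselmouth
from utils.hparams import hparams, set_hparams

set_hparams()                                          # populate hparams from the selected yaml config
wav, mel = NsfHifiGAN.wav2spec("example.wav")
f0, coarse = get_pitch_parselmouth(wav, mel, hparams)  # both arrays are aligned to the mel frames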

View File

@ -0,0 +1,224 @@
import json
import logging
import os
import random
from copy import deepcopy
import numpy as np
import yaml
from resemblyzer import VoiceEncoder
from tqdm import tqdm
from infer_tools.f0_static import static_f0_time
from modules.vocoders.nsf_hifigan import NsfHifiGAN
from preprocessing.hubertinfer import HubertEncoder
from preprocessing.process_pipeline import File2Batch
from preprocessing.process_pipeline import get_pitch_parselmouth, get_pitch_crepe
from utils.hparams import hparams
from utils.hparams import set_hparams
from utils.indexed_datasets import IndexedDatasetBuilder
os.environ["OMP_NUM_THREADS"] = "1"
BASE_ITEM_ATTRIBUTES = ['wav_fn', 'spk_id']
class SvcBinarizer:
'''
Base class for data processing.
1. *process* and *process_data_split*:
process the entire dataset and generate the train-test split (supports parallel processing);
2. *process_item*:
process a single piece of data;
3. *get_pitch*:
infer the pitch using some algorithm;
4. *get_align*:
get the alignment using 'mel2ph' format (see https://arxiv.org/abs/1905.09263).
5. phoneme encoder, voice encoder, etc.
Subclasses should define:
1. *load_metadata*:
how to read multiple datasets from files;
2. *train_item_names*, *valid_item_names*, *test_item_names*:
how to split the dataset;
3. load_ph_set:
the phoneme set.
'''
def __init__(self, data_dir=None, item_attributes=None):
self.spk_map = None
self.vocoder = NsfHifiGAN()
self.phone_encoder = HubertEncoder(pt_path=hparams['hubert_path'])
if item_attributes is None:
item_attributes = BASE_ITEM_ATTRIBUTES
if data_dir is None:
data_dir = hparams['raw_data_dir']
if 'speakers' not in hparams:
speakers = hparams['datasets']
hparams['speakers'] = hparams['datasets']
else:
speakers = hparams['speakers']
assert isinstance(speakers, list), 'Speakers must be a list'
assert len(speakers) == len(set(speakers)), 'Speakers cannot contain duplicate names'
self.raw_data_dirs = data_dir if isinstance(data_dir, list) else [data_dir]
assert len(speakers) == len(self.raw_data_dirs), \
'Number of raw data dirs must equal number of speaker names!'
self.speakers = speakers
self.binarization_args = hparams['binarization_args']
self.items = {}
# every item in self.items has some attributes
self.item_attributes = item_attributes
# load each dataset
for ds_id, data_dir in enumerate(self.raw_data_dirs):
self.load_meta_data(data_dir, ds_id)
if ds_id == 0:
# check program correctness
assert all([attr in self.item_attributes for attr in list(self.items.values())[0].keys()])
self.item_names = sorted(list(self.items.keys()))
if self.binarization_args['shuffle']:
random.seed(hparams['seed'])
random.shuffle(self.item_names)
# set default get_pitch algorithm
if hparams['use_crepe']:
self.get_pitch_algorithm = get_pitch_crepe
else:
self.get_pitch_algorithm = get_pitch_parselmouth
print('speakers: ', set(self.speakers))
self._train_item_names, self._test_item_names = self.split_train_test_set(self.item_names)
@staticmethod
def split_train_test_set(item_names):
auto_test = item_names[-5:]
item_names = set(deepcopy(item_names))
if hparams['choose_test_manually']:
prefixes = set([str(pr) for pr in hparams['test_prefixes']])
test_item_names = set()
# Add prefixes that specified speaker index and matches exactly item name to test set
for prefix in deepcopy(prefixes):
if prefix in item_names:
test_item_names.add(prefix)
prefixes.remove(prefix)
# Add prefixes that exactly matches item name without speaker id to test set
for prefix in deepcopy(prefixes):
for name in item_names:
if name.split(':')[-1] == prefix:
test_item_names.add(name)
prefixes.remove(prefix)
# Add names with one of the remaining prefixes to test set
for prefix in deepcopy(prefixes):
for name in item_names:
if name.startswith(prefix):
test_item_names.add(name)
prefixes.remove(prefix)
for prefix in prefixes:
for name in item_names:
if name.split(':')[-1].startswith(prefix):
test_item_names.add(name)
test_item_names = sorted(list(test_item_names))
else:
test_item_names = auto_test
train_item_names = [x for x in item_names if x not in set(test_item_names)]
logging.info("train {}".format(len(train_item_names)))
logging.info("test {}".format(len(test_item_names)))
return train_item_names, test_item_names
@property
def train_item_names(self):
return self._train_item_names
@property
def valid_item_names(self):
return self._test_item_names
@property
def test_item_names(self):
return self._test_item_names
def load_meta_data(self, raw_data_dir, ds_id):
self.items.update(File2Batch.file2temporary_dict(raw_data_dir, ds_id))
@staticmethod
def build_spk_map():
spk_map = {x: i for i, x in enumerate(hparams['speakers'])}
assert len(spk_map) <= hparams['num_spk'], 'Actual number of speakers should be smaller than num_spk!'
return spk_map
def item_name2spk_id(self, item_name):
return self.spk_map[self.items[item_name]['spk_id']]
def meta_data_iterator(self, prefix):
if prefix == 'valid':
item_names = self.valid_item_names
elif prefix == 'test':
item_names = self.test_item_names
else:
item_names = self.train_item_names
for item_name in item_names:
meta_data = self.items[item_name]
yield item_name, meta_data
def process(self):
os.makedirs(hparams['binary_data_dir'], exist_ok=True)
self.spk_map = self.build_spk_map()
print("| spk_map: ", self.spk_map)
spk_map_fn = f"{hparams['binary_data_dir']}/spk_map.json"
json.dump(self.spk_map, open(spk_map_fn, 'w', encoding='utf-8'))
self.process_data_split('valid')
self.process_data_split('test')
self.process_data_split('train')
def process_data_split(self, prefix):
data_dir = hparams['binary_data_dir']
args = []
builder = IndexedDatasetBuilder(f'{data_dir}/{prefix}')
lengths = []
total_sec = 0
if self.binarization_args['with_spk_embed']:
voice_encoder = VoiceEncoder().cuda()
for item_name, meta_data in self.meta_data_iterator(prefix):
args.append([item_name, meta_data, self.binarization_args])
spec_min = []
spec_max = []
f0_dict = {}
# code for single cpu processing
for i in tqdm(reversed(range(len(args))), total=len(args)):
a = args[i]
item = self.process_item(*a)
if item is None:
continue
item['spk_embed'] = voice_encoder.embed_utterance(item['wav']) \
if self.binarization_args['with_spk_embed'] else None
spec_min.append(item['spec_min'])
spec_max.append(item['spec_max'])
f0_dict[item['wav_fn']] = item['f0']
builder.add_item(item)
lengths.append(item['len'])
total_sec += item['sec']
if prefix == 'train':
spec_max = np.max(spec_max, 0)
spec_min = np.min(spec_min, 0)
pitch_time = static_f0_time(f0_dict)
with open(hparams['config_path'], encoding='utf-8') as f:
_hparams = yaml.safe_load(f)
_hparams['spec_max'] = spec_max.tolist()
_hparams['spec_min'] = spec_min.tolist()
if len(self.speakers) == 1:  # only store f0_static for single-speaker datasets
_hparams['f0_static'] = json.dumps(pitch_time)
with open(hparams['config_path'], 'w', encoding='utf-8') as f:
yaml.safe_dump(_hparams, f)
builder.finalize()
np.save(f'{data_dir}/{prefix}_lengths.npy', lengths)
print(f"| {prefix} total duration: {total_sec:.3f}s")
def process_item(self, item_name, meta_data, binarization_args):
from preprocessing.process_pipeline import File2Batch
return File2Batch.temporary_dict2processed_input(item_name, meta_data, self.phone_encoder)
if __name__ == "__main__":
set_hparams()
SvcBinarizer().process()

18
requirements.txt Normal file
View File

@ -0,0 +1,18 @@
setuptools==59.5.0
onnxruntime
torchcrepe
matplotlib
praat-parselmouth==0.4.1
scikit-image
pyyaml
ipython
ipykernel
librosa==0.8.0
pyloudnorm
resemblyzer
torchmetrics==0.5.0
pytorch_lightning==1.3.3
numpy==1.23.0
# pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio==0.10.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
# fairseq

17
run.py Normal file
View File

@ -0,0 +1,17 @@
import importlib
from utils.hparams import set_hparams, hparams
set_hparams(print_hparams=False)
def run_task():
assert hparams['task_cls'] != ''
pkg = ".".join(hparams["task_cls"].split(".")[:-1])
cls_name = hparams["task_cls"].split(".")[-1]
task_cls = getattr(importlib.import_module(pkg), cls_name)
task_cls.start()
if __name__ == '__main__':
run_task()
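run_task resolves hparams['task_cls'] dynamically: the dotted string is split into a module path and a class name, the module is imported with importlib, and task_cls.start() then launches training. Equivalent stripped-down logic; the dotted path below is a stand-in so the snippet runs on its own, not the repo's actual config value:
import importlib

task_cls_str = "collections.OrderedDict"  # stand-in dotted path; the repo's value points at its training task class
pkg, cls_name = task_cls_str.rsplit(".", 1)
task_cls = getattr(importlib.import_module(pkg), cls_name)
print(task_cls)  # <class 'collections.OrderedDict'>; run.py would instead call task_cls.start()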

88
simplify.py Normal file
View File

@ -0,0 +1,88 @@
import os
import re
import shutil
import torch
def get_model_folder(path):
model_lists = os.listdir(path)
res_list = []
filter_list = ["hubert", "xiaoma_pe", "hifigan", "checkpoints", ".yaml", ".zip"]
for path in model_lists:
if not any(word in path for word in filter_list):
res_list.append(path)
return res_list
def scan(path):
model_str = ""
path_lists = get_model_folder(path)
for i in range(0, len(path_lists)):
if re.search(u'[\u4e00-\u9fa5]', path_lists[i]):
print(f'{path_lists[i]}: path contains Chinese characters! Skipping this entry')
continue
model_str += f"{i}:{path_lists[i]} "
if (i + 1) % 5 == 0:
print(f"{model_str}")
model_str = ""
if len(path_lists) % 5 != 0:
print(model_str)
return path_lists
def simplify_pth(model_name, proj_name, output_path):
model_path = f'./checkpoints/{proj_name}'
checkpoint_dict = torch.load(f'{model_path}/{model_name}')
torch.save({'epoch': checkpoint_dict['epoch'],
'state_dict': checkpoint_dict['state_dict'],
'global_step': None,
'checkpoint_callback_best': None,
'optimizer_states': None,
'lr_schedulers': None
}, output_path)
def mkdir(paths: list):
for path in paths:
if not os.path.exists(path):
os.mkdir(path)
if __name__ == '__main__':
if os.path.exists("./checkpoints"):
path_list = scan("./checkpoints")
else:
print("请检查checkpoints文件夹是否存在")
exit()
a = input("\r\n请输入序号并回车:")
project_name = path_list[int(a)]
path_list = scan(f"./checkpoints/{path_list[int(a)]}")
b = input("\r\n请输入序号并回车:")
pth_name = path_list[int(b)]
print("\r\n选择:\r\n"
"0.存储精简模型到对应模型目录(本地精简模型时推荐使用这个)\r\n"
"1.存储精简模型和config.yaml到程序根目录新建文件夹九天毕昇上导出精简模型推荐使用这个\r\n"
"2.复制完整模型和config.yaml到程序根目录新建文件夹九天毕昇上导出完整模型推荐使用这个\r\n"
"输入其他退出")
f = int(input("\r\n请输入序号并回车:"))
if f == 0:
print(f"已保存精简模型至对应模型目录")
shutil.copyfile(f'./checkpoints/{project_name}/config.yaml', f"./{project_name}/config.yaml")
output = f"./checkpoints/{project_name}/clean_{pth_name}"
simplify_pth(pth_name, project_name, output)
elif f == 1:
print(f"已保存精简模型至: 根目录下新建文件夹/{project_name}")
mkdir([f"./{project_name}"])
shutil.copyfile(f'./checkpoints/{project_name}/config.yaml', f"./{project_name}/config.yaml")
output = f"./{project_name}/clean_{pth_name}"
simplify_pth(pth_name, project_name, output)
elif f == 2:
print(f"已保存完整模型至: 根目录下新建文件夹/{project_name}")
mkdir([f"./{project_name}"])
shutil.copyfile(f'./checkpoints/{project_name}/config.yaml', f"./{project_name}/config.yaml")
shutil.copyfile(f'./checkpoints/{project_name}/{pth_name}', f"./{project_name}/{pth_name}")
else:
print("输入错误,程序退出")
exit()

335
training/base_task.py Normal file
View File

@ -0,0 +1,335 @@
import logging
import os
import random
import shutil
import sys
import matplotlib
import numpy as np
import torch.distributed as dist
import torch.utils.data
from pytorch_lightning.loggers import TensorBoardLogger
from torch import nn
import utils
from utils.hparams import hparams, set_hparams
from utils.pl_utils import LatestModelCheckpoint, BaseTrainer, data_loader, DDP
matplotlib.use('Agg')
torch.multiprocessing.set_sharing_strategy(os.getenv('TORCH_SHARE_STRATEGY', 'file_system'))
log_format = '%(asctime)s %(message)s'
logging.basicConfig(stream=sys.stdout, level=logging.INFO,
format=log_format, datefmt='%m/%d %I:%M:%S %p')
class BaseTask(nn.Module):
'''
Base class for training tasks.
1. *load_ckpt*:
load checkpoint;
2. *training_step*:
record and log the loss;
3. *optimizer_step*:
run the backward step;
4. *start*:
load training configs, backup code, log to tensorboard, start training;
5. *configure_ddp* and *init_ddp_connection*:
start parallel training.
Subclasses should define:
1. *build_model*, *build_optimizer*, *build_scheduler*:
how to build the model, the optimizer and the training scheduler;
2. *_training_step*:
one training step of the model;
3. *validation_end* and *_validation_end*:
postprocess the validation output.
'''
def __init__(self, *args, **kwargs):
# dataset configs
super(BaseTask, self).__init__(*args, **kwargs)
self.current_epoch = 0
self.global_step = 0
self.loaded_optimizer_states_dict = {}
self.trainer = None
self.logger = None
self.on_gpu = False
self.use_dp = False
self.use_ddp = False
self.example_input_array = None
self.max_tokens = hparams['max_tokens']
self.max_sentences = hparams['max_sentences']
self.max_eval_tokens = hparams['max_eval_tokens']
if self.max_eval_tokens == -1:
hparams['max_eval_tokens'] = self.max_eval_tokens = self.max_tokens
self.max_eval_sentences = hparams['max_eval_sentences']
if self.max_eval_sentences == -1:
hparams['max_eval_sentences'] = self.max_eval_sentences = self.max_sentences
self.model = None
self.training_losses_meter = None
###########
# Training, validation and testing
###########
def build_model(self):
raise NotImplementedError
def load_ckpt(self, ckpt_base_dir, current_model_name=None, model_name='model', force=True, strict=True):
# This function is updated on 2021.12.13
if current_model_name is None:
current_model_name = model_name
utils.load_ckpt(self.__getattr__(current_model_name), ckpt_base_dir, current_model_name, force, strict)
def on_epoch_start(self):
self.training_losses_meter = {'total_loss': utils.AvgrageMeter()}
def _training_step(self, sample, batch_idx, optimizer_idx):
"""
:param sample:
:param batch_idx:
:return: total loss: torch.Tensor, loss_log: dict
"""
raise NotImplementedError
def training_step(self, sample, batch_idx, optimizer_idx=-1):
loss_ret = self._training_step(sample, batch_idx, optimizer_idx)
self.opt_idx = optimizer_idx
if loss_ret is None:
return {'loss': None}
total_loss, log_outputs = loss_ret
log_outputs = utils.tensors_to_scalars(log_outputs)
for k, v in log_outputs.items():
if k not in self.training_losses_meter:
self.training_losses_meter[k] = utils.AvgrageMeter()
if not np.isnan(v):
self.training_losses_meter[k].update(v)
self.training_losses_meter['total_loss'].update(total_loss.item())
try:
log_outputs['lr'] = self.scheduler.get_lr()
if isinstance(log_outputs['lr'], list):
log_outputs['lr'] = log_outputs['lr'][0]
except:
pass
# log_outputs['all_loss'] = total_loss.item()
progress_bar_log = log_outputs
tb_log = {f'tr/{k}': v for k, v in log_outputs.items()}
return {
'loss': total_loss,
'progress_bar': progress_bar_log,
'log': tb_log
}
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx):
optimizer.step()
optimizer.zero_grad()
if self.scheduler is not None:
self.scheduler.step(self.global_step // hparams['accumulate_grad_batches'])
def on_epoch_end(self):
loss_outputs = {k: round(v.avg, 4) for k, v in self.training_losses_meter.items()}
print(f"\n==============\n "
f"Epoch {self.current_epoch} ended. Steps: {self.global_step}. {loss_outputs}"
f"\n==============\n")
def validation_step(self, sample, batch_idx):
"""
:param sample:
:param batch_idx:
:return: output: dict
"""
raise NotImplementedError
def _validation_end(self, outputs):
"""
:param outputs:
:return: loss_output: dict
"""
raise NotImplementedError
def validation_end(self, outputs):
loss_output = self._validation_end(outputs)
print(f"\n==============\n "
f"valid results: {loss_output}"
f"\n==============\n")
return {
'log': {f'val/{k}': v for k, v in loss_output.items()},
'val_loss': loss_output['total_loss']
}
def build_scheduler(self, optimizer):
raise NotImplementedError
def build_optimizer(self, model):
raise NotImplementedError
def configure_optimizers(self):
optm = self.build_optimizer(self.model)
self.scheduler = self.build_scheduler(optm)
return [optm]
def test_start(self):
pass
def test_step(self, sample, batch_idx):
return self.validation_step(sample, batch_idx)
def test_end(self, outputs):
return self.validation_end(outputs)
###########
# Running configuration
###########
@classmethod
def start(cls):
set_hparams()
os.environ['MASTER_PORT'] = str(random.randint(15000, 30000))
random.seed(hparams['seed'])
np.random.seed(hparams['seed'])
task = cls()
work_dir = hparams['work_dir']
trainer = BaseTrainer(checkpoint_callback=LatestModelCheckpoint(
filepath=work_dir,
verbose=True,
monitor='val_loss',
mode='min',
num_ckpt_keep=hparams['num_ckpt_keep'],
save_best=hparams['save_best'],
period=1 if hparams['save_ckpt'] else 100000
),
logger=TensorBoardLogger(
save_dir=work_dir,
name='lightning_logs',
version='lastest'
),
gradient_clip_val=hparams['clip_grad_norm'],
val_check_interval=hparams['val_check_interval'],
row_log_interval=hparams['log_interval'],
max_updates=hparams['max_updates'],
num_sanity_val_steps=hparams['num_sanity_val_steps'] if not hparams[
'validate'] else 10000,
accumulate_grad_batches=hparams['accumulate_grad_batches'],
use_amp=hparams['use_amp'])
if not hparams['infer']: # train
# Copy spk_map.json to work dir
spk_map = os.path.join(work_dir, 'spk_map.json')
spk_map_orig = os.path.join(hparams['binary_data_dir'], 'spk_map.json')
if not os.path.exists(spk_map) and os.path.exists(spk_map_orig):
shutil.copy(spk_map_orig, spk_map)
print(f"| Copied spk map to {spk_map}.")
trainer.checkpoint_callback.task = task
trainer.fit(task)
else:
trainer.test(task)
@staticmethod
def configure_ddp(model, device_ids):
model = DDP(
model,
device_ids=device_ids,
find_unused_parameters=True
)
if dist.get_rank() != 0 and not hparams['debug']:
sys.stdout = open(os.devnull, "w")
sys.stderr = open(os.devnull, "w")
random.seed(hparams['seed'])
np.random.seed(hparams['seed'])
return model
@staticmethod
def training_end(self, *args, **kwargs):
return None
def init_ddp_connection(self, proc_rank, world_size):
set_hparams(print_hparams=False)
# guarantees unique ports across jobs from same grid search
default_port = 12910
# if user gave a port number, use that one instead
try:
default_port = os.environ['MASTER_PORT']
except Exception:
os.environ['MASTER_PORT'] = str(default_port)
# figure out the root node addr
root_node = '127.0.0.2'
root_node = self.trainer.resolve_root_node_address(root_node)
os.environ['MASTER_ADDR'] = root_node
dist.init_process_group('nccl', rank=proc_rank, world_size=world_size)
@data_loader
def train_dataloader(self):
return None
@data_loader
def test_dataloader(self):
return None
@data_loader
def val_dataloader(self):
return None
def on_load_checkpoint(self, checkpoint):
pass
def on_save_checkpoint(self, checkpoint):
pass
def on_sanity_check_start(self):
pass
def on_train_start(self):
pass
def on_train_end(self):
pass
def on_batch_start(self, batch):
pass
def on_batch_end(self):
pass
def on_pre_performance_check(self):
pass
def on_post_performance_check(self):
pass
def on_before_zero_grad(self, optimizer):
pass
def on_after_backward(self):
pass
@staticmethod
def backward(loss, optimizer):
loss.backward()
def grad_norm(self, norm_type):
results = {}
total_norm = 0
for name, p in self.named_parameters():
if p.requires_grad:
try:
param_norm = p.grad.data.norm(norm_type)
total_norm += param_norm ** norm_type
norm = param_norm ** (1 / norm_type)
grad = round(norm.data.cpu().numpy().flatten()[0], 3)
results['grad_{}_norm_{}'.format(norm_type, name)] = grad
except Exception:
# this param had no grad
pass
total_norm = total_norm ** (1. / norm_type)
grad = round(total_norm.data.cpu().numpy().flatten()[0], 3)
results['grad_{}_norm_total'.format(norm_type)] = grad
return results
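# A minimal subclass sketch (illustration only, not used anywhere in the repo): it fills in
# the hooks listed in the BaseTask docstring. The toy linear model and the 'lr' / 'decay_steps'
# defaults below are assumptions made for the example, not values this repo requires.
class _ExampleTask(BaseTask):
    def build_model(self):
        self.model = nn.Linear(80, 80)
        return self.model

    def build_optimizer(self, model):
        return torch.optim.AdamW(model.parameters(), lr=hparams.get('lr', 1e-4))

    def build_scheduler(self, optimizer):
        return torch.optim.lr_scheduler.StepLR(optimizer, hparams.get('decay_steps', 50000), gamma=0.5)

    def _training_step(self, sample, batch_idx, optimizer_idx):
        # one training step: predict, compute an L1 loss, return (total_loss, loss_log)
        pred = self.model(sample['mels'])
        loss = (pred - sample['mels']).abs().mean()
        return loss, {'l1': loss}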

482
training/svc_task.py Normal file
View File

@ -0,0 +1,482 @@
import os
from multiprocessing.pool import Pool
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.distributed as dist
import torch.distributions
import torch.nn.functional as F
import torch.optim
import torch.utils.data
from tqdm import tqdm
import utils
from modules.commons.ssim import ssim
from modules.diff.diffusion import GaussianDiffusion
from modules.diff.net import DiffNet
from modules.vocoders.nsf_hifigan import NsfHifiGAN, nsf_hifigan
from preprocessing.hubertinfer import HubertEncoder
from preprocessing.process_pipeline import get_pitch_parselmouth
from training.base_task import BaseTask
from utils import audio
from utils.hparams import hparams
from utils.pitch_utils import denorm_f0
from utils.pl_utils import data_loader
from utils.plot import spec_to_figure, f0_to_figure
from utils.svc_utils import SvcDataset
matplotlib.use('Agg')
DIFF_DECODERS = {
'wavenet': lambda hp: DiffNet(hp['audio_num_mel_bins'])
}
class SvcTask(BaseTask):
def __init__(self):
super(SvcTask, self).__init__()
self.vocoder = NsfHifiGAN()
self.phone_encoder = HubertEncoder(hparams['hubert_path'])
self.saving_result_pool = None
self.saving_results_futures = None
self.stats = {}
self.dataset_cls = SvcDataset
self.mse_loss_fn = torch.nn.MSELoss()
mel_losses = hparams['mel_loss'].split("|")
self.loss_and_lambda = {}
for i, l in enumerate(mel_losses):
if l == '':
continue
if ':' in l:
l, lbd = l.split(":")
lbd = float(lbd)
else:
lbd = 1.0
self.loss_and_lambda[l] = lbd
print("| Mel losses:", self.loss_and_lambda)
def build_dataloader(self, dataset, shuffle, max_tokens=None, max_sentences=None,
required_batch_size_multiple=-1, endless=False, batch_by_size=True):
devices_cnt = torch.cuda.device_count()
if devices_cnt == 0:
devices_cnt = 1
if required_batch_size_multiple == -1:
required_batch_size_multiple = devices_cnt
def shuffle_batches(batches):
np.random.shuffle(batches)
return batches
if max_tokens is not None:
max_tokens *= devices_cnt
if max_sentences is not None:
max_sentences *= devices_cnt
indices = dataset.ordered_indices()
if batch_by_size:
batch_sampler = utils.batch_by_size(
indices, dataset.num_tokens, max_tokens=max_tokens, max_sentences=max_sentences,
required_batch_size_multiple=required_batch_size_multiple,
)
else:
batch_sampler = []
for i in range(0, len(indices), max_sentences):
batch_sampler.append(indices[i:i + max_sentences])
if shuffle:
batches = shuffle_batches(list(batch_sampler))
if endless:
batches = [b for _ in range(1000) for b in shuffle_batches(list(batch_sampler))]
else:
batches = batch_sampler
if endless:
batches = [b for _ in range(1000) for b in batches]
num_workers = dataset.num_workers
if self.trainer.use_ddp:
num_replicas = dist.get_world_size()
rank = dist.get_rank()
batches = [x[rank::num_replicas] for x in batches if len(x) % num_replicas == 0]
return torch.utils.data.DataLoader(dataset,
collate_fn=dataset.collater,
batch_sampler=batches,
num_workers=num_workers,
pin_memory=False)
def test_start(self):
self.saving_result_pool = Pool(8)
self.saving_results_futures = []
self.vocoder = nsf_hifigan
def test_end(self, outputs):
self.saving_result_pool.close()
[f.get() for f in tqdm(self.saving_results_futures)]
self.saving_result_pool.join()
return {}
@data_loader
def train_dataloader(self):
train_dataset = self.dataset_cls(hparams['train_set_name'], shuffle=True)
return self.build_dataloader(train_dataset, True, self.max_tokens, self.max_sentences,
endless=hparams['endless_ds'])
@data_loader
def val_dataloader(self):
valid_dataset = self.dataset_cls(hparams['valid_set_name'], shuffle=False)
return self.build_dataloader(valid_dataset, False, self.max_eval_tokens, self.max_eval_sentences)
@data_loader
def test_dataloader(self):
test_dataset = self.dataset_cls(hparams['test_set_name'], shuffle=False)
return self.build_dataloader(test_dataset, False, self.max_eval_tokens,
self.max_eval_sentences, batch_by_size=False)
def build_model(self):
self.build_tts_model()
if hparams['load_ckpt'] != '':
self.load_ckpt(hparams['load_ckpt'], strict=True)
utils.print_arch(self.model)
return self.model
def build_tts_model(self):
mel_bins = hparams['audio_num_mel_bins']
self.model = GaussianDiffusion(
phone_encoder=self.phone_encoder,
out_dims=mel_bins, denoise_fn=DIFF_DECODERS[hparams['diff_decoder_type']](hparams),
timesteps=hparams['timesteps'],
K_step=hparams['K_step'],
loss_type=hparams['diff_loss_type'],
spec_min=hparams['spec_min'], spec_max=hparams['spec_max'],
)
def build_optimizer(self, model):
self.optimizer = optimizer = torch.optim.AdamW(
filter(lambda p: p.requires_grad, model.parameters()),
lr=hparams['lr'],
betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
weight_decay=hparams['weight_decay'])
return optimizer
@staticmethod
def run_model(model, sample, return_output=False, infer=False):
'''
steps:
1. run the full model, calc the main loss
2. calculate loss for dur_predictor, pitch_predictor, energy_predictor
'''
hubert = sample['hubert'] # [B, T_t,H]
target = sample['mels'] # [B, T_s, 80]
mel2ph = sample['mel2ph'] # [B, T_s]
f0 = sample['f0']
energy = sample.get('energy')
spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
output = model(hubert, mel2ph=mel2ph, spk_embed_id=spk_embed, ref_mels=target, f0=f0, energy=energy,
infer=infer)
losses = {}
if 'diff_loss' in output:
losses['mel'] = output['diff_loss']
if not return_output:
return losses
else:
return losses, output
def build_scheduler(self, optimizer):
return torch.optim.lr_scheduler.StepLR(optimizer, hparams['decay_steps'], gamma=0.5)
def _training_step(self, sample, batch_idx, _):
log_outputs = self.run_model(self.model, sample)
total_loss = sum([v for v in log_outputs.values() if isinstance(v, torch.Tensor) and v.requires_grad])
log_outputs['batch_size'] = sample['hubert'].size()[0]
log_outputs['lr'] = self.scheduler.get_lr()[0]
return total_loss, log_outputs
def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx, use_amp, scaler):
if optimizer is None:
return
if use_amp:
scaler.step(optimizer)
scaler.update()
else:
optimizer.step()
optimizer.zero_grad()
if self.scheduler is not None:
self.scheduler.step(self.global_step // hparams['accumulate_grad_batches'])
def validation_step(self, sample, batch_idx):
outputs = {}
hubert = sample['hubert'] # [B, T_t]
energy = sample.get('energy')
spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
mel2ph = sample['mel2ph']
outputs['losses'] = {}
outputs['losses'], model_out = self.run_model(self.model, sample, return_output=True, infer=False)
outputs['total_loss'] = sum(outputs['losses'].values())
outputs['nsamples'] = sample['nsamples']
outputs = utils.tensors_to_scalars(outputs)
if batch_idx < hparams['num_valid_plots']:
model_out = self.model(
hubert, spk_embed_id=spk_embed, mel2ph=mel2ph, f0=sample['f0'], energy=energy, ref_mels=None, infer=True
)
gt_f0 = denorm_f0(sample['f0'], sample['uv'], hparams)
pred_f0 = model_out.get('f0_denorm')
self.plot_wav(batch_idx, sample['mels'], model_out['mel_out'], is_mel=True, gt_f0=gt_f0, f0=pred_f0)
self.plot_mel(batch_idx, sample['mels'], model_out['mel_out'], name=f'diffmel_{batch_idx}')
if hparams['use_pitch_embed']:
self.plot_pitch(batch_idx, sample, model_out)
return outputs
def _validation_end(self, outputs):
all_losses_meter = {
'total_loss': utils.AvgrageMeter(),
}
for output in outputs:
n = output['nsamples']
for k, v in output['losses'].items():
if k not in all_losses_meter:
all_losses_meter[k] = utils.AvgrageMeter()
all_losses_meter[k].update(v, n)
all_losses_meter['total_loss'].update(output['total_loss'], n)
return {k: round(v.avg, 4) for k, v in all_losses_meter.items()}
############
# losses
############
def add_mel_loss(self, mel_out, target, losses, postfix='', mel_mix_loss=None):
if mel_mix_loss is None:
for loss_name, lbd in self.loss_and_lambda.items():
if 'l1' == loss_name:
l = self.l1_loss(mel_out, target)
elif 'mse' == loss_name:
raise NotImplementedError
elif 'ssim' == loss_name:
l = self.ssim_loss(mel_out, target)
elif 'gdl' == loss_name:
raise NotImplementedError
losses[f'{loss_name}{postfix}'] = l * lbd
else:
raise NotImplementedError
def l1_loss(self, decoder_output, target):
# decoder_output : B x T x n_mel
# target : B x T x n_mel
l1_loss = F.l1_loss(decoder_output, target, reduction='none')
weights = self.weights_nonzero_speech(target)
l1_loss = (l1_loss * weights).sum() / weights.sum()
return l1_loss
def ssim_loss(self, decoder_output, target, bias=6.0):
# decoder_output : B x T x n_mel
# target : B x T x n_mel
assert decoder_output.shape == target.shape
weights = self.weights_nonzero_speech(target)
decoder_output = decoder_output[:, None] + bias
target = target[:, None] + bias
ssim_loss = 1 - ssim(decoder_output, target, size_average=False)
ssim_loss = (ssim_loss * weights).sum() / weights.sum()
return ssim_loss
def add_pitch_loss(self, output, sample, losses):
if hparams['pitch_type'] == 'ph':
nonpadding = (sample['txt_tokens'] != 0).float()
pitch_loss_fn = F.l1_loss if hparams['pitch_loss'] == 'l1' else F.mse_loss
losses['f0'] = (pitch_loss_fn(output['pitch_pred'][:, :, 0], sample['f0'],
reduction='none') * nonpadding).sum() \
/ nonpadding.sum() * hparams['lambda_f0']
return
mel2ph = sample['mel2ph'] # [B, T_s]
f0 = sample['f0']
uv = sample['uv']
nonpadding = (mel2ph != 0).float()
if hparams['pitch_type'] == 'frame':
self.add_f0_loss(output['pitch_pred'], f0, uv, losses, nonpadding=nonpadding)
@staticmethod
def add_f0_loss(p_pred, f0, uv, losses, nonpadding):
assert p_pred[..., 0].shape == f0.shape
if hparams['use_uv']:
assert p_pred[..., 1].shape == uv.shape
losses['uv'] = (F.binary_cross_entropy_with_logits(
p_pred[:, :, 1], uv, reduction='none') * nonpadding).sum() \
/ nonpadding.sum() * hparams['lambda_uv']
nonpadding = nonpadding * (uv == 0).float()
f0_pred = p_pred[:, :, 0]
if hparams['pitch_loss'] in ['l1', 'l2']:
pitch_loss_fn = F.l1_loss if hparams['pitch_loss'] == 'l1' else F.mse_loss
losses['f0'] = (pitch_loss_fn(f0_pred, f0, reduction='none') * nonpadding).sum() \
/ nonpadding.sum() * hparams['lambda_f0']
elif hparams['pitch_loss'] == 'ssim':
raise NotImplementedError
@staticmethod
def add_energy_loss(energy_pred, energy, losses):
nonpadding = (energy != 0).float()
loss = (F.mse_loss(energy_pred, energy, reduction='none') * nonpadding).sum() / nonpadding.sum()
loss = loss * hparams['lambda_energy']
losses['e'] = loss
############
# validation plots
############
def plot_mel(self, batch_idx, spec, spec_out, name=None):
spec_cat = torch.cat([spec, spec_out], -1)
name = f'mel_{batch_idx}' if name is None else name
vmin = hparams['mel_vmin']
vmax = hparams['mel_vmax']
self.logger.experiment.add_figure(name, spec_to_figure(spec_cat[0], vmin, vmax), self.global_step)
def plot_pitch(self, batch_idx, sample, model_out):
f0 = sample['f0']
if hparams['pitch_type'] == 'ph':
mel2ph = sample['mel2ph']
f0 = self.expand_f0_ph(f0, mel2ph)
f0_pred = self.expand_f0_ph(model_out['pitch_pred'][:, :, 0], mel2ph)
self.logger.experiment.add_figure(
f'f0_{batch_idx}', f0_to_figure(f0[0], None, f0_pred[0]), self.global_step)
return
f0 = denorm_f0(f0, sample['uv'], hparams)
if hparams['pitch_type'] == 'frame':
pitch_pred = denorm_f0(model_out['pitch_pred'][:, :, 0], sample['uv'], hparams)
self.logger.experiment.add_figure(
f'f0_{batch_idx}', f0_to_figure(f0[0], None, pitch_pred[0]), self.global_step)
def plot_wav(self, batch_idx, gt_wav, wav_out, is_mel=False, gt_f0=None, f0=None, name=None):
gt_wav = gt_wav[0].cpu().numpy()
wav_out = wav_out[0].cpu().numpy()
gt_f0 = gt_f0[0].cpu().numpy()
f0 = f0[0].cpu().numpy()
if is_mel:
gt_wav = self.vocoder.spec2wav(gt_wav, f0=gt_f0)
wav_out = self.vocoder.spec2wav(wav_out, f0=f0)
self.logger.experiment.add_audio(f'gt_{batch_idx}', gt_wav, sample_rate=hparams['audio_sample_rate'],
global_step=self.global_step)
self.logger.experiment.add_audio(f'wav_{batch_idx}', wav_out, sample_rate=hparams['audio_sample_rate'],
global_step=self.global_step)
############
# infer
############
def test_step(self, sample, batch_idx):
spk_embed = sample.get('spk_embed') if not hparams['use_spk_id'] else sample.get('spk_ids')
hubert = sample['hubert']
ref_mels = None
mel2ph = sample['mel2ph']
f0 = sample['f0']
outputs = self.model(hubert, spk_embed_id=spk_embed, mel2ph=mel2ph, f0=f0, ref_mels=ref_mels, infer=True)
sample['outputs'] = outputs['mel_out']
sample['mel2ph_pred'] = outputs['mel2ph']
sample['f0'] = denorm_f0(sample['f0'], sample['uv'], hparams)
sample['f0_pred'] = outputs.get('f0_denorm')
return self.after_infer(sample)
def after_infer(self, predictions):
if self.saving_result_pool is None and not hparams['profile_infer']:
self.saving_result_pool = Pool(min(int(os.getenv('N_PROC', os.cpu_count())), 16))
self.saving_results_futures = []
predictions = utils.unpack_dict_to_list(predictions)
t = tqdm(predictions)
for num_predictions, prediction in enumerate(t):
for k, v in prediction.items():
if type(v) is torch.Tensor:
prediction[k] = v.cpu().numpy()
item_name = prediction.get('item_name')
# remove paddings
mel_gt = prediction["mels"]
mel_gt_mask = np.abs(mel_gt).sum(-1) > 0
mel_gt = mel_gt[mel_gt_mask]
mel_pred = prediction["outputs"]
mel_pred_mask = np.abs(mel_pred).sum(-1) > 0
mel_pred = mel_pred[mel_pred_mask]
mel_gt = np.clip(mel_gt, hparams['mel_vmin'], hparams['mel_vmax'])
mel_pred = np.clip(mel_pred, hparams['mel_vmin'], hparams['mel_vmax'])
f0_gt = prediction.get("f0")
f0_pred = f0_gt
if f0_pred is not None:
f0_gt = f0_gt[mel_gt_mask]
if len(f0_pred) > len(mel_pred_mask):
f0_pred = f0_pred[:len(mel_pred_mask)]
f0_pred = f0_pred[mel_pred_mask]
gen_dir = os.path.join(hparams['work_dir'],
f'generated_{self.trainer.global_step}_{hparams["gen_dir_name"]}')
wav_pred = self.vocoder.spec2wav(mel_pred, f0=f0_pred)
if not hparams['profile_infer']:
os.makedirs(gen_dir, exist_ok=True)
os.makedirs(f'{gen_dir}/wavs', exist_ok=True)
os.makedirs(f'{gen_dir}/plot', exist_ok=True)
os.makedirs(os.path.join(hparams['work_dir'], 'P_mels_npy'), exist_ok=True)
os.makedirs(os.path.join(hparams['work_dir'], 'G_mels_npy'), exist_ok=True)
self.saving_results_futures.append(
self.saving_result_pool.apply_async(self.save_result, args=[
wav_pred, mel_pred, 'P', item_name, gen_dir]))
if mel_gt is not None and hparams['save_gt']:
wav_gt = self.vocoder.spec2wav(mel_gt, f0=f0_gt)
self.saving_results_futures.append(
self.saving_result_pool.apply_async(self.save_result, args=[
wav_gt, mel_gt, 'G', item_name, gen_dir]))
if hparams['save_f0']:
f0_pred_ = f0_pred
f0_gt_, _ = get_pitch_parselmouth(wav_gt, mel_gt, hparams)
fig = plt.figure()
plt.plot(f0_pred_, label=r'$f0_P$')
plt.plot(f0_gt_, label=r'$f0_G$')
plt.legend()
plt.tight_layout()
plt.savefig(f'{gen_dir}/plot/[F0][{item_name}].png', format='png')
plt.close(fig)
t.set_description(
f"Pred_shape: {mel_pred.shape}, gt_shape: {mel_gt.shape}")
else:
if 'gen_wav_time' not in self.stats:
self.stats['gen_wav_time'] = 0
self.stats['gen_wav_time'] += len(wav_pred) / hparams['audio_sample_rate']
print('gen_wav_time: ', self.stats['gen_wav_time'])
return {}
@staticmethod
def save_result(wav_out, mel, prefix, item_name, gen_dir):
item_name = item_name.replace('/', '-')
base_fn = f'[{item_name}][{prefix}]'
base_fn += ('-' + hparams['exp_name'])
np.save(os.path.join(hparams['work_dir'], f'{prefix}_mels_npy', item_name), mel)
audio.save_wav(wav_out, f'{gen_dir}/wavs/{base_fn}.wav', 24000, # hparams['audio_sample_rate'],
norm=hparams['out_wav_norm'])
fig = plt.figure(figsize=(14, 10))
spec_vmin = hparams['mel_vmin']
spec_vmax = hparams['mel_vmax']
heatmap = plt.pcolor(mel.T, vmin=spec_vmin, vmax=spec_vmax)
fig.colorbar(heatmap)
f0, _ = get_pitch_parselmouth(wav_out, mel, hparams)
f0 = (f0 - 100) / (800 - 100) * 80 * (f0 > 0)
plt.plot(f0, c='white', linewidth=1, alpha=0.6)
plt.tight_layout()
plt.savefig(f'{gen_dir}/plot/{base_fn}.png', format='png', dpi=1000)
plt.close(fig)
##############
# utils
##############
@staticmethod
def expand_f0_ph(f0, mel2ph):
f0 = denorm_f0(f0, None, hparams)
f0 = F.pad(f0, [1, 0])
f0 = torch.gather(f0, 1, mel2ph) # [B, T_mel]
return f0
@staticmethod
def weights_nonzero_speech(target):
# target : B x T x mel
# Assign weight 1.0 to all labels except for padding (id=0).
dim = target.size(-1)
return target.abs().sum(-1, keepdim=True).ne(0).float().repeat(1, 1, dim)
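# Shape sanity-check sketch (illustration only, guarded so importing this module is unaffected):
# all-zero mel frames (padding) get weight 0 and real frames get weight 1, matching the shape
# comments in l1_loss / ssim_loss above.
if __name__ == '__main__':
    _mel = torch.zeros(1, 4, 80)
    _mel[:, :2] = 1.0  # two real frames followed by two padded frames
    _w = SvcTask.weights_nonzero_speech(_mel)
    assert _w.shape == (1, 4, 80) and bool(_w[:, :2].all()) and not bool(_w[:, 2:].any())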

218
training/train_pipeline.py Normal file
View File

@ -0,0 +1,218 @@
import torch
from torch.nn import functional as F
from utils.hparams import hparams
from utils.pitch_utils import f0_to_coarse, denorm_f0
class Batch2Loss:
'''
pipeline: batch -> insert1 -> module1 -> insert2 -> module2 -> insert3 -> module3 -> insert4 -> module4 -> loss
(a minimal shape sketch for insert1 appears at the end of this file)
'''
@staticmethod
def insert1(pitch_midi, midi_dur, is_slur, # variables
midi_embed, midi_dur_layer, is_slur_embed): # modules
'''
add embeddings for midi, midi_dur, slur
'''
midi_embedding = midi_embed(pitch_midi)
midi_dur_embedding, slur_embedding = 0, 0
if midi_dur is not None:
midi_dur_embedding = midi_dur_layer(midi_dur[:, :, None]) # [B, T, 1] -> [B, T, H]
if is_slur is not None:
slur_embedding = is_slur_embed(is_slur)
return midi_embedding, midi_dur_embedding, slur_embedding
@staticmethod
def module1(fs2_encoder, # modules
txt_tokens, midi_embedding, midi_dur_embedding, slur_embedding): # variables
'''
get *encoder_out* == fs2_encoder(*txt_tokens*, some embeddings)
'''
encoder_out = fs2_encoder(txt_tokens, midi_embedding, midi_dur_embedding, slur_embedding)
return encoder_out
@staticmethod
def insert2(encoder_out, spk_embed_id, spk_embed_dur_id, spk_embed_f0_id, src_nonpadding, # variables
spk_embed_proj): # modules
'''
1. add embeddings for spk, spk_dur, spk_f0
2. get *dur_inp* ~= *encoder_out* + *spk_embed_dur*
'''
# add ref style embed
# Not implemented
# variance encoder
var_embed = 0
# encoder_out_dur denotes encoder outputs for duration predictor
# in speech adaptation, duration predictor use old speaker embedding
if hparams['use_spk_id']:
spk_embed = spk_embed_proj(spk_embed_id)[:, None, :]
spk_embed_dur = spk_embed_f0 = spk_embed
else:
spk_embed_dur = spk_embed_f0 = spk_embed = 0
# add dur
dur_inp = (encoder_out + var_embed + spk_embed_dur) * src_nonpadding
return var_embed, spk_embed, spk_embed_dur, spk_embed_f0, dur_inp
@staticmethod
def module2(dur_predictor, length_regulator, # modules
dur_input, mel2ph, txt_tokens, all_vowel_tokens, ret, midi_dur=None): # variables
'''
1. get *dur* ~= dur_predictor(*dur_inp*)
2. (if mel2ph is None): get *mel2ph* ~= length_regulator(*dur*)
'''
src_padding = (txt_tokens == 0)
dur_input = dur_input.detach() + hparams['predictor_grad'] * (dur_input - dur_input.detach())
if mel2ph is None:
dur, xs = dur_predictor.inference(dur_input, src_padding)
ret['dur'] = xs
dur = xs.squeeze(-1).exp() - 1.0
for i in range(len(dur)):
for j in range(len(dur[i])):
if txt_tokens[i, j] in all_vowel_tokens:
if j < len(dur[i]) - 1 and txt_tokens[i, j + 1] not in all_vowel_tokens:
dur[i, j] = midi_dur[i, j] - dur[i, j + 1]
if dur[i, j] < 0:
dur[i, j] = 0
dur[i, j + 1] = midi_dur[i, j]
else:
dur[i, j] = midi_dur[i, j]
dur[:, 0] = dur[:, 0] + 0.5
dur_acc = F.pad(torch.round(torch.cumsum(dur, axis=1)), (1, 0))
dur = torch.clamp(dur_acc[:, 1:] - dur_acc[:, :-1], min=0).long()
ret['dur_choice'] = dur
mel2ph = length_regulator(dur, src_padding).detach()
else:
ret['dur'] = dur_predictor(dur_input, src_padding)
ret['mel2ph'] = mel2ph
return mel2ph
@staticmethod
def insert3(encoder_out, mel2ph, var_embed, spk_embed_f0, src_nonpadding, tgt_nonpadding): # variables
'''
1. get *decoder_inp* ~= gather *encoder_out* according to *mel2ph*
2. get *pitch_inp* ~= *decoder_inp* + *spk_embed_f0*
3. get *pitch_inp_ph* ~= *encoder_out* + *spk_embed_f0*
'''
decoder_inp = F.pad(encoder_out, [0, 0, 1, 0])
mel2ph_ = mel2ph[..., None].repeat([1, 1, encoder_out.shape[-1]])
decoder_inp = decoder_inp_origin = torch.gather(decoder_inp, 1, mel2ph_) # [B, T, H]
pitch_inp = (decoder_inp_origin + var_embed + spk_embed_f0) * tgt_nonpadding
pitch_inp_ph = (encoder_out + var_embed + spk_embed_f0) * src_nonpadding
return decoder_inp, pitch_inp, pitch_inp_ph
@staticmethod
def module3(pitch_predictor, pitch_embed, energy_predictor, energy_embed, # modules
pitch_inp, pitch_inp_ph, f0, uv, energy, mel2ph, is_training, ret): # variables
'''
1. get *ret['pitch_pred']*, *ret['energy_pred']* ~= pitch_predictor(*pitch_inp*), energy_predictor(*pitch_inp*)
2. get *pitch_embedding* ~= pitch_embed(f0_to_coarse(denorm_f0(*f0* or *pitch_pred*))
3. get *energy_embedding* ~= energy_embed(energy_to_coarse(*energy* or *energy_pred*))
'''
def add_pitch(decoder_inp, f0, uv, mel2ph, ret, encoder_out=None):
if hparams['pitch_type'] == 'ph':
pitch_pred_inp = encoder_out.detach() + hparams['predictor_grad'] * (encoder_out - encoder_out.detach())
pitch_padding = (encoder_out.sum().abs() == 0)
ret['pitch_pred'] = pitch_pred = pitch_predictor(pitch_pred_inp)
if f0 is None:
f0 = pitch_pred[:, :, 0]
ret['f0_denorm'] = f0_denorm = denorm_f0(f0, None, hparams, pitch_padding=pitch_padding)
pitch = f0_to_coarse(f0_denorm) # start from 0 [B, T_txt]
pitch = F.pad(pitch, [1, 0])
pitch = torch.gather(pitch, 1, mel2ph) # [B, T_mel]
pitch_embedding = pitch_embed(pitch)
return pitch_embedding
decoder_inp = decoder_inp.detach() + hparams['predictor_grad'] * (decoder_inp - decoder_inp.detach())
pitch_padding = (mel2ph == 0)
if hparams['pitch_ar']:
ret['pitch_pred'] = pitch_pred = pitch_predictor(decoder_inp, f0 if is_training else None)
if f0 is None:
f0 = pitch_pred[:, :, 0]
else:
ret['pitch_pred'] = pitch_pred = pitch_predictor(decoder_inp)
if f0 is None:
f0 = pitch_pred[:, :, 0]
if hparams['use_uv'] and uv is None:
uv = pitch_pred[:, :, 1] > 0
ret['f0_denorm'] = f0_denorm = denorm_f0(f0, uv, hparams, pitch_padding=pitch_padding)
if pitch_padding is not None:
f0[pitch_padding] = 0
pitch = f0_to_coarse(f0_denorm) # start from 0
pitch_embedding = pitch_embed(pitch)
return pitch_embedding
def add_energy(decoder_inp, energy, ret):
decoder_inp = decoder_inp.detach() + hparams['predictor_grad'] * (decoder_inp - decoder_inp.detach())
ret['energy_pred'] = energy_pred = energy_predictor(decoder_inp)[:, :, 0]
if energy is None:
energy = energy_pred
energy = torch.clamp(energy * 256 // 4, max=255).long() # energy_to_coarse
energy_embedding = energy_embed(energy)
return energy_embedding
# add pitch and energy embed
nframes = mel2ph.size(1)
pitch_embedding = 0
if hparams['use_pitch_embed']:
if f0 is not None:
delta_l = nframes - f0.size(1)
if delta_l > 0:
f0 = torch.cat((f0, torch.FloatTensor([[x[-1]] * delta_l for x in f0]).to(f0.device)), 1)
f0 = f0[:, :nframes]
if uv is not None:
delta_l = nframes - uv.size(1)
if delta_l > 0:
uv = torch.cat((uv, torch.FloatTensor([[x[-1]] * delta_l for x in uv]).to(uv.device)), 1)
uv = uv[:, :nframes]
pitch_embedding = add_pitch(pitch_inp, f0, uv, mel2ph, ret, encoder_out=pitch_inp_ph)
energy_embedding = 0
if hparams['use_energy_embed']:
if energy is not None:
delta_l = nframes - energy.size(1)
if delta_l > 0:
energy = torch.cat(
(energy, torch.FloatTensor([[x[-1]] * delta_l for x in energy]).to(energy.device)), 1)
energy = energy[:, :nframes]
energy_embedding = add_energy(pitch_inp, energy, ret)
return pitch_embedding, energy_embedding
@staticmethod
def insert4(decoder_inp, pitch_embedding, energy_embedding, spk_embed, ret, tgt_nonpadding):
'''
*decoder_inp* ~= *decoder_inp* + embeddings for spk, pitch, energy
'''
ret['decoder_inp'] = decoder_inp = (
decoder_inp + pitch_embedding + energy_embedding + spk_embed) * tgt_nonpadding
return decoder_inp
@staticmethod
def module4(diff_main_loss, # modules
norm_spec, decoder_inp_t, ret, K_step, batch_size, device): # variables
'''
training diffusion using spec as input and decoder_inp as condition.
Args:
norm_spec: (normalized) spec
decoder_inp_t: (transposed) decoder_inp
Returns:
ret['diff_loss']
'''
t = torch.randint(0, K_step, (batch_size,), device=device).long()
norm_spec = norm_spec.transpose(1, 2)[:, None, :, :] # [B, 1, M, T]
ret['diff_loss'] = diff_main_loss(norm_spec, t, cond=decoder_inp_t)
# nonpadding = (mel2ph != 0).float()
# ret['diff_loss'] = self.p_losses(x, t, cond, nonpadding=nonpadding)
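# A minimal shape sketch for the first pipeline stage, insert1 (illustration only; the
# vocabulary and hidden sizes below are arbitrary, and the block is guarded so importing
# this module is unaffected).
if __name__ == '__main__':
    _B, _T, _H = 2, 8, 16
    _midi_embed = torch.nn.Embedding(128, _H)
    _midi_dur_layer = torch.nn.Linear(1, _H)
    _is_slur_embed = torch.nn.Embedding(2, _H)
    _m, _d, _s = Batch2Loss.insert1(torch.randint(0, 128, (_B, _T)),
                                    torch.rand(_B, _T),
                                    torch.randint(0, 2, (_B, _T)),
                                    _midi_embed, _midi_dur_layer, _is_slur_embed)
    print(_m.shape, _d.shape, _s.shape)  # each is [B, T, H]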

250
utils/__init__.py Normal file
View File

@ -0,0 +1,250 @@
import glob
import logging
import re
import time
from collections import defaultdict
import os
import sys
import shutil
import types
import numpy as np
import torch
import torch.nn.functional as F
import torch.distributed as dist
from torch import nn
def tensors_to_scalars(metrics):
new_metrics = {}
for k, v in metrics.items():
if isinstance(v, torch.Tensor):
v = v.item()
if type(v) is dict:
v = tensors_to_scalars(v)
new_metrics[k] = v
return new_metrics
class AvgrageMeter(object):
def __init__(self):
self.reset()
def reset(self):
self.avg = 0
self.sum = 0
self.cnt = 0
def update(self, val, n=1):
self.sum += val * n
self.cnt += n
self.avg = self.sum / self.cnt
def collate_1d(values, pad_idx=0, left_pad=False, shift_right=False, max_len=None, shift_id=1):
"""Convert a list of 1d tensors into a padded 2d tensor."""
size = max(v.size(0) for v in values) if max_len is None else max_len
res = values[0].new(len(values), size).fill_(pad_idx)
def copy_tensor(src, dst):
assert dst.numel() == src.numel()
if shift_right:
dst[1:] = src[:-1]
dst[0] = shift_id
else:
dst.copy_(src)
for i, v in enumerate(values):
copy_tensor(v, res[i][size - len(v):] if left_pad else res[i][:len(v)])
return res
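# Example (comments only):
#   collate_1d([torch.LongTensor([1, 2, 3]), torch.LongTensor([4])])
#   -> tensor([[1, 2, 3],
#              [4, 0, 0]])   # the shorter item is right-padded with pad_idx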
def collate_2d(values, pad_idx=0, left_pad=False, shift_right=False, max_len=None):
"""Convert a list of 2d tensors into a padded 3d tensor."""
size = max(v.size(0) for v in values) if max_len is None else max_len
res = values[0].new(len(values), size, values[0].shape[1]).fill_(pad_idx)
def copy_tensor(src, dst):
assert dst.numel() == src.numel()
if shift_right:
dst[1:] = src[:-1]
else:
dst.copy_(src)
for i, v in enumerate(values):
copy_tensor(v, res[i][size - len(v):] if left_pad else res[i][:len(v)])
return res
def _is_batch_full(batch, num_tokens, max_tokens, max_sentences):
if len(batch) == 0:
return 0
if len(batch) == max_sentences:
return 1
if num_tokens > max_tokens:
return 1
return 0
def batch_by_size(
indices, num_tokens_fn, max_tokens=None, max_sentences=None,
required_batch_size_multiple=1, distributed=False
):
"""
Return mini-batches of indices bucketed by size. Batches may contain
sequences of different lengths.
Args:
indices (List[int]): ordered list of dataset indices
num_tokens_fn (callable): function that returns the number of tokens at
a given index
max_tokens (int, optional): max number of tokens in each batch
(default: None).
max_sentences (int, optional): max number of sentences in each
batch (default: None).
required_batch_size_multiple (int, optional): require batch size to
be a multiple of N (default: 1).
"""
max_tokens = max_tokens if max_tokens is not None else sys.maxsize
max_sentences = max_sentences if max_sentences is not None else sys.maxsize
bsz_mult = required_batch_size_multiple
if isinstance(indices, types.GeneratorType):
indices = np.fromiter(indices, dtype=np.int64, count=-1)
sample_len = 0
sample_lens = []
batch = []
batches = []
for i in range(len(indices)):
idx = indices[i]
num_tokens = num_tokens_fn(idx)
sample_lens.append(num_tokens)
sample_len = max(sample_len, num_tokens)
assert sample_len <= max_tokens, (
"sentence at index {} of size {} exceeds max_tokens "
"limit of {}!".format(idx, sample_len, max_tokens)
)
num_tokens = (len(batch) + 1) * sample_len
if _is_batch_full(batch, num_tokens, max_tokens, max_sentences):
mod_len = max(
bsz_mult * (len(batch) // bsz_mult),
len(batch) % bsz_mult,
)
batches.append(batch[:mod_len])
batch = batch[mod_len:]
sample_lens = sample_lens[mod_len:]
sample_len = max(sample_lens) if len(sample_lens) > 0 else 0
batch.append(idx)
if len(batch) > 0:
batches.append(batch)
return batches
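# Example (comments only): with num_tokens_fn giving lengths [2, 2, 2, 3] for indices
# [0, 1, 2, 3] and max_tokens=6, no batch may exceed 6 tokens, so the result is
# [[0, 1, 2], [3]].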
def make_positions(tensor, padding_idx):
"""Replace non-padding symbols with their position numbers.
Position numbers begin at padding_idx+1. Padding symbols are ignored.
"""
# The series of casts and type-conversions here are carefully
# balanced to both work with ONNX export and XLA. In particular XLA
# prefers ints, cumsum defaults to output longs, and ONNX doesn't know
# how to handle the dtype kwarg in cumsum.
mask = tensor.ne(padding_idx).int()
return (
torch.cumsum(mask, dim=1).type_as(mask) * mask
).long() + padding_idx
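# Example (comments only):
#   make_positions(torch.tensor([[7, 7, 0, 0]]), padding_idx=0) -> tensor([[1, 2, 0, 0]])
#   (positions count from padding_idx + 1; padding positions stay at padding_idx)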
def softmax(x, dim):
return F.softmax(x, dim=dim, dtype=torch.float32)
def unpack_dict_to_list(samples):
samples_ = []
bsz = samples.get('outputs').size(0)
for i in range(bsz):
res = {}
for k, v in samples.items():
try:
res[k] = v[i]
except:
pass
samples_.append(res)
return samples_
def load_ckpt(cur_model, ckpt_base_dir, prefix_in_ckpt='model', force=True, strict=True):
if os.path.isfile(ckpt_base_dir):
base_dir = os.path.dirname(ckpt_base_dir)
checkpoint_path = [ckpt_base_dir]
else:
base_dir = ckpt_base_dir
checkpoint_path = sorted(glob.glob(f'{base_dir}/model_ckpt_steps_*.ckpt'),
key=lambda x: int(re.findall(rf'{base_dir}/model_ckpt_steps_(\d+).ckpt', x.replace('\\', '/'))[0]))
if len(checkpoint_path) > 0:
checkpoint_path = checkpoint_path[-1]
state_dict = torch.load(checkpoint_path, map_location="cpu")["state_dict"]
state_dict = {k[len(prefix_in_ckpt) + 1:]: v for k, v in state_dict.items()
if k.startswith(f'{prefix_in_ckpt}.')}
if not strict:
cur_model_state_dict = cur_model.state_dict()
unmatched_keys = []
for key, param in state_dict.items():
if key in cur_model_state_dict:
new_param = cur_model_state_dict[key]
if new_param.shape != param.shape:
unmatched_keys.append(key)
print("| Unmatched keys: ", key, new_param.shape, param.shape)
for key in unmatched_keys:
del state_dict[key]
cur_model.load_state_dict(state_dict, strict=strict)
print(f"| load '{prefix_in_ckpt}' from '{checkpoint_path}'.")
else:
e_msg = f"| ckpt not found in {base_dir}."
if force:
assert False, e_msg
else:
print(e_msg)
def remove_padding(x, padding_idx=0):
if x is None:
return None
assert len(x.shape) in [1, 2]
if len(x.shape) == 2: # [T, H]
return x[np.abs(x).sum(-1) != padding_idx]
elif len(x.shape) == 1: # [T]
return x[x != padding_idx]
class Timer:
timer_map = {}
def __init__(self, name, print_time=False):
if name not in Timer.timer_map:
Timer.timer_map[name] = 0
self.name = name
self.print_time = print_time
def __enter__(self):
self.t = time.time()
def __exit__(self, exc_type, exc_val, exc_tb):
Timer.timer_map[self.name] += time.time() - self.t
if self.print_time:
print(self.name, Timer.timer_map[self.name])
def print_arch(model, model_name='model'):
#print(f"| {model_name} Arch: ", model)
num_params(model, model_name=model_name)
def num_params(model, print_out=True, model_name="model"):
parameters = filter(lambda p: p.requires_grad, model.parameters())
parameters = sum([np.prod(p.size()) for p in parameters]) / 1_000_000
if print_out:
print(f'| {model_name} Trainable Parameters: %.3fM' % parameters)
return parameters

56
utils/audio.py Normal file
View File

@ -0,0 +1,56 @@
import subprocess
import matplotlib
matplotlib.use('Agg')
import librosa
import librosa.filters
import numpy as np
from scipy import signal
from scipy.io import wavfile
def save_wav(wav, path, sr, norm=False):
if norm:
wav = wav / np.abs(wav).max()
wav *= 32767
# proposed by @dsmiller
wavfile.write(path, sr, wav.astype(np.int16))
def get_hop_size(hparams):
hop_size = hparams['hop_size']
if hop_size is None:
assert hparams['frame_shift_ms'] is not None
hop_size = int(hparams['frame_shift_ms'] / 1000 * hparams['audio_sample_rate'])
return hop_size
###########################################################################################
def _stft(y, hparams):
return librosa.stft(y=y, n_fft=hparams['fft_size'], hop_length=get_hop_size(hparams),
win_length=hparams['win_size'], pad_mode='constant')
def _istft(y, hparams):
return librosa.istft(y, hop_length=get_hop_size(hparams), win_length=hparams['win_size'])
def librosa_pad_lr(x, fsize, fshift, pad_sides=1):
'''compute right padding (final frame) or both sides padding (first and final frames)
'''
assert pad_sides in (1, 2)
# return int(fsize // 2)
pad = (x.shape[0] // fshift + 1) * fshift - x.shape[0]
if pad_sides == 1:
return 0, pad
else:
return pad // 2, pad // 2 + pad % 2
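# Example (comments only): for x with 1000 samples and fshift=256,
# pad = (1000 // 256 + 1) * 256 - 1000 = 24, so the function returns (0, 24)
# with pad_sides=1 and (12, 12) with pad_sides=2.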
# Conversions
def amp_to_db(x):
return 20 * np.log10(np.maximum(1e-5, x))
def normalize(S, hparams):
return (S - hparams['min_level_db']) / -hparams['min_level_db']

136
utils/hparams.py Normal file
View File

@ -0,0 +1,136 @@
import argparse
import os
import yaml
global_print_hparams = True
hparams = {}
class Args:
def __init__(self, **kwargs):
for k, v in kwargs.items():
self.__setattr__(k, v)
def override_config(old_config: dict, new_config: dict):
for k, v in new_config.items():
if isinstance(v, dict) and k in old_config:
override_config(old_config[k], new_config[k])
else:
old_config[k] = v
def set_hparams(config='', exp_name='', hparams_str='', print_hparams=True, global_hparams=True, reset=True,
infer=True):
'''
Load hparams from multiple sources:
1. config chain (i.e. first load base_config, then load config);
2. if reset == False, also apply the (auto-saved) complete config file ('config.yaml')
from the work dir, which contains all settings and does not rely on base_config;
3. load from the --hparams argument or hparams_str, as a temporary modification.
'''
if config == '':
parser = argparse.ArgumentParser(description='neural music')
parser.add_argument('--config', type=str, default='',
help='location of the data corpus')
parser.add_argument('--exp_name', type=str, default='', help='exp_name')
parser.add_argument('--hparams', type=str, default='',
help='location of the data corpus')
parser.add_argument('--infer', action='store_true', help='infer')
parser.add_argument('--validate', action='store_true', help='validate')
parser.add_argument('--reset', action='store_true', help='reset hparams')
parser.add_argument('--debug', action='store_true', help='debug')
args, unknown = parser.parse_known_args()
else:
args = Args(config=config, exp_name=exp_name, hparams=hparams_str,
infer=infer, validate=False, reset=reset, debug=False)
args_work_dir = ''
if args.exp_name != '':
args.work_dir = args.exp_name
args_work_dir = f'checkpoints/{args.work_dir}'
config_chains = []
loaded_config = set()
def load_config(config_fn): # deep first
with open(config_fn, encoding='utf-8') as f:
hparams_ = yaml.safe_load(f)
loaded_config.add(config_fn)
if 'base_config' in hparams_:
ret_hparams = {}
if not isinstance(hparams_['base_config'], list):
hparams_['base_config'] = [hparams_['base_config']]
for c in hparams_['base_config']:
if c not in loaded_config:
if c.startswith('.'):
c = f'{os.path.dirname(config_fn)}/{c}'
c = os.path.normpath(c)
override_config(ret_hparams, load_config(c))
override_config(ret_hparams, hparams_)
else:
ret_hparams = hparams_
config_chains.append(config_fn)
return ret_hparams
global hparams
assert args.config != '' or args_work_dir != ''
saved_hparams = {}
if args_work_dir != 'checkpoints/':
ckpt_config_path = f'{args_work_dir}/config.yaml'
if os.path.exists(ckpt_config_path):
try:
with open(ckpt_config_path, encoding='utf-8') as f:
saved_hparams.update(yaml.safe_load(f))
except:
pass
if args.config == '':
args.config = ckpt_config_path
hparams_ = {}
hparams_.update(load_config(args.config))
if not args.reset:
hparams_.update(saved_hparams)
hparams_['work_dir'] = args_work_dir
if args.hparams != "":
for new_hparam in args.hparams.split(","):
k, v = new_hparam.split("=")
if k not in hparams_:
hparams_[k] = eval(v)
if v in ['True', 'False'] or type(hparams_[k]) == bool:
hparams_[k] = eval(v)
else:
hparams_[k] = type(hparams_[k])(v)
if args_work_dir != '' and (not os.path.exists(ckpt_config_path) or args.reset) and not args.infer:
os.makedirs(hparams_['work_dir'], exist_ok=True)
with open(ckpt_config_path, 'w', encoding='utf-8') as f:
temp_haparams = hparams_
if 'base_config' in temp_haparams.keys():
del temp_haparams['base_config']
yaml.safe_dump(temp_haparams, f)
hparams_['infer'] = args.infer
hparams_['debug'] = args.debug
hparams_['validate'] = args.validate
global global_print_hparams
if global_hparams:
hparams.clear()
hparams.update(hparams_)
if print_hparams and global_print_hparams and global_hparams:
print('| Hparams chains: ', config_chains)
print('| Hparams: ')
for i, (k, v) in enumerate(sorted(hparams_.items())):
print(f"\033[;33;m{k}\033[0m: {v}, ", end="\n" if i % 5 == 4 else "")
print("")
global_print_hparams = False
# print(hparams_.keys())
if hparams.get('exp_name') is None:
hparams['exp_name'] = args.exp_name
if hparams_.get('exp_name') is None:
hparams_['exp_name'] = args.exp_name
return hparams_
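# Usage sketch (comments only; the config path, experiment name and keys below are hypothetical):
#   set_hparams(config='training/config.yaml', exp_name='my_exp',
#               hparams_str='lr=0.0002,use_amp=True', reset=True, infer=False)
# loads the base_config chain of 'training/config.yaml'; because reset=True the previously
# saved checkpoints/my_exp/config.yaml is ignored, 'lr' and 'use_amp' are then overridden
# from hparams_str, and the merged dict is written back to the work dir.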

73
utils/indexed_datasets.py Normal file
View File

@ -0,0 +1,73 @@
import pickle
from copy import deepcopy
import numpy as np
class IndexedDataset:
def __init__(self, path, num_cache=1):
super().__init__()
self.path = path
self.data_file = None
self.data_offsets = np.load(f"{path}.idx", allow_pickle=True).item()['offsets']
self.data_file = open(f"{path}.data", 'rb', buffering=-1)
self.cache = []
self.num_cache = num_cache
def check_index(self, i):
if i < 0 or i >= len(self.data_offsets) - 1:
raise IndexError('index out of range')
def __del__(self):
if self.data_file:
self.data_file.close()
def __getitem__(self, i):
self.check_index(i)
if self.num_cache > 0:
for c in self.cache:
if c[0] == i:
return c[1]
self.data_file.seek(self.data_offsets[i])
b = self.data_file.read(self.data_offsets[i + 1] - self.data_offsets[i])
item = pickle.loads(b)
if self.num_cache > 0:
self.cache = [(i, deepcopy(item))] + self.cache[:-1]
return item
def __len__(self):
return len(self.data_offsets) - 1
class IndexedDatasetBuilder:
def __init__(self, path):
self.path = path
self.out_file = open(f"{path}.data", 'wb')
self.byte_offsets = [0]
def add_item(self, item):
s = pickle.dumps(item)
n_bytes = self.out_file.write(s)
self.byte_offsets.append(self.byte_offsets[-1] + n_bytes)
def finalize(self):
self.out_file.close()
np.save(open(f"{self.path}.idx", 'wb'), {'offsets': self.byte_offsets})
if __name__ == "__main__":
import random
from tqdm import tqdm
ds_path = '/tmp/indexed_ds_example'
size = 100
items = [{"a": np.random.normal(size=[10000, 10]),
"b": np.random.normal(size=[10000, 10])} for i in range(size)]
builder = IndexedDatasetBuilder(ds_path)
for i in tqdm(range(size)):
builder.add_item(items[i])
builder.finalize()
ds = IndexedDataset(ds_path)
for i in tqdm(range(10000)):
idx = random.randint(0, size - 1)
assert (ds[idx]['a'] == items[idx]['a']).all()

64
utils/pitch_utils.py Normal file
View File

@ -0,0 +1,64 @@
import numpy as np
import torch
def f0_to_coarse(f0, hparams):
f0_bin = hparams['f0_bin']
f0_max = hparams['f0_max']
f0_min = hparams['f0_min']
is_torch = isinstance(f0, torch.Tensor)
f0_mel_min = 1127 * np.log(1 + f0_min / 700)
f0_mel_max = 1127 * np.log(1 + f0_max / 700)
f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700)
f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1
f0_mel[f0_mel <= 1] = 1
f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1
f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(int)
assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min())
return f0_coarse
def norm_f0(f0, uv, hparams):
is_torch = isinstance(f0, torch.Tensor)
if hparams['pitch_norm'] == 'standard':
f0 = (f0 - hparams['f0_mean']) / hparams['f0_std']
if hparams['pitch_norm'] == 'log':
f0 = torch.log2(f0) if is_torch else np.log2(f0)
if uv is not None and hparams['use_uv']:
f0[uv > 0] = 0
return f0
def norm_interp_f0(f0, hparams):
is_torch = isinstance(f0, torch.Tensor)
if is_torch:
device = f0.device
f0 = f0.data.cpu().numpy()
uv = f0 == 0
f0 = norm_f0(f0, uv, hparams)
if sum(uv) == len(f0):
f0[uv] = 0
elif sum(uv) > 0:
f0[uv] = np.interp(np.where(uv)[0], np.where(~uv)[0], f0[~uv])
uv = torch.FloatTensor(uv)
f0 = torch.FloatTensor(f0)
if is_torch:
f0 = f0.to(device)
return f0, uv
def denorm_f0(f0, uv, hparams, pitch_padding=None, min=None, max=None):
if hparams['pitch_norm'] == 'standard':
f0 = f0 * hparams['f0_std'] + hparams['f0_mean']
if hparams['pitch_norm'] == 'log':
f0 = 2 ** f0
if min is not None:
f0 = f0.clamp(min=min)
if max is not None:
f0 = f0.clamp(max=max)
if uv is not None and hparams['use_uv']:
f0[uv > 0] = 0
if pitch_padding is not None:
f0[pitch_padding] = 0
return f0
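# Round-trip sketch (illustration only; the hparams keys mirror the ones this module reads,
# and the values below are arbitrary; guarded so importing this module is unaffected).
if __name__ == '__main__':
    _hp = {'pitch_norm': 'log', 'use_uv': True, 'f0_bin': 256, 'f0_max': 1100.0, 'f0_min': 50.0}
    _f0_hz = np.array([220.0, 440.0, 880.0])
    _f0_norm, _uv = norm_interp_f0(_f0_hz, _hp)   # Hz -> log2 domain, plus an unvoiced mask
    _f0_back = denorm_f0(_f0_norm, _uv, _hp)      # log2 domain -> Hz
    print(f0_to_coarse(_f0_back, _hp))            # Hz -> coarse bins in [1, f0_bin - 1]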

1634
utils/pl_utils.py Normal file

File diff suppressed because it is too large Load Diff

56
utils/plot.py Normal file
View File

@ -0,0 +1,56 @@
import matplotlib.pyplot as plt
import numpy as np
import torch
LINE_COLORS = ['w', 'r', 'y', 'cyan', 'm', 'b', 'lime']
def spec_to_figure(spec, vmin=None, vmax=None):
if isinstance(spec, torch.Tensor):
spec = spec.cpu().numpy()
fig = plt.figure(figsize=(12, 6))
plt.pcolor(spec.T, vmin=vmin, vmax=vmax)
return fig
def spec_f0_to_figure(spec, f0s, figsize=None):
max_y = spec.shape[1]
if isinstance(spec, torch.Tensor):
spec = spec.detach().cpu().numpy()
f0s = {k: f0.detach().cpu().numpy() for k, f0 in f0s.items()}
f0s = {k: f0 / 10 for k, f0 in f0s.items()}
fig = plt.figure(figsize=(12, 6) if figsize is None else figsize)
plt.pcolor(spec.T)
for i, (k, f0) in enumerate(f0s.items()):
plt.plot(f0.clip(0, max_y), label=k, c=LINE_COLORS[i], linewidth=1, alpha=0.8)
plt.legend()
return fig
def dur_to_figure(dur_gt, dur_pred, txt):
dur_gt = dur_gt.long().cpu().numpy()
dur_pred = dur_pred.long().cpu().numpy()
dur_gt = np.cumsum(dur_gt)
dur_pred = np.cumsum(dur_pred)
fig = plt.figure(figsize=(12, 6))
for i in range(len(dur_gt)):
shift = (i % 8) + 1
plt.text(dur_gt[i], shift, txt[i])
plt.text(dur_pred[i], 10 + shift, txt[i])
plt.vlines(dur_gt[i], 0, 10, colors='b') # blue is gt
plt.vlines(dur_pred[i], 10, 20, colors='r') # red is pred
return fig
def f0_to_figure(f0_gt, f0_cwt=None, f0_pred=None):
fig = plt.figure()
f0_gt = f0_gt.cpu().numpy()
plt.plot(f0_gt, color='r', label='gt')
if f0_cwt is not None:
f0_cwt = f0_cwt.cpu().numpy()
plt.plot(f0_cwt, color='b', label='cwt')
if f0_pred is not None:
f0_pred = f0_pred.cpu().numpy()
plt.plot(f0_pred, color='green', label='pred')
plt.legend()
return fig

139
utils/svc_utils.py Normal file
View File

@ -0,0 +1,139 @@
import glob
import importlib
import os
import matplotlib
import numpy as np
import torch
import torch.distributions
import torch.optim
import torch.utils.data
from preprocessing.process_pipeline import File2Batch
from utils.hparams import hparams
from utils.indexed_datasets import IndexedDataset
from utils.pitch_utils import norm_interp_f0
matplotlib.use('Agg')
class SvcDataset(torch.utils.data.Dataset):
def __init__(self, prefix, shuffle=False):
super().__init__()
self.hparams = hparams
self.shuffle = shuffle
self.sort_by_len = hparams['sort_by_len']
self.sizes = None
self.data_dir = hparams['binary_data_dir']
self.prefix = prefix
self.sizes = np.load(f'{self.data_dir}/{self.prefix}_lengths.npy')
self.indexed_ds = None
# self.name2spk_id={}
# pitch stats
f0_stats_fn = f'{self.data_dir}/train_f0s_mean_std.npy'
if os.path.exists(f0_stats_fn):
hparams['f0_mean'], hparams['f0_std'] = self.f0_mean, self.f0_std = np.load(f0_stats_fn)
hparams['f0_mean'] = float(hparams['f0_mean'])
hparams['f0_std'] = float(hparams['f0_std'])
else:
hparams['f0_mean'], hparams['f0_std'] = self.f0_mean, self.f0_std = None, None
if prefix == 'test':
if hparams['test_input_dir'] != '':
self.indexed_ds, self.sizes = self.load_test_inputs(hparams['test_input_dir'])
else:
if hparams['num_test_samples'] > 0:
self.avail_idxs = list(range(hparams['num_test_samples'])) + hparams['test_ids']
self.sizes = [self.sizes[i] for i in self.avail_idxs]
@property
def _sizes(self):
return self.sizes
def _get_item(self, index):
if hasattr(self, 'avail_idxs') and self.avail_idxs is not None:
index = self.avail_idxs[index]
if self.indexed_ds is None:
self.indexed_ds = IndexedDataset(f'{self.data_dir}/{self.prefix}')
return self.indexed_ds[index]
def __getitem__(self, index):
item = self._get_item(index)
max_frames = hparams['max_frames']
spec = torch.Tensor(item['mel'])[:max_frames]
# energy = (spec.exp() ** 2).sum(-1).sqrt()
mel2ph = torch.LongTensor(item['mel2ph'])[:max_frames] if 'mel2ph' in item else None
f0, uv = norm_interp_f0(item["f0"][:max_frames], hparams)
hubert = torch.Tensor(item['hubert'][:hparams['max_input_tokens']])
pitch = torch.LongTensor(item.get("pitch"))[:max_frames]
sample = {
"id": index,
"item_name": item['item_name'],
"hubert": hubert,
"mel": spec,
"pitch": pitch,
"f0": f0,
"uv": uv,
"mel2ph": mel2ph,
"mel_nonpadding": spec.abs().sum(-1) > 0,
}
if hparams['use_energy_embed']:
sample['energy'] = item['energy']
if hparams['use_spk_id']:
sample["spk_id"] = item['spk_id']
return sample
@staticmethod
def collater(samples):
return File2Batch.processed_input2batch(samples)
@staticmethod
def load_test_inputs(test_input_dir):
inp_wav_paths = glob.glob(f'{test_input_dir}/*.wav') + glob.glob(f'{test_input_dir}/*.mp3')
sizes = []
items = []
binarizer_cls = hparams.get("binarizer_cls", 'basics.base_binarizer.BaseBinarizer')
pkg = ".".join(binarizer_cls.split(".")[:-1])
cls_name = binarizer_cls.split(".")[-1]
binarizer_cls = getattr(importlib.import_module(pkg), cls_name)
from preprocessing.hubertinfer import HubertEncoder
encoder = HubertEncoder(hparams['hubert_path'])
for wav_fn in inp_wav_paths:
item_name = os.path.basename(wav_fn)
item = binarizer_cls.process_item(item_name, {'wav_fn': wav_fn}, encoder)
print(item)
items.append(item)
sizes.append(item['len'])
return items, sizes
def __len__(self):
return len(self._sizes)
def num_tokens(self, index):
return self.size(index)
def size(self, index):
"""Return an example's size as a float or tuple. This value is used when
filtering a dataset with ``--max-positions``."""
size = min(self._sizes[index], hparams['max_frames'])
return size
def ordered_indices(self):
"""Return an ordered list of indices. Batches will be constructed based
on this order."""
if self.shuffle:
indices = np.random.permutation(len(self))
if self.sort_by_len:
indices = indices[np.argsort(np.array(self._sizes)[indices], kind='mergesort')]
# shuffle first, then stable-sort by length, so items of equal length keep the order of the random permutation
else:
indices = np.arange(len(self))
return indices
@property
def num_workers(self):
return int(os.getenv('NUM_WORKERS', hparams['ds_workers']))