Here is the new whole_face + XSeg workflow:
With the XSeg model you can train your own mask segmentator for dst (and/or src) faces,
which the merger will use for whole_face.
Since no pretrained segmentator model exists,
you control which parts of the faces should be masked.
new scripts:
5.XSeg) data_dst edit masks.bat
5.XSeg) data_src edit masks.bat
5.XSeg) train.bat
Usage:
unpack dst faceset if packed
run 5.XSeg) data_dst edit masks.bat
Read the tooltips on the buttons (en/ru/zh languages are supported)
mask the face using include or exclude polygon mode.
repeat for 50-100 faces.
!!! You do not need to mask every frame of dst,
only the frames where the face differs significantly,
for example:
closed eyes
changed head direction
changed light
the more varied faces you mask, the higher the quality you will get
Start masking from the upper left area and follow the clockwise direction.
Keep the same masking logic across all frames, for example:
the same approximated jaw line on side faces where the jaw is not visible
the same hair line
Mask the obstructions using exclude polygon mode (the sketch after this Usage section shows how include/exclude polygons combine into a mask).
run 5.XSeg) train.bat
train the model
Check the faces of 'XSeg dst faces' preview.
if some faces have a wrong or glitchy mask, repeat these steps:
run edit
find these glitchy faces and mask them
train further or restart training from scratch
Restarting XSeg model training from scratch is only possible by deleting all 'model\XSeg_*' files.
If you want to get the mask of the predicted face (XSeg-prd mode) in the merger,
you should repeat the same steps for the src faceset.
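For reference, here is a minimal sketch (not XSegEditor's actual code; polys_to_mask is a hypothetical helper) of how include/exclude polygons combine into a single mask: include polygons fill the face area, exclude polygons carve the obstructions out of it.

import numpy as np
import cv2

def polys_to_mask(resolution, include_polys, exclude_polys):
    # include_polys / exclude_polys: lists of (N, 2) point arrays in image coordinates
    mask = np.zeros((resolution, resolution), dtype=np.uint8)
    for pts in include_polys:
        cv2.fillPoly(mask, [np.asarray(pts, dtype=np.int32)], 255)   # fill the face area
    for pts in exclude_polys:
        cv2.fillPoly(mask, [np.asarray(pts, dtype=np.int32)], 0)     # carve out obstructions
    return mask.astype(np.float32) / 255.0                           # float mask in [0,1]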
New mask modes available in merger for whole_face:
XSeg-prd - XSeg mask of predicted face -> faces from src faceset should be labeled
XSeg-dst - XSeg mask of dst face -> faces from dst faceset should be labeled
XSeg-prd*XSeg-dst - the smallest area of both
If the workspace\model folder contains a trained XSeg model, the merger will use it;
otherwise the XSeg-* modes will produce a transparent mask.
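For clarity, a minimal sketch (assuming both masks are float arrays in [0,1] at the same resolution; variable names are illustrative) of how the combined mode relates to the other two: XSeg-prd*XSeg-dst keeps only the area covered by both masks, expressed here as an element-wise minimum (multiplying near-binary masks gives a very similar intersection).

import numpy as np

xseg_prd = np.random.rand(256, 256).astype(np.float32)   # stand-in for the predicted-face mask
xseg_dst = np.random.rand(256, 256).astype(np.float32)   # stand-in for the dst-face mask

combined = np.minimum(xseg_prd, xseg_dst)                 # 'XSeg-prd*XSeg-dst': smallest area of both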
Some screenshots:
XSegEditor: https://i.imgur.com/7Bk4RRV.jpg
trainer : https://i.imgur.com/NM1Kn3s.jpg
merger : https://i.imgur.com/glUzFQ8.jpg
example of the fake using 13 segmented dst faces: https://i.imgur.com/wmvyizU.gifv
Getting rid of the weakest link: AMD card support.
The entire neural network codebase has been moved to a pure low-level TensorFlow backend; as a result,
AMD/Intel card support is removed, and DFL now works only on NVIDIA cards or CPU.
The old DFL, marked as 1.0, is still available for download, but it will no longer be supported.
global code refactoring, fixes and optimizations
Extractor:
now you can choose on which GPUs (or CPU) to process
improved stability for < 4GB GPUs
increased speed of multi-GPU initialization
now works in one pass (except manual mode)
so you won't lose the processed data if something goes wrong before the old 3rd pass
Faceset enhancer:
now you can choose on which GPUs (or CPU) to process
Trainer:
now you can choose on which GPUs (or CPU) to train the model.
Multi-gpu training is now supported.
Select identical cards, otherwise the faster GPU will wait for the slower one every iteration (a sketch after the AVATAR notes below illustrates why).
now remembers the previously entered options as defaults for the current workspace/model/ folder.
the number of sample generators now matches the available number of processors
saved models now have names instead of GPU indexes.
Therefore you can switch GPUs for every saved model.
Trainer offers to choose latest saved model by default.
You can rename or delete any model using the dialog.
models now save the optimizer weights in the model folder to continue training properly
removed all models except SAEHD, Quick96
trained model files from DFL 1.0 cannot be reused
AVATAR model is also removed.
How to create an AVATAR like in this video? https://www.youtube.com/watch?v=4GdWD0yxvqw
1) capture yourself speaking, repeating the same head directions as the celeb in the target video
2) train a regular deepfake model with the celeb faces from the target video as src, and your face as dst
3) merge the celeb face onto your face with raw-rgb mode
4) composite the masked mouth with the target video in After Effects
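Back to the multi-GPU training mentioned in the Trainer section above: the sketch below (a plain TensorFlow 2 illustration, not DFL's actual low-level code) shows the usual synchronous data-parallel pattern, where the batch is split across GPUs and the averaged gradients are applied once per iteration. Because every iteration synchronizes on all gradients, a faster card always waits for the slowest one, which is why identical cards are recommended.

import tensorflow as tf

gpus = ['/GPU:0', '/GPU:1']   # assumed identical cards

def train_step(model, optimizer, loss_fn, batch_x, batch_y):
    # split the batch into one shard per GPU
    shards_x = tf.split(batch_x, len(gpus))
    shards_y = tf.split(batch_y, len(gpus))
    grads_per_gpu = []
    for device, x, y in zip(gpus, shards_x, shards_y):
        with tf.device(device):
            with tf.GradientTape() as tape:
                loss = loss_fn(y, model(x, training=True))
            grads_per_gpu.append(tape.gradient(loss, model.trainable_variables))
    # synchronize: average the per-GPU gradients and apply them once
    avg_grads = [tf.reduce_mean(tf.stack(g), axis=0) for g in zip(*grads_per_gpu)]
    optimizer.apply_gradients(zip(avg_grads, model.trainable_variables))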
SAEHD:
now has 3 options: Encoder dimensions, Decoder dimensions, Decoder mask dimensions
now has 4 archis: dfhd (default), liaehd, df, liae
df and liae are from the SAE model, but use features from the SAEHD model (such as combined loss and the option to disable random warp)
dfhd/liaehd - changed encoder/decoder architectures
the decoder model is combined with the mask decoder model,
and mask training is combined with face training;
the result is reduced time per iteration and decreased VRAM usage by the optimizer
"Initialize CA weights" now works faster and integrated to "Initialize models" progress bar
removed optimizer_mode option
added option 'Place models and optimizer on GPU?'
When you train on one GPU, the model and optimizer weights are placed on the GPU by default to accelerate the process.
You can place them on the CPU instead to free up extra VRAM, so you can set larger model parameters (a sketch after this SAEHD section illustrates the idea).
This option is unavailable in multi-GPU mode.
pretraining no longer uses RGB channel shuffling
pretraining now can be continued
when pretraining is disabled:
1) iters and loss history are reset to 1
2) in df/dfhd archis, only the inter part of the encoder is reset (previously the whole encoder+inter was reset),
so the fake will train faster with a pretrained df model
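A minimal sketch of the idea behind 'Place models and optimizer on GPU?' (an illustration only, not DFL's actual code; variable names are made up): variables created inside a CPU device scope live in system RAM, which frees VRAM for activations and allows larger model dimensions, at the cost of extra host-device copies every iteration.

import tensorflow as tf

models_opt_on_gpu = False    # the answer to the option prompt
models_opt_device = '/GPU:0' if models_opt_on_gpu else '/CPU:0'

with tf.device(models_opt_device):
    # model weights and the optimizer state (e.g. Adam moments) are allocated on the chosen device
    kernel = tf.Variable(tf.random.normal([256, 256]), name='enc_kernel')
    adam_m = tf.Variable(tf.zeros_like(kernel), trainable=False, name='adam_m')
    adam_v = tf.Variable(tf.zeros_like(kernel), trainable=False, name='adam_v')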
Merger (renamed from Converter):
now you can choose on which GPUs (or CPU) to process
new hotkey combinations to navigate and override frame configs
super resolution upscaler "RankSRGAN" is replaced by "FaceEnhancer"
FAN-x mask modes now work on the GPU while merging (previously on the CPU),
therefore all models (main face model + FAN-x + FaceEnhancer)
now work on the GPU while merging, and work properly even on a 2GB GPU.
Quick96:
now automatically uses a pretrained model
Sorter:
removed all sort by *.bat files except one sort.bat
now you choose the sort method in a dialog
Other:
all console dialogs are now more convenient
new default example video files data_src/data_dst for newbies (Robert Downey Jr. on Elon Musk)
XnViewMP is updated to 0.94.1 version
ffmpeg is updated to 4.2.1 version
ffmpeg: video codec is changed to x265
_internal/vscode.bat starts VSCode IDE where you can view and edit DeepFaceLab source code.
removed the Russian/English manual. Read community manuals and tutorials here:
https://mrdeepfakes.com/forums/forum-guides-and-tutorials
new github page design