SAEHD:
added new option
GAN power 0.0 .. 10.0
Trains the network in a Generative Adversarial manner.
Forces the neural network to learn small details of the face.
You can enable/disable this option at any time,
but it is better to enable it once the network is sufficiently trained.
A typical value is 1.0.
GAN power does not work in pretrain mode.
Example of enabling GAN at 81k iterations, then +5k iterations:
https://i.imgur.com/OdXHLhU.jpg
https://i.imgur.com/CYAJmJx.jpg
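Conceptually, GAN power scales an adversarial term that is added to the usual reconstruction loss: a discriminator scores generated faces, and the generator is penalized when those scores look fake. The sketch below is a hypothetical simplification of such a weighted non-saturating generator loss, not DFL's actual code; the function name is illustrative.

```python
import numpy as np

def gan_generator_term(disc_fake_logits, gan_power):
    """Adversarial term added to the reconstruction loss.

    disc_fake_logits: discriminator logits for generated faces.
    gan_power: the 0.0 .. 10.0 setting; 0 disables the term.
    Hypothetical simplification, not DFL's implementation.
    """
    logits = np.asarray(disc_fake_logits, dtype=np.float64)
    probs = 1.0 / (1.0 + np.exp(-logits))  # discriminator's "real" probability
    # non-saturating generator loss: push the "real" probability toward 1
    return gan_power * float(-np.mean(np.log(probs + 1e-12)))
```

With `gan_power = 0.0` the term vanishes, which is why the option can be toggled at any time without otherwise changing the loss.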
dfhd: default Decoder dimensions are now 48
the preview for 256 res is now correctly displayed
fixed model naming/renaming/removing
Improvements for those doing post-processing in After Effects:
The codec is reverted to x264 so the output can be used properly in After Effects and in video players.
The Merger now always outputs the mask to workspace\data_dst\merged_mask.
Removed all raw modes except raw-rgb.
raw-rgb mode now outputs the selected face mask_mode (previously a square mask).
The 'export alpha mask' button is replaced by 'show alpha mask'.
You can view the alpha mask without recomputing the frames.
'merged *.bat' now also outputs a 'result_mask' video file.
'merged lossless' now uses the x264 lossless codec (previously the PNG codec).
The result_mask video file is always lossless.
Thus you can use the result_mask video file as a mask layer in After Effects.
Removed the wait at first launch for most graphics cards.
Increased speed of training by 10-20%, but you have to retrain all models from scratch.
SAEHD:
added option 'use float16'
Experimental option. Reduces the model size by half.
Increases the speed of training.
Decreases the accuracy of the model.
The model may collapse or not train.
The model may not learn the mask at large resolutions.
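Why float16 halves the model size but can hurt training: each weight drops from 4 bytes to 2, yet float16's coarse spacing means small gradient updates can round away entirely. A standalone NumPy sketch of the effect (an illustration, not DFL code):

```python
import numpy as np

def sgd_step(weight, grad, lr, dtype):
    # One plain SGD update performed entirely at the given precision.
    w = np.asarray(weight, dtype=dtype)
    g = np.asarray(grad, dtype=dtype)
    return float(w - dtype(lr) * g)

# float16 spacing near 1.0 is ~0.001, so a 1e-4 update is lost:
# the float16 step leaves the weight at exactly 1.0, while the
# float32 step moves it to ~0.9999.
```

Repeated across millions of weights and iterations, these lost updates are one reason a float16 model can stall or collapse.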
The true_face_training option is replaced by "True face power" (0.0000 .. 1.0).
Experimental option. Discriminates the result face to be more like the src face; a higher value means stronger discrimination.
Comparison - https://i.imgur.com/czScS9q.png
Getting rid of the weakest link: AMD card support.
The entire neural network codebase has been moved to a pure low-level TensorFlow backend; as a result,
AMD/Intel card support is removed, and DFL now works only on NVIDIA cards or on CPU.
The old DFL, marked as 1.0, is still available for download, but it will no longer be supported.
global code refactoring, fixes and optimizations
Extractor:
now you can choose on which GPUs (or CPU) to process
improved stability for < 4GB GPUs
increased speed of multi gpu initializing
now works in one pass (except manual mode),
so you won't lose the processed data if something goes wrong before what used to be the 3rd pass
Faceset enhancer:
now you can choose on which GPUs (or CPU) to process
Trainer:
now you can choose on which GPUs (or CPU) to train the model.
Multi-gpu training is now supported.
Select identical cards, otherwise the fast GPU will wait for the slow GPU every iteration.
now remembers the previous option input as the default for the current workspace/model/ folder.
the number of sample generators now matches the number of available processors
saved models now have names instead of GPU indexes.
Therefore you can switch GPUs for every saved model.
Trainer offers to choose latest saved model by default.
You can rename or delete any model using the dialog.
models now save the optimizer weights in the model folder to continue training properly
removed all models except SAEHD, Quick96
trained model files from DFL 1.0 cannot be reused
AVATAR model is also removed.
How to create an AVATAR like in this video? https://www.youtube.com/watch?v=4GdWD0yxvqw
1) capture yourself with your own speech, repeating the same head directions as the celeb in the target video
2) train a regular deepfake model with the celeb faces from the target video as src and your face as dst
3) merge the celeb face onto your face with raw-rgb mode
4) composite the masked mouth with the target video in After Effects
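On the Trainer's multi-GPU note above: in synchronous data-parallel training the batch is split across cards and every iteration waits for the slowest one, which is why identical cards are recommended. A simplified throughput model (an illustration, not DFL's scheduler) makes the cost of mismatched cards concrete:

```python
def multi_gpu_speedup(full_batch_times):
    """Speedup over the fastest single card when the batch is split
    evenly and gradients are synchronized every iteration.

    full_batch_times: per-card time to process the full batch alone.
    Ignores communication overhead, so this is an upper bound.
    """
    n = len(full_batch_times)
    # each card handles 1/n of the batch; the iteration ends only
    # when the slowest card finishes its share
    iteration_time = max(t / n for t in full_batch_times)
    return min(full_batch_times) / iteration_time
```

Two identical cards give a 2.0x speedup, but pairing a card with one half as fast gives 1.0x, i.e. no gain at all over using the fast card alone.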
SAEHD:
now has 3 options: Encoder dimensions, Decoder dimensions, Decoder mask dimensions
now has 4 archis: dfhd (default), liaehd, df, liae
df and liae are from the SAE model, but use features from the SAEHD model (such as combined loss and disabled random warp)
dfhd/liaehd - changed encoder/decoder architectures
the decoder model is combined with the mask decoder model
mask training is combined with face training,
resulting in reduced time per iteration and decreased VRAM usage by the optimizer
"Initialize CA weights" now works faster and is integrated into the "Initialize models" progress bar
removed optimizer_mode option
added option 'Place models and optimizer on GPU?'
When you train on a single GPU, model and optimizer weights are placed on the GPU by default to accelerate training.
You can place them on the CPU instead to free up extra VRAM, allowing larger model parameters.
This option is unavailable in multi-GPU mode.
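A rough way to see what moving optimizer state to the CPU buys: Adam-style optimizers keep extra per-weight tensors (e.g. momentum and variance), so the optimizer state can occupy as much VRAM as the model itself or more. The estimate below is a hedged back-of-the-envelope sketch with assumed figures, not taken from DFL's code:

```python
def optimizer_state_bytes(param_count, slots_per_weight=2, bytes_per_value=4):
    """VRAM held by optimizer state alone, assuming an Adam-style
    optimizer with two float32 slots per weight (assumed figures).

    Placing this state in CPU RAM frees the same amount of VRAM for
    larger model parameters, at the cost of slower weight updates
    due to host<->device transfers.
    """
    return param_count * slots_per_weight * bytes_per_value

# a 100M-parameter model: 100e6 weights * 2 slots * 4 bytes = 800 MB
```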
pretraining now does not use rgb channel shuffling
pretraining now can be continued
when pre-training is disabled:
1) iters and loss history are reset to 1
2) in the df/dfhd archis, only the inter part of the encoder is reset (previously encoder+inter)
thus the fake will train faster with a pretrained df model
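The selective reset when pre-training is turned off can be pictured with a small helper (hypothetical names; only the df/dfhd behaviour described above is modeled):

```python
def blocks_to_reset(archi):
    # df/dfhd: re-initialize only the 'inter' block, keeping the
    # pretrained encoder and decoder weights, so fine-tuning on the
    # real faceset starts from a better point and converges faster
    # (previously both encoder and inter were reset)
    if archi in ('df', 'dfhd'):
        return {'inter'}
    raise ValueError('behaviour for other archis is not described here')
```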
Merger ( renamed from Converter ):
now you can choose on which GPUs (or CPU) to process
new hotkey combinations to navigate and override frame configs
super resolution upscaler "RankSRGAN" is replaced by "FaceEnhancer"
FAN-x mask mode now works on the GPU while merging (previously on the CPU),
therefore all models (main face model + FAN-x + FaceEnhancer)
now work on the GPU while merging, and work properly even on a 2GB GPU.
Quick96:
now automatically uses pretrained model
Sorter:
removed all sort-by *.bat files except a single sort.bat
you now choose the sort method in a dialog
Other:
all console dialogs are now more convenient
new default example video files data_src/data_dst for newbies (Robert Downey Jr. on Elon Musk)
XnViewMP is updated to 0.94.1 version
ffmpeg is updated to 4.2.1 version
ffmpeg: video codec is changed to x265
_internal/vscode.bat starts VSCode IDE where you can view and edit DeepFaceLab source code.
Removed the Russian/English manual. Read community manuals and tutorials here:
https://mrdeepfakes.com/forums/forum-guides-and-tutorials
new github page design