Removed the wait at first launch for most graphics cards.
Increased training speed by 10-20%, but all models must be retrained from scratch.
SAEHD:
added option 'use float16'
Experimental option. Reduces the model size by half.
Increases training speed.
Decreases model accuracy.
The model may collapse or fail to train.
The model may not learn the mask at high resolutions.
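A minimal NumPy sketch (illustrative only, not DeepFaceLab's actual implementation) of why float16 halves the model size and how it can break training — small updates below float16's precision are silently lost, which is one way a model can stall or collapse:

```python
import numpy as np

# Store the same weight matrix in float32 and float16:
# float16 needs exactly half the memory.
weights_f32 = np.random.RandomState(0).randn(1000, 256).astype(np.float32)
weights_f16 = weights_f32.astype(np.float16)

print(weights_f32.nbytes)  # 1024000 bytes
print(weights_f16.nbytes)  # 512000 bytes -- half the size

# The cost is precision: float16 keeps only ~3 decimal digits, so a small
# gradient update can round away to nothing and the weight never moves.
w = np.float16(1.0)
tiny_update = np.float16(1e-4)
print(w + tiny_update == w)  # True: the update is lost entirely in float16
```

In float32 the same update would be applied without loss, which is why the option trades accuracy and stability for size and speed.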
true_face_training option is replaced by
"True face power". 0.0000 .. 1.0
Experimental option. Applies a discriminator that pushes the result face to look more like the src face. A higher value means stronger discrimination.
Comparison - https://i.imgur.com/czScS9q.png
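A hypothetical sketch of how a power knob like this typically works (function and variable names are illustrative, not DeepFaceLab's actual API): the adversarial term from the discriminator is scaled by the power value before being added to the reconstruction loss, so 0.0 disables it and 1.0 applies it at full strength:

```python
# Blend an adversarial (discriminator) loss into the total training loss.
# true_face_power = 0.0 disables discrimination; 1.0 applies it fully.
def total_loss(reconstruction_loss, adversarial_loss, true_face_power):
    return reconstruction_loss + true_face_power * adversarial_loss

print(total_loss(0.25, 0.5, 0.0))  # 0.25 -- discriminator has no effect
print(total_loss(0.25, 0.5, 1.0))  # 0.75 -- full-strength discrimination
```

Intermediate values let you trade identity similarity to src against training stability.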
If you want, you can manually remove unnecessary angles from the src faceset after sorting by yaw.
Optimized sample generators (CPU workers). They now consume less RAM and work faster.
added
4.2.other) data_src/dst util faceset pack.bat
Packs the /aligned/ samples into a single /aligned/samples.pak file.
After that, the individual face files are deleted.
4.2.other) data_src/dst util faceset unpack.bat
Unpacks faces from /aligned/samples.pak into the /aligned/ dir.
After that, samples.pak is deleted.
A packed faceset loads and works faster.
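The idea behind packing can be sketched as follows (the on-disk format here is invented for illustration; it is not the real samples.pak layout): many small files are concatenated into one archive with length-prefixed records, so the loader does a single sequential read instead of thousands of small file opens:

```python
import os
import struct
import tempfile

def pack(src_dir, pak_path):
    # Write each file as: <name length><data length><name bytes><data bytes>.
    with open(pak_path, 'wb') as pak:
        for name in sorted(os.listdir(src_dir)):
            with open(os.path.join(src_dir, name), 'rb') as f:
                data = f.read()
            header = name.encode('utf-8')
            pak.write(struct.pack('<II', len(header), len(data)))
            pak.write(header)
            pak.write(data)

def unpack(pak_path):
    # Read records back until end of file.
    samples = {}
    with open(pak_path, 'rb') as pak:
        while True:
            hdr = pak.read(8)
            if not hdr:
                break
            name_len, data_len = struct.unpack('<II', hdr)
            name = pak.read(name_len).decode('utf-8')
            samples[name] = pak.read(data_len)
    return samples

# Demo: pack two fake "faces" and read them back.
src = tempfile.mkdtemp()
with open(os.path.join(src, 'a.jpg'), 'wb') as f:
    f.write(b'face-a')
with open(os.path.join(src, 'b.jpg'), 'wb') as f:
    f.write(b'face-b')
pak_path = src + '.pak'  # outside src so pack() does not re-read it
pack(src, pak_path)
print(unpack(pak_path)['a.jpg'])  # b'face-a'
```

Sequential reads from one file are much cheaper than per-sample directory lookups, which is why the packed faceset loads faster.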
improved model generalization, overall accuracy and sharpness
by using the new 'Learning rate dropout' technique from the paper https://arxiv.org/abs/1912.00144
An example of a loss histogram where this function is enabled after the red arrow:
https://i.imgur.com/3olskOd.jpg
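The core idea of learning rate dropout, sketched in NumPy under my reading of the paper (names are illustrative, not DeepFaceLab's code): on every optimizer step each parameter independently keeps its learning rate with probability p or gets 0, so only a random subset of weights moves per step:

```python
import numpy as np

def lrd_sgd_step(weights, grads, lr=0.01, keep_prob=0.7, rng=None):
    # Bernoulli mask over parameters: 1 = apply this step's update,
    # 0 = freeze the parameter for this step only.
    rng = rng or np.random.default_rng(0)
    mask = rng.random(weights.shape) < keep_prob
    return weights - lr * mask * grads

w = np.zeros(1000)
g = np.ones(1000)
new_w = lrd_sgd_step(w, g, lr=0.1, keep_prob=0.7)
print((new_w != w).mean())  # fraction updated, roughly keep_prob
```

Unlike ordinary dropout, no activations are dropped; only the updates are randomly gated, which the paper reports as a regularizer that helps escape poor minima.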
This is the fastest model for low-end cards.
The model has no options and trains a 96px full-face model.
It is good for a quick deepfake demo.
Example of the preview trained in 15 minutes on RTX2080Ti:
https://i.imgur.com/oRMvZFP.jpg