Pretrain the model with a large amount of various faces. This technique may help when the src/dst data have very different face shapes and lighting conditions, but the resulting face will look more morphed. To reduce the morph effect, some model files are re-initialized when pretraining ends, so they do not carry the pretrained weights into normal training: LIAE: inter_AB.h5; DF: both decoder .h5 files. The longer you pretrain the model, the more morphed the face will look. After pretraining, save and run the training again.
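Below is a minimal sketch (not the actual DFL code) of how such a selective weight reload could work; the file names, the archi values and the pretrain_just_disabled flag are assumptions for illustration.

import os

def weight_files_to_load(archi, model_dir, pretrain_just_disabled):
    # All weight files of the chosen architecture (illustrative names).
    all_files = {
        'liae': ['encoder.h5', 'inter_B.h5', 'inter_AB.h5', 'decoder.h5'],
        'df':   ['encoder.h5', 'decoder_src.h5', 'decoder_dst.h5'],
    }[archi]

    # Identity-specific parts that should not carry pretrained weights into
    # normal training; they are simply not loaded, so they keep a fresh init.
    skip = []
    if pretrain_just_disabled:
        skip = {'liae': ['inter_AB.h5'],
                'df':   ['decoder_src.h5', 'decoder_dst.h5']}[archi]

    return [os.path.join(model_dir, f) for f in all_files if f not in skip]

# Example: switching from pretrain to normal training on a DF model.
print(weight_files_to_load('df', 'workspace/model', pretrain_just_disabled=True))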
Fixed a bug where the SAE model could collapse over time.
SAE: removed the CA weights option and the combined encoder/decoder dims option.
Added new options:
Encoder dims per channel (21-85 ?:help skip:%d)
More encoder dims help the model recognize more facial features, but require more VRAM. You can fine-tune the model size to fit your GPU.
Decoder dims per channel (11-85 ?:help skip:%d)
More decoder dims help to get better details, but require more VRAM. You can fine-tune the model size to fit your GPU.
Add residual blocks to decoder? (y/n, ?:help skip:n) :
These blocks help to get better details, but require more computing time (see the decoder sketch after these options).
Remove gray border? (y/n, ?:help skip:n) :
Removes the gray border of the predicted face, but requires more computing resources.
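As a rough illustration of how the per-channel dims and the residual-block option shape the network, here is a minimal Keras-style sketch; the layer layout, filter multipliers and function names are assumptions, not the actual SAE architecture.

from tensorflow.keras import layers, models

def residual_block(x, filters):
    # Two 3x3 convolutions whose output is added back to the input.
    y = layers.Conv2D(filters, 3, padding='same')(x)
    y = layers.LeakyReLU(0.2)(y)
    y = layers.Conv2D(filters, 3, padding='same')(y)
    y = layers.Add()([x, y])
    return layers.LeakyReLU(0.2)(y)

def build_decoder(latent_shape=(8, 8, 512), d_ch_dims=21, add_residual_blocks=False):
    # d_ch_dims scales the number of filters in every upscale step:
    # more dims -> more capacity for fine detail, but more VRAM.
    inp = layers.Input(shape=latent_shape)
    x = inp
    for mult in (8, 4, 2):                                   # 8x8 -> 16x16 -> 32x32 -> 64x64
        filters = d_ch_dims * mult
        x = layers.Conv2DTranspose(filters, 3, strides=2, padding='same')(x)
        x = layers.LeakyReLU(0.2)(x)
        if add_residual_blocks:
            x = residual_block(x, filters)                   # better detail, more compute
    out = layers.Conv2D(3, 5, padding='same', activation='sigmoid')(x)  # BGR face
    return models.Model(inp, out)

# Doubling d_ch_dims roughly doubles the decoder width (and its VRAM footprint).
decoder = build_decoder(d_ch_dims=21, add_residual_blocks=True)
decoder.summary()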