An issue affecting at least the RTX 2070 and 2080 cards (possibly other RTX cards too) requires TensorFlow's GPU memory auto growth (allow_growth) to be enabled for TensorFlow to work.
I don't know enough about the impact of this change to say whether it ought to be made optional, but for RTX owners this simple change fixes TensorFlow errors when generating models.
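A minimal sketch of what the allow-growth change looks like in TensorFlow 1.x (the exact place it lives in this repo may differ):

```python
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of all at once
sess = tf.Session(config=config)
```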
Enable autobackup? (y/n ?:help skip:%s) :
Autobackup saves the model files with a preview every hour, keeping the last 15 hours. The latest backup is located in model/<>_autobackups/01
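A hedged sketch of how such a rotation can work, assuming folders 01..15 with 01 holding the newest snapshot (the actual implementation may differ):

```python
import shutil
from pathlib import Path

def rotate_autobackups(backups_root, keep=15):
    backups_root = Path(backups_root)            # e.g. model/<name>_autobackups
    oldest = backups_root / ('%.2d' % keep)
    if oldest.exists():
        shutil.rmtree(oldest)                    # drop the oldest snapshot
    for i in range(keep - 1, 0, -1):             # shift 14 -> 15, ..., 01 -> 02
        src = backups_root / ('%.2d' % i)
        if src.exists():
            src.rename(backups_root / ('%.2d' % (i + 1)))
    (backups_root / '01').mkdir(parents=True)    # fresh slot for the newest backup
```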
SAE: added an option (CUDA builds only):
Enable gradient clipping? (y/n, ?:help skip:%s) :
Gradient clipping reduces the chance of model collapse at the cost of some training speed.
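For context, a minimal sketch of gradient clipping with a Keras optimizer (illustrative, not the exact SAE code; the learning rate and clip value are assumed example values):

```python
from keras.optimizers import Adam

# clipnorm caps the gradient norm at each update, trading a little speed
# for stability against model collapse
optimizer = Adam(lr=5e-5, clipnorm=1.0)
```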
Pretrain the model with a large amount of varied faces. This technique may help to train the fake when the src/dst data have very different face shapes and lighting conditions. The face will look more like a morph. To reduce the morph effect, some model files will be initialized but not updated after pretraining: LIAE: inter_AB.h5, DF: both decoder .h5 files. The longer you pretrain the model, the more morphed the face will look. After pretraining, save and run the training again.
Pixel loss may help to enhance fine details and stabilize face color. Use it only if quality does not improve over time.
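An illustrative sketch of what a pixel loss term looks like: a plain per-pixel mean absolute error added on top of the structural loss (the function name and exact form here are assumptions, not the repo's code):

```python
import keras.backend as K

def pixel_loss(y_true, y_pred):
    # mean absolute error per sample over height, width and channels
    return K.mean(K.abs(y_true - y_pred), axis=[1, 2, 3])
```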
SAE:
previous SAE models will not work with this update.
Greatly decreased chance of model collapse.
Increased model accuracy.
Residual blocks are now the default, and this option has been removed.
Improved 'learn mask'.
Added masked preview (toggle with the space key)
Converter:
fixed rct/lct in seamless mode
added mask mode (6) learned*FAN-prd*FAN-dst (see the sketch after this list)
added a mask editor; it is intended for refining the dataset for the FANSeg model, not for production, but you can spend some time testing it on regular fakes with face obstructions
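A hedged sketch of what the (6) learned*FAN-prd*FAN-dst mode implies: the three masks (values in [0, 1]) are combined by element-wise multiplication, so a pixel is kept only where every mask agrees. The variable names are illustrative, not the converter's actual identifiers:

```python
import numpy as np

learned_mask = np.random.rand(128, 128, 1)   # mask learned by the model
fan_prd_mask = np.random.rand(128, 128, 1)   # FAN segmentation of the predicted face
fan_dst_mask = np.random.rand(128, 128, 1)   # FAN segmentation of the destination face

final_mask = learned_mask * fan_prd_mask * fan_dst_mask
```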
added Intel's plaidML backend to use the OpenCL engine. Check the new requirements.
smart backend selection in device.py
the env var 'force_plaidML' can be set to force the use of plaidML (see the sketch after this list)
all tf functions transferred to pure keras
MTCNN transferred to pure keras, but it runs slowly on plaidML (forced to CPU in that case)
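An illustrative example of forcing the plaidML backend via the environment variable mentioned above; whether the code checks only for the variable's presence or for a specific value such as '1' is an assumption here:

```python
import os

# set before DeepFaceLab imports its framework
os.environ['force_plaidML'] = '1'
```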
the default batch size for all models and VRAM sizes is now 4; feel free to adjust it on your own
SAE: default style options are now ZERO, because there are no best values for all scenes; set them on your own.
SAE: brought back the pixel_loss option; feel free to enable it on your own.
SAE: added the multiscale_decoder option (default: true); you can disable it to get behaviour 100% identical to the H, DF, and LIAEF models.
fixed converter output to .png
added a Linux fork reference to doc/doc_build_and_repository_info.md
Model file names will be prefixed with the GPU index if a GPU is chosen explicitly at train/convert start.
If you leave the GPU idx choice at its default, the best GPU idx will be chosen and the model file names will not contain an index prefix.
This gives you the possibility to train the same fake with different models or options on multiple GPUs.
H64 and H128: you can now choose 'Lighter autoencoder'. It is the same as the behaviour for VRAM <= 4 GB before this update.
added archived_models.zip containing old experiments
RecycleGAN: archived
devicelib: if your system has no NVML installed (some old cards), it will fall back to gpu_idx=0 as a 'Generic GeForce GPU' with 2 GB of VRAM.
refactorings