now you have 3 ways:
1) define light directions manually (not for Google Colab)
watch the demo: https://youtu.be/79xz7yEO5Jw
2) relight faceset with one random direction
3) relight faceset with predefined 8 directions
Synthesize new faces from existing ones by relighting them with the DeepPortraitRelighter network.
With relighted faces, the neural network will better reproduce face shadows.
Therefore you can synthesize shadowed faces from a fully lit faceset.
https://i.imgur.com/wxcmQoi.jpg
as a result, better fakes on dark faces:
https://i.imgur.com/5xXIbz5.jpg
in the OpenCL build the Relighter runs on CPU;
install PyTorch directly via pip install, see the requirements
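For illustration, a minimal sketch of option 2 (relighting a faceset with one random direction). The DeepPortraitRelighter wrapper and its relight() method are illustrative assumptions here, not the actual DFL API:

# sketch: relight a whole faceset with one random light direction.
# DeepPortraitRelighter and relight() are illustrative names, not the DFL API.
from pathlib import Path
import numpy as np
import cv2

def relight_faceset(faceset_dir, relighter, seed=None):
    rng = np.random.default_rng(seed)
    # one random direction on the unit sphere, reused for every face
    # so the whole faceset gets consistent lighting
    light_dir = rng.normal(size=3)
    light_dir /= np.linalg.norm(light_dir)
    for img_path in Path(faceset_dir).glob('*.jpg'):
        img = cv2.imread(str(img_path))
        relit = relighter.relight(img, light_dir)  # assumed interface
        cv2.imwrite(str(img_path.with_name(img_path.stem + '_relit.jpg')), relit)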
fixed crashes
removed useless 'ebs' color transfer
changed keys for color degrade
added image degrade via denoise - same as "denoise extracted data_dst.bat",
but you can control this option directly in the interactive converter
added image degrade via bicubic downscale and upscale
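The bicubic degrade itself is just a downscale followed by an upscale back to the original size; a minimal sketch with OpenCV, where the power-to-scale mapping is an assumption rather than the converter's exact formula:

import cv2

def bicubic_degrade(img, power):
    # power in 0..1; higher = stronger degrade (assumed mapping)
    h, w = img.shape[:2]
    scale = max(0.05, 1.0 - power)
    small = cv2.resize(img, (max(1, int(w * scale)), max(1, int(h * scale))),
                       interpolation=cv2.INTER_CUBIC)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_CUBIC)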
SAEHD: default ae_dims for df is now 256.
* Mask editor: added function to re-apply changes (restore mask functionality).
Once a mask is saved (using 'c'), the mask tool can apply the same modifications to the next alignment (by pressing 'r'), reducing routine work.
fixed model sizes from previous update.
avoided a bug in the ML framework (Keras) that forced the model to train on random noise.
Converter: added blur on the same keys as sharpness
Added new model 'TrueFace'. This is a GAN model ported from https://github.com/NVlabs/FUNIT
The model produces near-zero morphing and highly detailed faces,
but has a higher failure rate than other models.
Keep the src and dst facesets in the same lighting conditions.
Session is now saved to the model folder.
blur and erode ranges are increased to -400..+400
hist-match-bw is now replaced with seamless2 mode.
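For context, the seamless modes are built around OpenCV's Poisson blending; a minimal sketch of that primitive (whatever seamless2 adds on top is not shown, and the mask handling here is illustrative):

import cv2

def seamless_paste(face_bgr, frame_bgr, mask_u8):
    # center of the mask's bounding box, as seamlessClone requires
    x, y, w, h = cv2.boundingRect(mask_u8)
    center = (x + w // 2, y + h // 2)
    # Poisson blending: gradients of the face are mixed into the frame
    return cv2.seamlessClone(face_bgr, frame_bgr, mask_u8, center, cv2.NORMAL_CLONE)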
Added 'ebs' color transfer mode (works only on Windows).
The FANSEG model (used in FAN-x mask modes) has been retrained with a new model configuration
and now produces better precision and less jitter.
if input frames are changed (amount or filenames),
the interactive converter automatically starts a new session.
if the model is more trained, all frames will be recomputed again with their saved configs.
ConvertAvatar: fixed input image handling after the landmarks-based face align.
VideoEd: video_from_sequence now uses pipe input, so it accepts any filenames instead of %.5d-formatted sequences.
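A minimal sketch of the pipe-input approach using the ffmpeg-python package; the exact options VideoEd passes may differ, and the frames are assumed to be JPEG-encoded:

import ffmpeg  # the ffmpeg-python package
from pathlib import Path

def video_from_files(file_list, output_path, fps=30):
    # feed encoded frames over stdin, so filenames can be anything
    process = (
        ffmpeg
        .input('pipe:', format='image2pipe', vcodec='mjpeg', r=fps)
        .output(str(output_path), pix_fmt='yuv420p')
        .overwrite_output()
        .run_async(pipe_stdin=True)
    )
    for f in file_list:
        process.stdin.write(Path(f).read_bytes())  # raw encoded image bytes
    process.stdin.close()
    process.wait()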
With the interactive converter you can change any parameter of any frame and see the result in real time.
Converter: added motion_blur_power param.
Motion blur is applied using precomputed motion vectors,
so the moving face looks more realistic.
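For illustration, a sketch of how motion-vector-driven blur can work; it uses OpenCV's Farneback optical flow as a stand-in for the precomputed motion vectors, and the kernel construction and power mapping are assumptions, not the actual converter code:

import cv2
import numpy as np

def face_motion(prev_gray, cur_gray, face_mask):
    # dense optical flow gives a per-pixel motion vector field;
    # average it over the face region to get one vector
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    m = face_mask > 0
    return flow[..., 0][m].mean(), flow[..., 1][m].mean()

def directional_blur(img, dx, dy, power):
    length = int(np.hypot(dx, dy) * power)
    if length < 1:
        return img
    # a line kernel along the motion direction acts as motion blur
    kernel = np.zeros((length, length), np.float32)
    cv2.line(kernel, (0, length // 2), (length - 1, length // 2), 1.0, 1)
    angle = np.degrees(np.arctan2(dy, dx))
    M = cv2.getRotationMatrix2D((length / 2, length / 2), angle, 1.0)
    kernel = cv2.warpAffine(kernel, M, (length, length))
    s = kernel.sum()
    if s == 0:
        return img
    return cv2.filter2D(img, -1, kernel / s)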
RecycleGAN model is removed.
Added experimental AVATAR model. Minimum required VRAM is 6GB for NVIDIA and 12GB for AMD.
Usage:
1) place data_src.mp4 - a 10-20 min square-resolution video of a news reporter sitting at a table with a static background;
other faces should not appear in frames.
2) process "extract images from video data_src.bat" with FULL fps
3) place data_dst.mp4 - video of the face that will control the src face
4) process "extract images from video data_dst FULL FPS.bat"
5) process "data_src mark faces S3FD best GPU.bat"
6) process "data_dst extract unaligned faces S3FD best GPU.bat"
7) train AVATAR.bat stage 1, tune batch size to maximum for your card (32 for 6GB), train to 50k+ iters.
8) train AVATAR.bat stage 2, tune batch size to maximum for your card (4 for 6GB), train to decent sharpness.
9) convert AVATAR.bat
10) converted to mp4.bat
updated versions of modules
Pretrain the model with a large amount of various faces.
This technique may help to train fakes when the src and dst data have overly different face shapes and lighting conditions.
The face will look more like a morph. To reduce the morph effect, some model files are initialized but not updated after pretraining:
LIAE: inter_AB.h5; DF: both decoders.h5.
The longer you pretrain the model, the more morphed the face will look.
After that, save and run the training again.
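A rough sketch of what such a post-pretrain reset looks like in Keras terms; reinit_weights and its init scheme are illustrative assumptions, not the actual DFL code:

import numpy as np

def reinit_weights(keras_model):
    # discard the pretrained weights of this sub-network; only the rest
    # of the model keeps what pretraining learned
    new_weights = []
    for w in keras_model.get_weights():
        if w.ndim > 1:
            # rough Glorot-style uniform init for kernels
            limit = np.sqrt(6.0 / (w.shape[0] + w.shape[-1]))
            new_weights.append(np.random.uniform(-limit, limit, w.shape).astype(w.dtype))
        else:
            new_weights.append(np.zeros_like(w))  # biases to zero
    keras_model.set_weights(new_weights)

# e.g. after LIAE pretraining: reinit_weights(inter_AB)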