Maximum resolution is increased to 640.
The ‘hd’ archi is removed. It was an experimental archi created to remove subpixel shake, but ‘lr_dropout’ and ‘disable random warping’ do that better.
‘uhd’ is renamed to ‘-u’
dfuhd and liaeuhd will be automatically renamed to df-u and liae-u in existing models.
Added a new experimental archi (key -d) which doubles the resolution at the same computation cost.
This means the same config trains about 2x faster; for example, you can set 448 resolution and it will train at the cost of 224. (A sketch of the underlying idea follows the naming examples below.)
It is strongly recommended not to train it from scratch, but to use pretrained models.
New archi naming:
'df' keeps the face more identity-preserved.
'liae' can fix overly different face shapes.
'-u' increases likeness of the face.
'-d' (experimental) doubles the resolution at the same computation cost.
Examples: df, liae, df-d, df-ud, liae-ud, ...
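For reference, here is a minimal NumPy sketch of the space-to-depth / depth-to-space idea behind '-d' (an illustration of the general technique, not DeepFaceLab's actual code): the input is repacked to half resolution before the convolutions, and the output is repacked back, so a 448 image is processed at the cost of 224.

import numpy as np

def space_to_depth(x, r=2):
    # (H, W, C) -> (H/r, W/r, C*r*r): halves the spatial size of the tensor
    h, w, c = x.shape
    x = x.reshape(h // r, r, w // r, r, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(h // r, w // r, c * r * r)

def depth_to_space(x, r=2):
    # inverse 'pixel shuffle': (H/r, W/r, C*r*r) -> (H, W, C)
    h, w, c = x.shape
    x = x.reshape(h, w, r, r, c // (r * r))
    return x.transpose(0, 2, 1, 3, 4).reshape(h * r, w * r, c // (r * r))

img = np.random.rand(448, 448, 3).astype(np.float32)
packed = space_to_depth(img)       # (224, 224, 12): the network runs at 224
restored = depth_to_space(packed)  # (448, 448, 3)
assert np.allclose(img, restored)  # the repacking itself is lossless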
Improved GAN training (GAN_power option). Previously it was also applied to the dst model, but dst does not actually need it.
Instead, a second src GAN model with a 2x smaller patch size was added, so the overall quality of hi-res models should be higher.
Added option ‘Uniform yaw distribution of samples (y/n)’:
Helps to fix blurry side faces caused by the small number of them in the faceset (sketched below).
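A minimal sketch of the idea (an illustration, not DeepFaceLab's actual sampler): faces are bucketed by yaw angle and buckets are drawn uniformly, so rare side faces are seen as often as frontal ones.

import random
from collections import defaultdict

def make_uniform_yaw_sampler(samples, get_yaw, n_bins=32):
    bins = defaultdict(list)
    for s in samples:
        # map yaw in [-90, 90] degrees to a bin index
        idx = min(n_bins - 1, int((get_yaw(s) + 90.0) / 180.0 * n_bins))
        bins[idx].append(s)
    non_empty = list(bins.values())
    def draw():
        # uniform over bins first, then within the chosen bin
        return random.choice(random.choice(non_empty))
    return draw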
Quick96:
Now based on df-ud archi and 20% faster.
XSeg trainer:
Improved sample generator.
It now randomly adds backgrounds from other samples.
The result is a reduced chance of random mask noise in the area outside the face.
Now you can specify ‘batch_size’ in range 2-16.
Reduced the size of samples with an applied XSeg mask; the size of packed samples with an applied XSeg mask is reduced accordingly.
Now you can replace the head.
Example: https://www.youtube.com/watch?v=xr5FHd0AdlQ
Requirements:
Post-processing skills in Adobe After Effects or DaVinci Resolve.
Usage:
1) Find suitable dst footage with a monotonous background behind the head
2) Use “extract head” script
3) Gather a rich src headset from a single scene (same color and haircut)
4) Mask the whole head for src and dst using the XSeg editor
5) Train XSeg
6) Apply the trained XSeg mask to the src and dst headsets
7) Train SAEHD using the ‘head’ face_type as a regular deepfake model with the DF archi. You can use a pretrained model for head. The minimum recommended resolution for head is 224.
8) Extract multiple tracks, using Merger:
a. Raw-rgb
b. XSeg-prd mask
c. XSeg-dst mask
9) Using Adobe After Effects or DaVinci Resolve, do:
a. Hide the source head using the XSeg-prd mask: content-aware fill, clone stamp, background retraction, or another technique
b. Overlay the new head using the XSeg-dst mask
Warning: a head faceset can be used for whole_face or narrower face types of training only with XSeg masking.
XSegEditor: added button ‘view trained XSeg mask’, so you can see which frames should be masked to improve mask quality.
5.XSeg) data_dst/src mask for XSeg trainer - fetch.bat
Copies faces containing XSeg polygons to aligned_xseg\ dir.
Useful only if you want to collect labeled faces and reuse them in other fakes.
Now you can use the trained XSeg mask in the SAEHD training process.
This means the default ‘full_face’ mask obtained from landmarks will be replaced with the mask obtained from the trained XSeg model.
use
5.XSeg.optional) trained mask for data_dst/data_src - apply.bat
5.XSeg.optional) trained mask for data_dst/data_src - remove.bat
Normally you don’t need these. Use them if you want to use ‘face_style’ and ‘bg_style’ with obstructions.
XSeg trainer: now you can choose the face type.
XSeg trainer: now you can restart training in “override settings”.
Merger: XSeg-* modes now can be used with all types of faces.
The old MaskEditor, FANSEG models, and FAN-x modes have therefore been removed,
because the new XSeg solution is better, simpler, and more convenient, and costs only about 1 hour of manual masking for a regular deepfake.
Basic usage instruction: https://i.imgur.com/w7LkId2.jpg
'whole_face' requires skill in Adobe After Effects.
To use whole_face, you have to extract whole_face faces by using
4) data_src extract whole_face
and
5) data_dst extract whole_face
Images are extracted at 512 resolution, so they can also be used for regular full_face and half_face models.
'whole_face' covers the whole area of the face, including the forehead, in the training square,
but the training mask is still 'full_face',
therefore it requires manual final masking and compositing in Adobe After Effects.
added option 'masked_training'
This option is available only for 'whole_face' type.
Default is ON.
Masked training clips the training area to the full_face mask,
so the network trains the face properly (sketched below).
When the face is trained enough, disable this option to train the whole area of the frame.
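A minimal sketch of the idea (illustration only, not the actual implementation): the loss is computed only inside the full_face mask, so background pixels do not drive the network.

import numpy as np

def masked_mse(pred, target, mask):
    # pred/target: (H, W, 3) float32 in [0,1]; mask: (H, W, 1) in [0,1]
    diff = (pred * mask - target * mask) ** 2
    return float(diff.sum() / (mask.sum() * 3 + 1e-8))  # average over masked pixels only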
Merge with 'raw-rgb' mode, then use Adobe After Effects to manually mask, tune color, and composite the whole face, including the forehead.
added option Eyes priority (y/n)
Fixes eye problems during training (especially on HD architectures)
by forcing the neural network to train the eyes with higher priority.
before/after https://i.imgur.com/YQHOuSR.jpg
It does not guarantee correct eye direction.
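A minimal sketch of the idea (illustration only; the weight value is an assumption): pixels inside an eye mask get an extra weight in the reconstruction loss.

import numpy as np

def eyes_priority_loss(pred, target, eyes_mask, eyes_weight=10.0):
    # pred/target: (H, W, 3); eyes_mask: (H, W, 1), 1.0 inside the eye regions
    w = 1.0 + (eyes_weight - 1.0) * eyes_mask  # 1.0 elsewhere, eyes_weight on eyes
    return float(np.mean(w * (pred - target) ** 2))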
SAEHD:
added new option
GAN power 0.0 .. 10.0
Trains the network in a generative adversarial manner.
Forces the neural network to learn small details of the face.
You can enable or disable this option at any time,
but it is better to enable it when the network is trained enough.
The typical value is 1.0.
GAN power does not work in pretrain mode.
Example of enabling GAN at 81k iters (+5k iters):
https://i.imgur.com/OdXHLhU.jpg
https://i.imgur.com/CYAJmJx.jpg
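A minimal sketch of how such a power setting can mix losses (illustration only; the least-squares GAN term and the names here are assumptions, not DeepFaceLab's actual code):

import numpy as np

def lsgan_g_loss(d_fake):
    # least-squares GAN generator loss: push D's score on fakes toward 1
    return float(np.mean((d_fake - 1.0) ** 2))

def total_g_loss(recon_loss, d_fake, gan_power=1.0):
    # gan_power = 0.0 disables the adversarial term; 1.0 is the typical value
    return recon_loss + gan_power * lsgan_g_loss(d_fake)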
dfhd: default Decoder dimensions are now 48
the preview for 256 res is now correctly displayed
fixed model naming/renaming/removing
Improvements for those involved in post-processing in After Effects:
The codec is reverted back to x264 so that the output can be used properly in After Effects and video players.
Merger now always outputs the mask to workspace\data_dst\merged_mask
removed raw modes except raw-rgb
raw-rgb mode now outputs the selected face mask_mode (previously a square mask)
'export alpha mask' button is replaced by 'show alpha mask'.
You can view the alpha mask without recomputing the frames.
8) 'merged *.bat' scripts now also output a 'result_mask' video file.
8) 'merged lossless' now uses the x264 lossless codec (previously the PNG codec).
The result_mask video file is always lossless.
Thus you can use the result_mask video file as a mask layer in After Effects.
Removed the wait at first launch for most graphics cards.
Increased speed of training by 10-20%, but you have to retrain all models from scratch.
SAEHD:
added option 'use float16'
Experimental option. Reduces the model size by half.
Increases the speed of training.
Decreases the accuracy of the model.
The model may collapse or not train.
The model may not learn the mask at large resolutions.
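For reference, a minimal sketch of the float16 technique using the modern tf.keras mixed-precision API (illustration only; DeepFaceLab has its own implementation, and in this Keras variant the weights stay float32 while activations run in float16):

import tensorflow as tf

tf.keras.mixed_precision.set_global_policy('mixed_float16')

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation='relu', input_shape=(64,)),
    # keep the final layer float32 for numerical stability
    tf.keras.layers.Dense(10, dtype='float32'),
])
# activations run in float16, roughly halving activation memory and
# speeding up training on tensor-core GPUs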
true_face_training option is replaced by
"True face power". 0.0000 .. 1.0
Experimental option. Discriminates the result face to be more like the src face. A higher value means stronger discrimination.
Comparison - https://i.imgur.com/czScS9q.png
A more stable and precise version of the face transformation matrix.
full_face faces are now aligned to the upper and lateral boundaries of the frame;
the result: cut-off mouths are fixed and the cheek area of side faces is increased.
before/after https://i.imgur.com/t9IyGZv.jpg
Additional training is therefore required for existing models.
Optionally, you can re-extract the dst faces of your project if they have problems with a cut-off mouth or cheeks.
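A minimal sketch of landmark-based alignment via a similarity transform (an illustration of the general technique; the function and parameter names here are not DeepFaceLab's):

import cv2
import numpy as np

def align_face(img, src_landmarks, dst_template, out_size=512):
    # src_landmarks / dst_template: (N, 2) float32 point sets;
    # estimate rotation + uniform scale + translation, then warp
    M, _ = cv2.estimateAffinePartial2D(src_landmarks, dst_template)
    return cv2.warpAffine(img, M, (out_size, out_size), flags=cv2.INTER_LANCZOS4)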
removed option 'apply random ct'
added option
Color transfer mode apply to src faceset. ( none/rct/lct/mkl/idt, ?:help skip: none )
Changes the color distribution of src samples to be close to the dst samples. Try all modes to find the best one.
Previously lct mode was always used, but it sometimes does not work properly for some facesets.
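A minimal sketch of Reinhard-style color transfer, the idea behind the 'rct' mode (illustration only): match the mean and standard deviation of the src face to the dst face in LAB color space.

import cv2
import numpy as np

def reinhard_ct(src_bgr, dst_bgr):
    src = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    dst = cv2.cvtColor(dst_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    s_mean, s_std = src.mean((0, 1)), src.std((0, 1)) + 1e-6
    d_mean, d_std = dst.mean((0, 1)), dst.std((0, 1))
    out = (src - s_mean) / s_std * d_std + d_mean
    return cv2.cvtColor(np.clip(out, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)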
Random warp is required to generalize the facial expressions of both faces. When the face is trained enough, you can disable it to get extra sharpness in fewer iterations.
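A minimal sketch of a random warp (illustration only, not DeepFaceLab's exact augmentation): displace a coarse grid by random offsets and remap the image, so the network never sees pixel-identical pairs and must generalize.

import cv2
import numpy as np

def random_warp(img, cell=64, strength=0.1, rnd=np.random):
    h, w = img.shape[:2]
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # coarse random offsets, interpolated up to full resolution
    noise = rnd.uniform(-strength * cell, strength * cell,
                        (h // cell + 1, w // cell + 1, 2)).astype(np.float32)
    noise = cv2.resize(noise, (w, h), interpolation=cv2.INTER_CUBIC)
    return cv2.remap(img, gx + noise[..., 0], gy + noise[..., 1],
                     cv2.INTER_LINEAR, borderMode=cv2.BORDER_REPLICATE)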
removed TrueFace model.
added SAEv2 model. Differences from SAE:
+ default e_ch_dims is now 21
+ new encoder produces more stable face and less scale jitter
before: https://i.imgur.com/4jUcol8.gifv
after: https://i.imgur.com/lyiax49.gifv - scale of the face is less changed within frame size
+ the decoder now has only 1 residual block instead of 2; the result is the same quality with a smaller decoder
+ added mid-full face, which covers 30% more area than half face.
+ added option " Enable 'true face' training "
Enable it only after 50k iters, when the face is sharp enough.
The result face will be more like src.
The most src-like face with 'true face' training is achieved with the DF architecture.
Session is now saved to the model folder.
blur and erode ranges are increased to -400..+400
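A minimal sketch of signed erode/blur mask modifiers (illustration only; the exact semantics in the converter are assumptions here — positive values shrink the mask, negative values expand it):

import cv2
import numpy as np

def modify_mask(mask, erode_mod=0, blur_mod=0):
    # mask: (H, W) float32 in [0, 1]
    kernel = np.ones((3, 3), np.uint8)
    if erode_mod > 0:
        mask = cv2.erode(mask, kernel, iterations=erode_mod)
    elif erode_mod < 0:
        mask = cv2.dilate(mask, kernel, iterations=-erode_mod)
    if blur_mod > 0:
        k = blur_mod * 2 + 1  # Gaussian kernel size must be odd
        mask = cv2.GaussianBlur(mask, (k, k), 0)
    return np.clip(mask, 0.0, 1.0)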
hist-match-bw is now replaced with seamless2 mode.
Added 'ebs' color transfer mode (works only on Windows).
The FANSEG model (used in FAN-x mask modes) is retrained with a new model configuration
and now produces better precision and less jitter.
ConvertAvatar: fixed the input image after the landmarks face-align fix.
VideoEd: video_from_sequence now uses pipe input, so it accepts any filenames instead of only %.5d-formatted ones.
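A minimal sketch of the piping technique (illustration only, not the actual VideoEd code): arbitrarily named frames are written to ffmpeg's stdin via the image2pipe demuxer.

import subprocess
from pathlib import Path

frames = sorted(Path('merged').glob('*.png'))  # any filenames, in any order you choose
proc = subprocess.Popen(
    ['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-framerate', '25',
     '-i', '-', '-c:v', 'libx264', '-pix_fmt', 'yuv420p', 'result.mp4'],
    stdin=subprocess.PIPE)
for f in frames:
    proc.stdin.write(f.read_bytes())
proc.stdin.close()
proc.wait()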
With interactive converter you can change any parameter of any frame and see the result in real time.
Converter: added motion_blur_power param.
Motion blur is applied using precomputed motion vectors,
so the moving face will look more realistic.
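A minimal sketch of the technique (illustration only): build a line kernel along the motion direction, sized by the motion magnitude and a power setting, and filter the image with it.

import cv2
import numpy as np

def motion_blur(img, dx, dy, power=1.0):
    length = max(1, int(np.hypot(dx, dy) * power))  # kernel size from motion magnitude
    if length <= 1:
        return img
    kernel = np.zeros((length, length), np.float32)
    kernel[length // 2, :] = 1.0  # horizontal line, rotated to the motion direction below
    M = cv2.getRotationMatrix2D((length / 2, length / 2),
                                np.degrees(np.arctan2(dy, dx)), 1.0)
    kernel = cv2.warpAffine(kernel, M, (length, length))
    kernel /= kernel.sum() + 1e-8
    return cv2.filter2D(img, -1, kernel)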
RecycleGAN model is removed.
Added an experimental AVATAR model. Minimum required VRAM is 6GB (NVIDIA) or 12GB (AMD).
Usage:
1) Place data_src.mp4, a 10-20 min square-resolution video of a news reporter sitting at a table with a static background;
other faces should not appear in the frames.
2) process "extract images from video data_src.bat" with FULL fps
3) Place data_dst.mp4, a video of the face that will control the src face.
4) process "extract images from video data_dst FULL FPS.bat"
5) process "data_src mark faces S3FD best GPU.bat"
6) process "data_dst extract unaligned faces S3FD best GPU.bat"
7) train AVATAR.bat stage 1, tune batch size to maximum for your card (32 for 6GB), train to 50k+ iters.
8) train AVATAR.bat stage 2, tune batch size to maximum for your card (4 for 6GB), train to decent sharpness.
9) convert AVATAR.bat
10) converted to mp4.bat
updated versions of modules