DeepFaceLab is a tool that utilizes deep learning to recognize and swap faces in pictures and videos.
Based on the original FaceSwap repo. Facesets from FaceSwap or FakeApp are not compatible with this repo; you will have to run extract again.
Features:
- new models
- new architecture, easy to experiment with models
- works on old 2GB cards, such as the GT730. Example of a fake trained on a 2GB GTX 850M notebook in 18 hours: https://www.youtube.com/watch?v=bprVuRxBA34
- face data embedded into png files (see the sketch after this list)
- automatic GPU manager, which chooses the best GPU(s) and supports --multi-gpu
- new preview window
- parallel extractor
- parallel converter
- added --debug option for all stages
- added MTCNN extractor, which produces less jittered aligned faces than DLIBCNN but can produce more false positives. Comparison of dlib (left) vs MTCNN on a hard case: MTCNN produces less jitter.
- added Manual extractor. You can fix missed faces manually or do a full manual extract, click on video:
- standalone, zero-dependency, ready-to-work prebuilt binary for all Windows versions, see below
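The "face data embedded into png files" feature means every aligned face image carries its own extraction metadata, so a faceset needs no separate alignments file. Below is a minimal sketch of the general idea using Pillow text chunks; the `face_data` key and the JSON layout are illustrative assumptions, not DeepFaceLab's actual on-disk format.

```python
# Illustrative sketch only -- DeepFaceLab's real embedded format differs.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def write_face_png(src_path, dst_path, landmarks, source_filename):
    """Save a copy of the image with face metadata in a PNG text chunk."""
    meta = PngInfo()
    meta.add_text("face_data", json.dumps({
        "landmarks": landmarks,            # list of [x, y] points (assumed layout)
        "source_filename": source_filename,
    }))
    Image.open(src_path).save(dst_path, pnginfo=meta)

def read_face_png(path):
    """Return the embedded metadata dict, or None if the chunk is missing."""
    raw = Image.open(path).info.get("face_data")
    return json.loads(raw) if raw else None
```

Keeping the metadata inside the PNG is what lets the sorting and training tools work on a plain folder of images.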
Model types:
- H64 (2GB+) - half face with 64 resolution. Same as the original FakeApp or FaceSwap, but with the new TensorFlow 1.8 DSSIM loss function (see the sketch after this list) and a separated mask decoder + a better ConverterMasked. For 2GB and 3GB VRAM the model works in reduced mode.
- H128 (3GB+) - same as H64, but at 128 resolution. Better face details. For 3GB and 4GB VRAM the model works in reduced mode.
- DF (5GB+) - @dfaker model. Same as H128, but a full-face model.
- DF example - later
- LIAEF128 (5GB+) - new model. The result of combining DF, IAE, and further experiments. The model tries to morph the src face to dst while keeping the facial features of the src face, with less aggressive morphing. The model has problems recognizing closed eyes.
- LIAEF128YAW (5GB+) - currently being tested. Useful when your src faceset has too many side faces compared to the dst faceset. It feeds the NN samples sorted by yaw.
- MIAEF128 (5GB+) - same as LIAEF128, but it also tries to match brightness/color features.
- AVATAR (4GB+) - face-controlling model. Usage:
  - src - controllable face (Cage)
  - dst - controller face (your face)
  - converter --input-dir contains the aligned dst faces, in sequence, to be converted. This means you can train on 1500 dst faces but convert only 100 of them.
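About the DSSIM loss mentioned for H64/H128: structural dissimilarity penalizes differences in local image structure rather than raw per-pixel error, which tends to give sharper reconstructions. A minimal sketch on top of `tf.image.ssim` (available since TensorFlow 1.8); this illustrates the idea only and is not the exact loss used by the models above.

```python
import tensorflow as tf

def dssim_loss(y_true, y_pred, max_val=1.0):
    """DSSIM = (1 - SSIM) / 2: 0 for identical images, approaching 1 for
    structurally unrelated ones. Inputs are batches of HxWxC images in [0, max_val]."""
    ssim = tf.image.ssim(y_true, y_pred, max_val=max_val)  # per-image SSIM
    return tf.reduce_mean((1.0 - ssim) / 2.0)
```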
Sort tool:
- hist - groups images by similar content (see the sketch after this list)
- hist-dissim - places the images most similar to each other at the end
- hist-blur - sorts by blur within groups of similar content
- brightness
- hue
- face and face-dissim - currently useless
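The hist* modes are based on comparing color histograms. A rough sketch of that idea with OpenCV is below: it greedily orders a folder so that images with similar histograms end up next to each other. The function names and the greedy strategy are illustrative assumptions, not the sort tool's internals.

```python
# Illustrative sketch of histogram-similarity ordering, not the actual sort tool.
import cv2
from pathlib import Path

def image_hist(path, bins=64):
    """Normalized grayscale histogram used as a cheap content descriptor."""
    img = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([img], [0], None, [bins], [0, 256])
    return cv2.normalize(hist, hist).flatten()

def order_by_hist(folder):
    """Greedy ordering: repeatedly append the image whose histogram correlates
    best with the previously chosen one, so similar content ends up adjacent."""
    paths = sorted(Path(folder).glob("*.png"))
    if not paths:
        return []
    hists = {p: image_hist(p) for p in paths}
    ordered, remaining = [paths[0]], paths[1:]
    while remaining:
        last = hists[ordered[-1]]
        best = max(remaining,
                   key=lambda p: cv2.compareHist(last, hists[p], cv2.HISTCMP_CORREL))
        ordered.append(best)
        remaining.remove(best)
    return ordered
```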
Best practice for gathering a src faceset:
- First, from the unsorted aligned images, delete whatever groups you can safely delete. Don't touch the target face where it is mixed with others.
- blur -> delete ~half of them
- hist -> delete groups of similar faces and keep only the target face
- hist-blur -> delete the blurred faces at the end of the groups of similar faces
- hist-dissim -> keep only the first 1000-1500 faces, because the number of src faces can affect the result. For the YAW feeder model, skip this step.
- face-yaw -> just to finalize the faceset
Best practice for dst faces:
- First, from the unsorted aligned images, delete whatever groups you can safely delete. Don't touch the target face where it is mixed with others.
- hist -> delete groups of similar faces and keep only the target face
Prebuilt binary:
A zero-dependency binary for Windows 7, 8, 8.1 and 10 (only NVIDIA video drivers required) can be downloaded from a torrent.
Torrent page: https://rutracker.org/forum/viewtopic.php?p=75318742 (magnet link inside)
Facesets:
- Nicolas Cage
- Cage/Trump workspace

Download from here: https://mega.nz/#F!y1ERHDaL!PPwg01PQZk0FhWLVo5_MaQ
Pull requesting:
I understand some people want to help, but the result of mass contributions can be seen in deepfakes\faceswap. There is a high chance I will decline a PR. Therefore, before opening a PR, it is better to ask me what you want to change or add, to save your time.