From 9f48b2c87a0d5c07a27e8fca475be5e162918793 Mon Sep 17 00:00:00 2001
From: iperov
Date: Tue, 5 Jun 2018 14:14:20 +0400
Subject: [PATCH] Update README.md

---
 README.md | 75 +++++++++++++++++++++++++++++++++++++------------------
 1 file changed, 51 insertions(+), 24 deletions(-)

diff --git a/README.md b/README.md
index 22ea215..d200c79 100644
--- a/README.md
+++ b/README.md
@@ -35,37 +35,64 @@ MTCNN produces less jitter.
 
 ### **Model types**:
 
 - **H64 (2GB+)** - half face with 64 resolution. It is as original FakeApp or FaceSwap, but with new TensorFlow 1.8 DSSIM Loss func and separated mask decoder + better ConverterMasked. for 2GB and 3GB VRAM model works in reduced mode.
-* H64 Robert Downey Jr.:
-* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/H64_Downey_0.jpg)
-* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/H64_Downey_1.jpg)
+
+H64 Robert Downey Jr.:
+
+![](https://github.com/iperov/DeepFaceLab/blob/master/doc/H64_Downey_0.jpg)
+
+![](https://github.com/iperov/DeepFaceLab/blob/master/doc/H64_Downey_1.jpg)
 
 - **H128 (3GB+)** - as H64, but in 128 resolution. Better face details. for 3GB and 4GB VRAM model works in reduced mode.
-* H128 Cage:
-* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/H128_Cage_0.jpg)
-* H128 asian face on blurry target:
-* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/H128_Asian_0.jpg)
-* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/H128_Asian_1.jpg)
+
+H128 Cage:
+
+![](https://github.com/iperov/DeepFaceLab/blob/master/doc/H128_Cage_0.jpg)
+
+H128 Asian face on a blurry target:
+
+![](https://github.com/iperov/DeepFaceLab/blob/master/doc/H128_Asian_0.jpg)
+
+![](https://github.com/iperov/DeepFaceLab/blob/master/doc/H128_Asian_1.jpg)
+
 - **DF (5GB+)** - @dfaker model. As H128, but fullface model.
-* DF example - later
+
+DF example - to be added later
 
 - **LIAEF128 (5GB+)** - new model. Result of combining DF, IAE, + experiments. Model tries to morph src face to dst, while keeping facial features of src face, but less agressive morphing. Model has problems with closed eyes recognizing.
-* LIAEF128 Cage:
-* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/LIAEF128_Cage_0.jpg)
-* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/LIAEF128_Cage_1.jpg)
-* LIAEF128 Cage video:
-* [![Watch the video](https://img.youtube.com/vi/mRsexePEVco/0.jpg)](https://www.youtube.com/watch?v=mRsexePEVco)
+
+LIAEF128 Cage:
+
+![](https://github.com/iperov/DeepFaceLab/blob/master/doc/LIAEF128_Cage_0.jpg)
+
+![](https://github.com/iperov/DeepFaceLab/blob/master/doc/LIAEF128_Cage_1.jpg)
+
+LIAEF128 Cage video:
+
+[![Watch the video](https://img.youtube.com/vi/mRsexePEVco/0.jpg)](https://www.youtube.com/watch?v=mRsexePEVco)
+
 - **LIAEF128YAW (5GB+)** - currently testing. Useful when your src faceset has too many side faces vs dst faceset. It feeds NN by sorted samples by yaw.
+
 - **MIAEF128 (5GB+)** - as LIAEF128, but also it tries to match brightness/color features.
-* MIAEF128 model diagramm:
-* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/MIAEF128_diagramm.png)
-* MIAEF128 Ford success case:
-* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/MIAEF128_Ford_0.jpg)
-* MIAEF128 Cage fail case:
-* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/MIAEF128_Cage_fail.jpg)
+
+MIAEF128 model diagram:
+
+![](https://github.com/iperov/DeepFaceLab/blob/master/doc/MIAEF128_diagramm.png)
+
+MIAEF128 Ford success case:
+
+![](https://github.com/iperov/DeepFaceLab/blob/master/doc/MIAEF128_Ford_0.jpg)
+
+MIAEF128 Cage failure case:
+
+![](https://github.com/iperov/DeepFaceLab/blob/master/doc/MIAEF128_Cage_fail.jpg)
+
 - **AVATAR (4GB+)** - 256pix face controlling model. Usage:
-* src - controllable face (Cage)
-* dst - controller face (your face)
-* converter --input-dir must contains *extracted dst faces* in sequence to be converted, its mean you can train on 1500 dst faces, but use only 100 for convert.
+
+src - the controllable face (Cage)
+
+dst - the controlling face (your face)
+
+converter --input-dir must contain the *extracted dst faces* in the sequence to be converted; this means you can train on 1500 dst faces but convert only 100 of them.
 
 ### **Sort tool**:
@@ -116,4 +143,4 @@ Windows 7,8,8.1,10 zero dependency (except GeForce Drivers) prebuilt Python 3.6.
 ### **Pull requesting**:
 
 I understand some people want to help. But result of mass people contribution we can see in deepfakes\faceswap.
-High chance I will decline PR. Therefore before PR better ask me what you want to change or add to save your time.
\ No newline at end of file
+There is a high chance I will decline a PR, so before opening one, please ask me what you want to change or add - it will save your time.
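Editor's note on the "TensorFlow 1.8 DSSIM Loss func" kept in the H64 entry above: DSSIM is structural dissimilarity, i.e. `(1 - SSIM) / 2`. The sketch below only illustrates that idea; it is not the loss implemented in this repository, and it assumes a TensorFlow build that provides `tf.image.ssim`.

```python
# Editor's sketch of a DSSIM-style loss - not the repository's implementation.
# Assumes the installed TensorFlow provides tf.image.ssim.
import tensorflow as tf

def dssim_loss(y_true, y_pred, max_val=1.0):
    # tf.image.ssim returns one SSIM score per image in the batch, in [-1, 1].
    ssim = tf.image.ssim(y_true, y_pred, max_val=max_val)
    # DSSIM = (1 - SSIM) / 2: 0 for identical images, growing as they differ.
    return tf.reduce_mean((1.0 - ssim) / 2.0)
```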
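On the LIAEF128YAW entry ("It feeds NN by sorted samples by yaw"): the patch does not show how that sampling works, so the following is only a sketch of one way to even out yaw coverage - bucketing samples by their estimated yaw angle and drawing buckets uniformly. The `yaw` attribute is a hypothetical name used for illustration.

```python
# Editor's sketch of yaw-bucketed sampling - not DeepFaceLab's sampler.
# Assumes each sample exposes a hypothetical `yaw` attribute (estimated angle in degrees).
import random
from collections import defaultdict

def bucket_by_yaw(samples, bucket_deg=10):
    """Group samples into fixed-width yaw buckets, e.g. -90..-80, ..., 80..90 degrees."""
    buckets = defaultdict(list)
    for s in samples:
        buckets[int(s.yaw // bucket_deg)].append(s)
    return buckets

def draw_batch(buckets, batch_size):
    """Pick a random bucket, then a random sample from it, so side views are drawn as often as frontal ones."""
    keys = list(buckets.keys())
    return [random.choice(buckets[random.choice(keys)]) for _ in range(batch_size)]
```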
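On the MIAEF128 entry ("tries to match brightness/color features"): the README does not say how the matching is done. For reference only, a common model-free baseline is per-channel mean/std transfer (Reinhard-style); the sketch below shows that baseline, not MIAEF128's method.

```python
# Editor's illustration of per-channel mean/std color matching (Reinhard-style transfer).
# This is a generic baseline, not how MIAEF128 matches brightness/color.
import numpy as np

def match_color(src, dst, eps=1e-6):
    """Shift/scale src so its per-channel statistics match dst. Inputs: float HxWx3 images in [0, 1]."""
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    dst_mean, dst_std = dst.mean(axis=(0, 1)), dst.std(axis=(0, 1))
    out = (src - src_mean) / (src_std + eps) * dst_std + dst_mean
    return np.clip(out, 0.0, 1.0)
```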