upd readme

This commit is contained in:
iperov 2018-06-04 17:30:47 +04:00
parent 6bd5a44264
commit 4754c622f7


MTCNN produces less jitter.
- **H64 (2GB+)** - half face at 64 resolution. Similar to the original FakeApp or FaceSwap, but with the new TensorFlow 1.8 DSSIM loss function, a separate mask decoder, and an improved ConverterMasked. On 2GB and 3GB VRAM the model runs in reduced mode.
* H64 Robert Downey Jr.:
* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/H64_Downey_0.jpg)
* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/H64_Downey_1.jpg)
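The DSSIM loss mentioned above can be illustrated with a simplified, window-free SSIM. This NumPy sketch only shows the idea; DeepFaceLab's actual loss is a windowed SSIM computed inside the TensorFlow graph:

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # simplified global SSIM (no sliding window); images scaled to [0, 1]
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def dssim(x, y):
    # structural dissimilarity: 0 for identical images, approaching 1
    # for structurally different ones; used as a reconstruction loss
    return (1.0 - ssim_global(x, y)) / 2.0
```

Unlike plain MSE, a DSSIM-style loss penalizes differences in local structure and contrast rather than raw per-pixel error, which tends to give sharper reconstructed faces.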
- **H128 (3GB+)** - same as H64, but at 128 resolution, with better face details. On 3GB and 4GB VRAM the model runs in reduced mode.
* H128 Cage:
* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/H128_Cage_0.jpg)
* H128 Asian face on a blurry target:
* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/H128_Asian_0.jpg)
* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/H128_Asian_1.jpg)
- **DF (5GB+)** - @dfaker model. Same as H128, but a full-face model.
* DF example: coming later
- **LIAEF128 (5GB+)** - new model combining DF, IAE, and further experiments. The model tries to morph the src face to dst while keeping the facial features of the src face, with less aggressive morphing. The model has trouble recognizing closed eyes.
* LIAEF128 Cage:
* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/LIAEF128_Cage_0.jpg)
* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/LIAEF128_Cage_1.jpg)
* LIAEF128 Cage video:
* [![Watch the video](https://img.youtube.com/vi/mRsexePEVco/0.jpg)](https://www.youtube.com/watch?v=mRsexePEVco)
- **LIAEF128YAW (5GB+)** - currently in testing. Useful when your src faceset has many more side faces than your dst faceset. It feeds the NN samples sorted by yaw.
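Feeding the network yaw-sorted samples can be sketched as follows. This is an illustrative assumption, not DeepFaceLab's actual sampler; the `yaw` key and function name are hypothetical:

```python
def yaw_sorted_batches(samples, batch_size=4):
    # sort face samples by yaw angle so each batch covers a narrow yaw
    # range, e.g. side faces are grouped with other side faces
    ordered = sorted(samples, key=lambda s: s["yaw"])
    for i in range(0, len(ordered), batch_size):
        yield ordered[i:i + batch_size]
```

Grouping samples by yaw keeps profile and frontal views from being averaged together in a single batch, which can help when the pose distributions of src and dst differ.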
- **MIAEF128 (5GB+)** - same as LIAEF128, but it also tries to match brightness/color features.
* MIAEF128 model diagram:
* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/MIAEF128_diagramm.png)
* MIAEF128 Ford success case:
* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/MIAEF128_Ford_0.jpg)
* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/MIAEF128_Ford_1.jpg)
* MIAEF128 Cage fail case:
* ![](https://github.com/iperov/DeepFaceLab/blob/master/doc/MIAEF128_Cage_fail.jpg)
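The brightness/color matching MIAEF128 aims for can be approximated with a Reinhard-style per-channel mean/std transfer. This NumPy sketch is an illustrative stand-in, not the model's actual mechanism (which learns the matching inside the network):

```python
import numpy as np

def match_brightness_color(src, dst):
    # per-channel mean/std transfer: normalize src statistics, then
    # rescale to dst statistics; images are float arrays in [0, 1]
    out = np.empty_like(src, dtype=np.float64)
    for c in range(src.shape[-1]):
        s = src[..., c].astype(np.float64)
        d = dst[..., c].astype(np.float64)
        out[..., c] = (s - s.mean()) / (s.std() + 1e-8) * d.std() + d.mean()
    return np.clip(out, 0.0, 1.0)
```

When such matching works, the swapped face blends into the target lighting (the Ford case above); when the color statistics are too far apart, it can fail visibly (the Cage case).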
- **AVATAR (4GB+)** - face-controlling model. Usage:
* src - controllable face (Cage)
* dst - controller face (your face)