Commit graph

21 commits

Author SHA1 Message Date
iperov
1badaa4b4c fix 2021-11-20 16:08:41 +04:00
iperov
388964e8d0 fix model export. Update tf2onnx to 1.9.2 2021-11-20 16:05:44 +04:00
iperov
7326771c02 fixed rct color transfer function, which incorrectly clipped colors before 2021-10-23 09:53:40 +04:00
iperov
5783191849 AMP: code refactoring, fixed preview history;
added dumpdflive command
2021-06-26 10:44:41 +04:00
bcfc794a1b
tensorflow-gpu 2.4.0 depends on h5py~=2.10.0 (#5301)
ERROR: Cannot install -r ./DeepFaceLab/requirements-cuda.txt (line 9) and h5py==2.9.0 because these package versions have conflicting dependencies.   

The conflict is caused by:
    The user requested h5py==2.9.0
    tensorflow-gpu 2.4.0 depends on h5py~=2.10.0
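
For context, the "~=2.10.0" specifier means ">=2.10.0, <2.11", which excludes the previously pinned 2.9.0. A minimal check with the "packaging" library (an illustration, not part of this commit):

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    spec = SpecifierSet("~=2.10.0")   # what tensorflow-gpu 2.4.0 requires
    print(Version("2.9.0") in spec)   # False -> conflicts with h5py==2.9.0
    print(Version("2.10.0") in spec)  # True  -> a compatible pin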
2021-04-02 16:13:30 +04:00
iperov
98e31c8171 update cuda requirements tf 2.4.0 2020-12-16 18:24:23 +04:00
iperov
ca03bbca04 upd requirements-cuda.txt 2020-12-11 13:56:47 +04:00
Colombo
5446e17ea4 upd requirements 2020-11-13 23:12:14 +04:00
Colombo
0eb7e06ac1 upgrade to tf 2.4.0rc1 2020-11-13 17:00:07 +04:00
Colombo
01d81674fd added new XSegEditor!
Here is the new whole_face + XSeg workflow:

With the XSeg model you can train your own mask segmenter for dst (and/or src) faces,
which the merger will then use for whole_face.

Instead of relying on a pretrained segmenter model (which does not exist),
you control which parts of the faces should be masked.

new scripts:
	5.XSeg) data_dst edit masks.bat
	5.XSeg) data_src edit masks.bat
	5.XSeg) train.bat

Usage:
	unpack dst faceset if packed

	run 5.XSeg) data_dst edit masks.bat

	Read the tooltips on the buttons (en/ru/zh languages are supported)

	mask the face using include or exclude polygon mode.

	repeat for 50/100 faces,
		!!! you don't need to mask every frame of dst,
		only frames where the face differs significantly,
		for example:
			closed eyes
			changed head direction
			changed light
		the more varied the faces you mask, the better the quality you will get

		Start masking from the upper left area and proceed clockwise.
		Keep the same masking logic for all frames, for example:
			the same approximate jaw line on side views where the jaw is not visible
			the same hair line
		Mask the obstructions using exclude polygon mode.

	run 5.XSeg) train.bat
		train the model

		Check the faces in the 'XSeg dst faces' preview.

		if some faces have a wrong or glitchy mask, repeat these steps:
			run edit
			find these glitchy faces and mask them
			train further or restart training from scratch

Restarting the training of the XSeg model is only possible by deleting all 'model\XSeg_*' files.

If you want to get the mask of the predicted face (XSeg-prd mode) in the merger,
repeat the same steps for the src faceset.

New mask modes available in the merger for whole_face:

XSeg-prd	  - XSeg mask of predicted face	-> faces from src faceset should be labeled
XSeg-dst	  - XSeg mask of dst face        	-> faces from dst faceset should be labeled
XSeg-prd*XSeg-dst - the smallest area of both

if the workspace\model folder contains a trained XSeg model, the merger will use it;
otherwise the XSeg-* modes will produce a transparent mask.
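
A minimal sketch of the combined XSeg-prd*XSeg-dst mode (assuming masks are float arrays in [0, 1]; the merger's actual code may differ):

    import numpy as np

    def xseg_prd_mul_dst(mask_prd, mask_dst):
        # "the smallest area of both": keep a pixel only where both masks
        # keep it, i.e. the per-pixel minimum (equal to multiplication for
        # binary masks, matching the '*' in the mode name).
        return np.minimum(mask_prd, mask_dst)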

Some screenshots:
XSegEditor: https://i.imgur.com/7Bk4RRV.jpg
trainer   : https://i.imgur.com/NM1Kn3s.jpg
merger    : https://i.imgur.com/glUzFQ8.jpg

example of the fake using 13 segmented dst faces
          : https://i.imgur.com/wmvyizU.gifv
2020-03-24 12:15:31 +04:00
Colombo
45582d129d added XSeg model.
With the XSeg model you can train your own mask segmenter for dst (and src) faces,
which the merger will then use for whole_face.

Instead of relying on a pretrained model (which does not exist),
you control which parts of the faces should be masked.

The workflow is not easy, but at the moment it is the best solution
for obtaining high-quality whole_face deepfakes with minimal effort
and without rotoscoping in After Effects.

new scripts:
	XSeg) data_dst edit.bat
	XSeg) data_dst merge.bat
	XSeg) data_dst split.bat
	XSeg) data_src edit.bat
	XSeg) data_src merge.bat
	XSeg) data_src split.bat
	XSeg) train.bat

Usage:
	unpack dst faceset if packed

	run XSeg) data_dst split.bat
		this script extracts the (previously saved) .json data from the jpg faces for use in the label tool.

	run XSeg) data_dst edit.bat
		the new tool 'labelme' is used

		use polygon mode (CTRL-N) to mask the face
			name a polygon "1" (a single character) to make it an include polygon
			name a polygon "0" (a single character) to make it an exclude polygon

			'exclude polygons' are applied after all 'include polygons'
			(a sketch of this ordering follows the Usage section below)

		Hot keys:
		ctrl-N			create polygon
		ctrl-J			edit polygon
		A/D 			navigate between frames
		ctrl + mousewheel 	image zoom
		mousewheel		vertical scroll
		alt+mousewheel		horizontal scroll

		repeat for 10/50/100 faces,
			you don't need to mask every frame of dst,
			only frames where the face differs significantly,
			for example:
				closed eyes
				changed head direction
				changed light
			the more varied the faces you mask, the better the quality you will get

			Start masking from the upper left area and proceed clockwise.
			Keep the same masking logic for all frames, for example:
				the same approximate jaw line on side views where the jaw is not visible
				the same hair line
			Mask the obstructions using a polygon named "0".

	run XSeg) data_dst merge.bat
		this script merges the .json polygon data back into the jpg faces,
		so the faceset can be sorted or packed as usual.

	run XSeg) train.bat
		train the model

		Check the faces in the 'XSeg dst faces' preview.

		if some faces have a wrong or glitchy mask, repeat these steps:
			split
			run edit
			find these glitchy faces and mask them
			merge
			train further or restart training from scratch
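
A minimal sketch of how include ("1") and exclude ("0") polygons can be rasterized into a single mask (the labelme-style shape dicts and the use of OpenCV are assumptions; the actual XSeg code may differ):

    import cv2
    import numpy as np

    def polygons_to_mask(shapes, height, width):
        # shapes: list of dicts like {"label": "1", "points": [[x, y], ...]}
        mask = np.zeros((height, width), np.float32)
        # include polygons ("1") are filled first ...
        for s in shapes:
            if s["label"] == "1":
                cv2.fillPoly(mask, [np.int32(s["points"])], 1.0)
        # ... then exclude polygons ("0") punch holes, which is why they
        # are applied after all include polygons.
        for s in shapes:
            if s["label"] == "0":
                cv2.fillPoly(mask, [np.int32(s["points"])], 0.0)
        return mask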

Restarting the training of the XSeg model is only possible by deleting all 'model\XSeg_*' files.

If you want to get the mask of the predicted face in the merger,
repeat the same steps for the src faceset.

New mask modes available in the merger for whole_face:

XSeg-prd	  - XSeg mask of predicted face	 -> faces from src faceset should be labeled
XSeg-dst	  - XSeg mask of dst face        -> faces from dst faceset should be labeled
XSeg-prd*XSeg-dst - the smallest area of both

if the workspace\model folder contains a trained XSeg model, the merger will use it;
otherwise the XSeg-* modes will produce a transparent mask.

Some screenshots:
label tool: https://i.imgur.com/aY6QGw1.jpg
trainer   : https://i.imgur.com/NM1Kn3s.jpg
merger    : https://i.imgur.com/glUzFQ8.jpg

example of the fake using 13 segmented dst faces
          : https://i.imgur.com/wmvyizU.gifv
2020-03-15 15:12:44 +04:00
Colombo
76ca79216e Upgraded to TF version 1.13.2
Removed the wait at first launch for most graphics cards.

Increased speed of training by 10-20%, but you have to retrain all models from scratch.

SAEHD:

added option 'use float16'
	Experimental option. Reduces the model size by half.
	Increases the speed of training.
	Decreases the accuracy of the model.
	The model may collapse or fail to train.
	The model may not learn the mask at large resolutions.
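
The halving is simply the storage width: float16 weights take 2 bytes instead of 4. A standalone NumPy illustration (not DFL code):

    import numpy as np

    w32 = np.zeros((256, 256, 3, 3), dtype=np.float32)  # example weight tensor
    w16 = w32.astype(np.float16)
    print(w32.nbytes, w16.nbytes)  # 2359296 1179648 -> half the size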

The true_face_training option is replaced by
"True face power" (0.0000 .. 1.0).
Experimental option. Applies a discriminator that pushes the result face to look more like the src face. A higher value means stronger discrimination.
Comparison - https://i.imgur.com/czScS9q.png
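
A conceptual sketch of how such a power value can act as a loss weight (the names and the loss form are assumptions, not DFL's actual code):

    def total_loss(recon_loss, adv_loss, true_face_power):
        # 0.0 disables the discriminator term entirely; higher values push
        # the result face more strongly toward looking like the src face.
        return recon_loss + true_face_power * adv_loss

    print(total_loss(0.30, 0.10, 0.0))  # 0.3 -> no discrimination
    print(total_loss(0.30, 0.10, 1.0))  # 0.4 -> full-strength discrimination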
2020-01-25 21:58:19 +04:00
Colombo
38b85108b3 DFL-2.0 initial branch commit 2020-01-21 18:43:39 +04:00
Colombo
fe58459f36 added FacesetRelighter:
Synthesize new faces from existing ones by relighting them with the DeepPortraitRelighter network.
With the relit faces the neural network will better reproduce face shadows.

Therefore you can synthesize shadowed faces from a fully lit faceset.
https://i.imgur.com/wxcmQoi.jpg

as a result, better fakes on dark faces:
https://i.imgur.com/5xXIbz5.jpg

in the OpenCL build the Relighter runs on the CPU;

install pytorch directly via pip install (see requirements)
2019-11-11 11:42:52 +04:00
iperov
407ce3b1ca Added interactive converter.
With the interactive converter you can change any parameter of any frame and see the result in real time.

Converter: added motion_blur_power param.
Motion blur is applied using precomputed motion vectors,
so a moving face will look more realistic.
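
A minimal sketch of directional motion blur driven by a motion vector (OpenCV-based illustration; the converter's actual implementation may differ):

    import cv2
    import numpy as np

    def motion_blur(img, motion_vec, power):
        # Build a line-shaped kernel along the motion direction; its length
        # grows with the vector's magnitude and the motion_blur_power value.
        length = max(1, int(np.hypot(*motion_vec) * power))
        if length == 1:
            return img  # no visible motion -> no blur
        kernel = np.zeros((length, length), np.float32)
        angle = np.arctan2(motion_vec[1], motion_vec[0])
        c = (length - 1) / 2.0
        for t in np.linspace(-c, c, length):
            x = int(round(c + t * np.cos(angle)))
            y = int(round(c + t * np.sin(angle)))
            kernel[y, x] = 1.0
        kernel /= kernel.sum()  # preserve overall brightness
        return cv2.filter2D(img, -1, kernel)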

RecycleGAN model is removed.

Added experimental AVATAR model. Minimum required VRAM is 6GB (NVIDIA), 12GB (AMD)
Usage:
1) place data_src.mp4, a 10-20 min square-resolution video of a news reporter sitting at a table with a static background;
   other faces should not appear in frames.
2) process "extract images from video data_src.bat" with FULL fps
3) place data_dst.mp4, a video of the face that will control the src face
4) process "extract images from video data_dst FULL FPS.bat"
5) process "data_src mark faces S3FD best GPU.bat"
6) process "data_dst extract unaligned faces S3FD best GPU.bat"
7) train AVATAR.bat stage 1, tune batch size to maximum for your card (32 for 6GB), train to 50k+ iters.
8) train AVATAR.bat stage 2, tune batch size to maximum for your card (4 for 6GB), train to decent sharpness.
9) convert AVATAR.bat
10) converted to mp4.bat

updated versions of modules
2019-08-24 12:57:29 +04:00
iperov
3114ae9d7b upd plaidml ver 2019-05-20 22:42:34 +04:00
iperov
35e1bceeb4 update numpy ver 2019-04-27 18:37:46 +04:00
iperov
580e8250e0 fix req for new plaidml 2019-03-21 21:29:35 +04:00
iperov
48314f74db removing dlib detector that is outdated and inaccurate 2019-03-21 19:00:42 +04:00
iperov
af1760b22d plaidML: upgrading version 2019-03-21 18:57:02 +04:00
iperov
5587c93e01 fix ConverterMasked.py,
changing requirements
changing device.py ENV vars
2019-03-07 19:48:12 +04:00
Renamed from requirements-AVX-cuda10.0-cudnn7.4.1.txt