Commit graph

105 commits

Colombo
33b0aadb4e Decreased amount of RAM used by Sample Generator. 2020-04-05 13:52:32 +04:00
Colombo
2b7364005d Added new face type: head
Now you can replace the head.
Example: https://www.youtube.com/watch?v=xr5FHd0AdlQ
Requirements:
	Post-processing skills in Adobe After Effects or DaVinci Resolve.
Usage:
1)	Find suitable dst footage with a monotonous background behind the head
2)	Use the “extract head” script
3)	Gather a rich src headset from a single scene (same color and haircut)
4)	Mask the whole head for src and dst using the XSeg editor
5)	Train XSeg
6)	Apply the trained XSeg mask to the src and dst headsets
7)	Train SAEHD using the ‘head’ face_type as a regular deepfake model with the DF architecture. You can use a pretrained model for head. The minimum recommended resolution for head is 224.
8)	Extract multiple tracks using the Merger:
a.	Raw-rgb
b.	XSeg-prd mask
c.	XSeg-dst mask
9)	In Adobe After Effects or DaVinci Resolve:
a.	Hide the source head using the XSeg-prd mask: content-aware fill, clone stamp, background retraction, or another technique
b.	Overlay the new head using the XSeg-dst mask
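A minimal compositing sketch of step 9 in Python/OpenCV (the file names and the single-channel grayscale mask convention are assumptions for illustration, not the Merger's actual output naming):

import cv2
import numpy as np

# Hypothetical file names for one frame of the three extracted tracks.
raw_rgb   = cv2.imread("frame_0001_raw_rgb.png").astype(np.float32) / 255.0
xseg_dst  = cv2.imread("frame_0001_xseg_dst.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
backplate = cv2.imread("frame_0001_cleaned.png").astype(np.float32) / 255.0   # source head already removed (step 9a)

# Step 9b: overlay the new head onto the cleaned background, using the XSeg-dst mask as alpha.
alpha  = xseg_dst[..., None]                        # HxWx1, broadcast over the color channels
result = raw_rgb * alpha + backplate * (1.0 - alpha)

cv2.imwrite("frame_0001_result.png", (result * 255).astype(np.uint8))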

Warning: a head faceset can be used to train whole_face or narrower face types only with XSeg masking.

XSegEditor: added a ‘view trained XSeg mask’ button, so you can see which frames should be masked to improve mask quality.
2020-04-04 09:28:06 +04:00
Colombo
9a9b7e4f81 fix 2020-03-30 14:39:07 +04:00
Colombo
6d3607a13d New script:
5.XSeg) data_dst/src mask for XSeg trainer - fetch.bat
Copies faces containing XSeg polygons to the aligned_xseg\ directory.
Useful only if you want to collect labeled faces and reuse them in other fakes.

Now you can use the trained XSeg mask in the SAEHD training process.
This means the default ‘full_face’ mask obtained from landmarks will be replaced with the mask obtained from the trained XSeg model.
Use:
5.XSeg.optional) trained mask for data_dst/data_src - apply.bat
5.XSeg.optional) trained mask for data_dst/data_src - remove.bat

Normally you don’t need this. Use it if you want to use ‘face_style’ and ‘bg_style’ with obstructions.
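Conceptually, the apply/remove scripts toggle which mask SAEHD trains against. A hedged sketch of that selection logic (the function and argument names are illustrative, not DeepFaceLab's actual API):

import cv2
import numpy as np

def get_training_mask(face_img, landmarks, applied_xseg_mask=None):
    # applied_xseg_mask: mask written into the face by the 'apply' script, or None.
    # landmarks: facial landmark points stored with the aligned face.
    if applied_xseg_mask is not None:
        # An XSeg mask has been applied: it replaces the landmark-derived mask.
        return applied_xseg_mask.astype(np.float32)

    # Default 'full_face' mask: convex hull of the facial landmarks.
    h, w = face_img.shape[:2]
    mask = np.zeros((h, w), np.float32)
    hull = cv2.convexHull(landmarks.astype(np.int32))
    cv2.fillConvexPoly(mask, hull, 1.0)
    return mask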

XSeg trainer: you can now choose the face type
XSeg trainer: you can now restart training in “override settings”
Merger: XSeg-* modes can now be used with all face types.

Therefore the old MaskEditor, FANSEG models, and FAN-x modes have been removed,
because the new XSeg solution is better, simpler and more convenient, and costs only about one hour of manual masking for a regular deepfake.
2020-03-30 14:00:40 +04:00
Colombo
01d81674fd added new XSegEditor!
Here is the new whole_face + XSeg workflow:

with the XSeg model you can train your own mask segmentator for dst (and/or src) faces
that will be used by the merger for whole_face.

Instead of using a pretrained segmentator model (which does not exist),
you control which parts of the faces should be masked.

new scripts:
	5.XSeg) data_dst edit masks.bat
	5.XSeg) data_src edit masks.bat
	5.XSeg) train.bat

Usage:
	unpack dst faceset if packed

	run 5.XSeg) data_dst edit masks.bat

	Read the tooltips on the buttons (en/ru/zh languages are supported)

	mask the face using include or exclude polygon mode.

	repeat for 50/100 faces,
		!!! you don't need to mask every frame of dst,
		only frames where the face differs significantly,
		for example:
			closed eyes
			changed head direction
			changed lighting
		the more varied the faces you mask, the better the quality you will get

		Start masking from the upper-left area and follow a clockwise direction.
		Keep the same masking logic for all frames, for example:
			the same approximate jaw line on side faces where the jaw is not visible
			the same hairline
		Mask obstructions using exclude polygon mode.

	run 5.XSeg) train.bat
		train the model

		Check the faces in the 'XSeg dst faces' preview.

		if some faces have a wrong or glitchy mask, repeat these steps:
			run edit
			find those glitchy faces and mask them
			train further or restart training from scratch

Restarting XSeg model training is only possible by deleting all 'model\XSeg_*' files.

If you want to get the mask of the predicted face (XSeg-prd mode) in the merger,
you should repeat the same steps for the src faceset.

New mask modes available in the merger for whole_face:

XSeg-prd	  - XSeg mask of the predicted face	-> faces from the src faceset should be labeled
XSeg-dst	  - XSeg mask of the dst face        	-> faces from the dst faceset should be labeled
XSeg-prd*XSeg-dst - the smallest (overlapping) area of both masks

if the workspace\model folder contains a trained XSeg model, the merger will use it,
otherwise XSeg-* modes will produce a transparent mask.
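For intuition, the XSeg-prd*XSeg-dst mode behaves like a per-pixel minimum of the two masks, keeping only the area both agree on (a sketch, not the merger's exact code):

import numpy as np

def combine_xseg_masks(xseg_prd, xseg_dst):
    # Both masks are float arrays in [0, 1]; the per-pixel minimum
    # yields the smallest common area of the two.
    return np.minimum(xseg_prd, xseg_dst)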

Some screenshots:
XSegEditor: https://i.imgur.com/7Bk4RRV.jpg
trainer   : https://i.imgur.com/NM1Kn3s.jpg
merger    : https://i.imgur.com/glUzFQ8.jpg

example of the fake using 13 segmented dst faces
          : https://i.imgur.com/wmvyizU.gifv
2020-03-24 12:15:31 +04:00
Colombo
57ceba3225 fix 2020-03-21 06:20:45 +04:00
Colombo
efe3b56683 DFLIMG refactoring 2020-03-21 01:18:15 +04:00
Colombo
79b8b8a7a7 upd SampleProcessor.py 2020-03-20 11:38:08 +04:00
Colombo
f3b4658810 fixes 2020-03-16 22:40:55 +04:00
Colombo
45582d129d added XSeg model.
with the XSeg model you can train your own mask segmentator for dst (and src) faces
that will be used in the merger for whole_face.

Instead of using a pretrained model (which does not exist),
you control which parts of the faces should be masked.

The workflow is not easy, but at the moment it is the best solution
for obtaining the highest quality whole_face deepfakes with minimum effort,
without rotoscoping in After Effects.

new scripts:
	XSeg) data_dst edit.bat
	XSeg) data_dst merge.bat
	XSeg) data_dst split.bat
	XSeg) data_src edit.bat
	XSeg) data_src merge.bat
	XSeg) data_src split.bat
	XSeg) train.bat

Usage:
	unpack dst faceset if packed

	run XSeg) data_dst split.bat
		this script extracts the (previously saved) .json data from the jpg faces for use in the label tool.

	run XSeg) data_dst edit.bat
		the new 'labelme' tool is used

		use a polygon (CTRL-N) to mask the face
			name the polygon "1" (one character) to make it an include polygon
			name the polygon "0" (one character) to make it an exclude polygon

			'exclude polygons' are applied after all 'include polygons'

		Hot keys:
		ctrl-N			create polygon
		ctrl-J			edit polygon
		A/D 			navigate between frames
		ctrl + mousewheel 	image zoom
		mousewheel		vertical scroll
		alt+mousewheel		horizontal scroll

		repeat for 10/50/100 faces,
			you don't need to mask every frame of dst,
			only frames where the face differs significantly,
			for example:
				closed eyes
				changed head direction
				changed lighting
			the more varied the faces you mask, the better the quality you will get

			Start masking from the upper-left area and follow a clockwise direction.
			Keep the same masking logic for all frames, for example:
				the same approximate jaw line on side faces where the jaw is not visible
				the same hairline
			Mask obstructions using a polygon named "0".

	run XSeg) data_dst merge.bat
		this script merges the polygons' .json data back into the jpg faces,
		so the faceset can be sorted or packed as usual.

	run XSeg) train.bat
		train the model

		Check the faces in the 'XSeg dst faces' preview.

		if some faces have a wrong or glitchy mask, repeat these steps:
			split
			run edit
			find those glitchy faces and mask them
			merge
			train further or restart training from scratch

Restarting XSeg model training is only possible by deleting all 'model\XSeg_*' files.
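To illustrate the "1"/"0" naming convention above, here is a hedged sketch of rasterizing one labelme .json file into a binary mask, with exclude polygons applied after include polygons (the exact fields DeepFaceLab reads back may differ):

import json
import cv2
import numpy as np

def mask_from_labelme(json_path, height, width):
    with open(json_path, "r") as f:
        shapes = json.load(f).get("shapes", [])

    mask = np.zeros((height, width), np.float32)

    # Rasterize all include polygons (label "1") first...
    for shape in shapes:
        if shape["label"] == "1":
            pts = np.array(shape["points"], np.int32)
            cv2.fillPoly(mask, [pts], 1.0)

    # ...then cut the exclude polygons (label "0") out of them.
    for shape in shapes:
        if shape["label"] == "0":
            pts = np.array(shape["points"], np.int32)
            cv2.fillPoly(mask, [pts], 0.0)

    return mask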

If you want to get the mask of the predicted face in the merger,
you should repeat the same steps for the src faceset.

New mask modes available in the merger for whole_face:

XSeg-prd	  - XSeg mask of the predicted face	 -> faces from the src faceset should be labeled
XSeg-dst	  - XSeg mask of the dst face        -> faces from the dst faceset should be labeled
XSeg-prd*XSeg-dst - the smallest (overlapping) area of both masks

if the workspace\model folder contains a trained XSeg model, the merger will use it,
otherwise XSeg-* modes will produce a transparent mask.

Some screenshots:
label tool: https://i.imgur.com/aY6QGw1.jpg
trainer   : https://i.imgur.com/NM1Kn3s.jpg
merger    : https://i.imgur.com/glUzFQ8.jpg

example of the fake using 13 segmented dst faces
          : https://i.imgur.com/wmvyizU.gifv
2020-03-15 15:12:44 +04:00
Colombo
eb63466baf fix 2020-03-14 17:17:53 +04:00
Colombo
144675020c update SampleGeneratorFaceSkinSegDataset 2020-03-13 19:27:27 +04:00
Colombo
61472cdaf7 global refactoring and fixes,
removed support for extracted (aligned) PNG faces. Use old builds to convert from PNG to JPG.

the fanseg model file in facelib/ has been renamed
2020-03-13 08:09:00 +04:00
Colombo
b0b9072981 added XSeg model 2020-03-09 13:09:46 +04:00
Colombo
a030ff6951 refactoring 2020-03-09 13:08:32 +04:00
Colombo
eda6433936 refactoring 2020-03-08 23:19:04 +04:00
Colombo
18d93376fc update FANSeg 2020-03-08 10:34:48 +04:00
Colombo
143792fd31 added fanseg for future WF segmentation model 2020-03-08 00:49:12 +04:00
Colombo
3b6ad4abf9 refactoring 2020-03-07 20:51:54 +04:00
Colombo
d0c280a902 fix 2020-03-07 16:33:50 +04:00
Colombo
54548afe1a refactoring 2020-03-06 01:21:38 +04:00
Colombo
302d23a612 refactoring 2020-03-03 22:20:15 +04:00
Colombo
757ec77e44 refactoring 2020-03-01 19:09:50 +04:00
Colombo
acb0b34811 revert 2020-02-27 12:03:01 +04:00
Colombo
8ad1481209 _ 2020-02-27 11:39:29 +04:00
Colombo
9860a38907 upd SampleGenerator 2020-02-27 09:58:46 +04:00
Colombo
0a40d8e5da _ 2020-02-22 13:45:00 +04:00
Colombo
f1d115b63b added experimental face type 'whole_face'
Basic usage instruction: https://i.imgur.com/w7LkId2.jpg

	'whole_face' requires skill in Adobe After Effects.

	To use whole_face you have to extract whole_face images using
	4) data_src extract whole_face
	and
	5) data_dst extract whole_face
	Images are extracted at 512 resolution, so they can also be used for regular full_face and half_face models.

	'whole_face' covers the whole area of the face, including the forehead, in the training square,
	but the training mask is still 'full_face',
	therefore it requires manual final masking and compositing in Adobe After Effects.

added option 'masked_training'
	This option is available only for the 'whole_face' type.
	Default is ON.
	Masked training clips the training area to the full_face mask,
	so the network trains the face properly.
	When the face is trained enough, disable this option to train the whole area of the frame.
	Merge with 'raw-rgb' mode, then use Adobe After Effects to manually mask, tune color, and composite the whole face, including the forehead.
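A rough sketch of what 'masked_training' amounts to: the reconstruction loss is restricted to the full_face mask region (illustrative only; the real SAEHD loss is more involved):

import numpy as np

def masked_mse(pred, target, full_face_mask, masked_training=True):
    # pred/target: HxWx3 float images; full_face_mask: HxW values in [0, 1].
    if masked_training:
        # Clip the training area to the full_face mask: pixels outside it do not
        # contribute to the loss, so the network concentrates on the face itself.
        m = full_face_mask[..., None]
        return np.mean(((pred - target) * m) ** 2)
    # With masked_training disabled, the whole area of the frame is trained.
    return np.mean((pred - target) ** 2)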
2020-02-21 16:21:04 +04:00
Colombo
e6e11ca056 SampleProcessor : added unused random rgb levels 2020-02-20 08:30:53 +04:00
Colombo
5d5718704d fix SampleProcessor 2020-02-19 07:00:46 +04:00
Colombo
9598ba0141 SAEHD:
added option Eyes priority (y/n)

	fixes eye problems during training (especially on HD architectures)
	by forcing the neural network to train the eyes with higher priority
	before/after: https://i.imgur.com/YQHOuSR.jpg

	It does not guarantee correct eye direction.
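Conceptually, the option adds an extra, heavily weighted loss term restricted to the eye region (a sketch with an assumed eye mask and weight, not the actual SAEHD implementation):

import numpy as np

def loss_with_eyes_priority(pred, target, eyes_mask, eyes_priority=True, eyes_weight=100.0):
    # eyes_mask: HxW mask around the eyes, e.g. derived from landmarks (assumed given).
    base = np.mean((pred - target) ** 2)
    if not eyes_priority:
        return base
    m = eyes_mask[..., None]
    # The extra penalty forces the network to reproduce the eyes earlier and more precisely.
    return base + eyes_weight * np.mean(((pred - target) * m) ** 2)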
2020-02-18 14:30:07 +04:00
Colombo
01376fd17c decreased time of training initialization 2020-02-18 10:01:33 +04:00
Colombo
60cc917350 add eye masking code 2020-02-03 06:38:58 +04:00
Colombo
5620763ccf The "Enable autobackup" option is replaced by
"Autobackup every N hour" (0..24, default 0 = disabled): model files are backed up, with a preview, every N hours.
2020-02-02 20:53:18 +04:00
Colombo
9fd49ee3f0 removed use_float16 option
fix multigpu training
2020-01-30 07:35:33 +04:00
Colombo
5fe5fa131c SampleProcessor.py : refactoring and gen mask struct 2020-01-29 18:08:54 +04:00
Colombo
7386a9d6fd optimized the face sample generator; CPU load is significantly reduced
SAEHD:

added new option
GAN power 0.0 .. 10.0
	Trains the network in a generative adversarial manner.
	Forces the neural network to learn small details of the face.
	You can enable/disable this option at any time,
	but it is better to enable it when the network is already trained well enough.
	Typical value is 1.0.
	GAN power does not work with pretrain mode.

Example of enabling GAN on 81k iters +5k iters
https://i.imgur.com/OdXHLhU.jpg
https://i.imgur.com/CYAJmJx.jpg
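Loosely, GAN power scales an adversarial term added on top of the usual reconstruction loss (a conceptual sketch; the actual discriminator and losses in SAEHD differ):

import math

def generator_loss(reconstruction_loss, disc_fake_score, gan_power=1.0):
    # disc_fake_score: discriminator's probability that the generated face is real, in (0, 1).
    # gan_power = 0.0 disables the adversarial term entirely.
    adv = -math.log(max(disc_fake_score, 1e-8))   # non-saturating generator objective
    return reconstruction_loss + gan_power * adv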

dfhd: default Decoder dimensions are now 48
the preview for 256 res is now correctly displayed

fixed model naming/renaming/removing

Improvements for those involved in post-processing in After Effects:

The codec is reverted back to x264 in order to work properly in After Effects and video players.

The Merger now always outputs the mask to workspace\data_dst\merged_mask

removed raw modes except raw-rgb
raw-rgb mode now outputs the selected face mask_mode (previously a square mask)

The 'export alpha mask' button is replaced by 'show alpha mask'.
You can view the alpha mask without recomputing the frames.

8) 'merged *.bat' scripts now also output a 'result_mask.' video file.
8) 'merged lossless' now uses the x264 lossless codec (previously the PNG codec).
The result_mask video file is always lossless.

Thus you can use the result_mask video file as a mask layer in After Effects.
2020-01-28 12:24:45 +04:00
Colombo
76ca79216e Upgraded to TF version 1.13.2
Removed the wait at first launch for most graphics cards.

Increased speed of training by 10-20%, but you have to retrain all models from scratch.

SAEHD:

added option 'use float16'
	Experimental option. Reduces the model size by half.
	Increases the speed of training.
	Decreases the accuracy of the model.
	The model may collapse or fail to train.
	The model may not learn the mask at large resolutions.
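The "half the size" claim is easy to see from the storage cost per parameter (a plain NumPy illustration, not the training code):

import numpy as np

weights_fp32 = np.random.randn(1024, 1024).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)   # 4 bytes per parameter
print(weights_fp16.nbytes)   # 2 bytes per parameter: half the size
# The trade-off: float16 has far less precision and range, which is why the model
# may collapse, train less accurately, or fail to learn the mask at high resolutions.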

The true_face_training option is replaced by
"True face power" 0.0000 .. 1.0
Experimental option. Discriminates the result face so that it looks more like the src face. A higher value means stronger discrimination.
Comparison - https://i.imgur.com/czScS9q.png
2020-01-25 21:58:19 +04:00
Colombo
1f0c91f053 add SampleGeneratorFaceTemporal.py 2020-01-23 19:09:23 +04:00
Colombo
38b85108b3 DFL-2.0 initial branch commit 2020-01-21 18:43:39 +04:00
Colombo
52a67a61b3 Sample loader: back to the serial single-core loader 2020-01-11 23:20:39 +04:00
Colombo
d3e6b435aa fixes and optimizations 2020-01-07 13:45:54 +04:00
Colombo
842a48964f fix 2020-01-05 14:21:34 +04:00
Colombo
ea33541177 fix 2020-01-05 13:58:25 +04:00
Colombo
21b25038ac optimized sample generator 2020-01-05 11:53:31 +04:00
Colombo
2429d28737 optimize memory usage 2020-01-04 23:51:33 +04:00
Colombo
94c99b429d fix 2019-12-23 15:17:13 +04:00
Colombo
c1612c5553 fixes 2019-12-23 14:57:47 +04:00
Colombo
7174dc835a fixes 2019-12-22 21:44:28 +04:00
Colombo
7e609542db fixes 2019-12-22 21:17:34 +04:00