Commit graph

92 commits

Author SHA1 Message Date
Colombo
5b4d023712 added 5.XSeg) data_dst/data_src mask for XSeg trainer - remove.bat
removes labeled XSeg polygons from the extracted frames
2020-04-06 21:45:46 +04:00
Colombo
8e9e346c9d Extractor: now face_type can be chosen in the console dialog 2020-04-04 10:08:23 +04:00
Colombo
0fb912e91f Trainer: added --silent-start cmd option 2020-04-02 13:05:04 +04:00
Colombo
6d3607a13d New script:
5.XSeg) data_dst/src mask for XSeg trainer - fetch.bat
Copies faces containing XSeg polygons to the aligned_xseg\ dir.
Useful only if you want to collect labeled faces and reuse them in other fakes.

Now you can use the trained XSeg mask in the SAEHD training process.
This means the default 'full_face' mask obtained from landmarks will be replaced with the mask obtained from the trained XSeg model.
use
5.XSeg.optional) trained mask for data_dst/data_src - apply.bat
5.XSeg.optional) trained mask for data_dst/data_src - remove.bat

Normally you don’t need it. Use it if you want to use ‘face_style’ and ‘bg_style’ with obstructions.
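A minimal sketch of the idea only (not the actual SAEHD sample code; the function and its arguments are illustrative): when an applied XSeg mask exists for a face, it is used as the training mask, otherwise a hull built from the landmarks stands in for the 'full_face' mask.

import numpy as np
import cv2

def training_mask(xseg_mask, landmarks, height, width):
    # xseg_mask: float array in [0, 1], or None if no XSeg mask was applied
    if xseg_mask is not None:
        return xseg_mask.astype(np.float32)
    # fallback: approximate the 'full_face' mask as a filled hull of the landmarks
    mask = np.zeros((height, width), dtype=np.uint8)
    hull = cv2.convexHull(np.int32(landmarks))
    cv2.fillConvexPoly(mask, hull, 255)
    return mask.astype(np.float32) / 255.0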

XSeg trainer: now you can choose the face type
XSeg trainer: now you can restart training in “override settings”
Merger: XSeg-* modes can now be used with all face types.

Therefore the old MaskEditor, FANSEG models, and FAN-x modes have been removed,
because the new XSeg solution is better, simpler, and more convenient, costing only about 1 hour of manual masking for a regular deepfake.
2020-03-30 14:00:40 +04:00
Colombo
01d81674fd added new XSegEditor!
here is the new whole_face + XSeg workflow:

With the XSeg model you can train your own mask segmenter for dst (and/or src) faces
that will be used by the merger for whole_face.

Instead of using a pretrained segmenter model (which does not exist),
you control which parts of the faces should be masked.

new scripts:
	5.XSeg) data_dst edit masks.bat
	5.XSeg) data_src edit masks.bat
	5.XSeg) train.bat

Usage:
	unpack dst faceset if packed

	run 5.XSeg) data_dst edit masks.bat

	Read the tooltips on the buttons (en/ru/zh languages are supported)

	mask the face using include or exclude polygon mode.

	repeat for 50/100 faces,
		!!! you don't need to mask every frame of dst
		only frames where the face differs significantly,
		for example:
			closed eyes
			changed head direction
			changed light
		the more varied the faces you mask, the better the quality you will get

		Start masking from the upper left area and follow the clockwise direction.
		Keep the same logic of masking for all frames, for example:
			the same approximated jaw line of the side faces, where the jaw is not visible
			the same hair line
		Mask the obstructions using exclude polygon mode.

	run 5.XSeg) train.bat
		train the model

		Check the faces of 'XSeg dst faces' preview.

		if some faces have a wrong or glitchy mask, repeat these steps:
			run edit
			find these glitchy faces and mask them
			train further or restart training from scratch

Restarting training of the XSeg model is only possible by deleting all 'model\XSeg_*' files.

If you want to get the mask of the predicted face (XSeg-prd mode) in the merger,
you should repeat the same steps for the src faceset.

New mask modes available in merger for whole_face:

XSeg-prd          - XSeg mask of predicted face -> faces from src faceset should be labeled
XSeg-dst          - XSeg mask of dst face       -> faces from dst faceset should be labeled
XSeg-prd*XSeg-dst - the smallest area of both

If the workspace\model folder contains a trained XSeg model, the merger will use it;
otherwise the XSeg-* modes will produce a transparent mask.
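A hedged sketch of what "the smallest area of both" boils down to (not the merger's actual code): the combined mask is the per-pixel minimum of the two XSeg masks, and with no trained model the result is an all-zero, i.e. transparent, mask.

import numpy as np

def combine_xseg_masks(mask_prd, mask_dst):
    # masks are float arrays in [0, 1]; the per-pixel minimum keeps only
    # the area covered by both masks
    return np.minimum(mask_prd, mask_dst)

def fallback_mask(height, width):
    # what the XSeg-* modes effectively give you without a trained XSeg model
    return np.zeros((height, width), dtype=np.float32)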

Some screenshots:
XSegEditor: https://i.imgur.com/7Bk4RRV.jpg
trainer   : https://i.imgur.com/NM1Kn3s.jpg
merger    : https://i.imgur.com/glUzFQ8.jpg

example of the fake using 13 segmented dst faces
          : https://i.imgur.com/wmvyizU.gifv
2020-03-24 12:15:31 +04:00
Colombo
efe3b56683 DFLIMG refactoring 2020-03-21 01:18:15 +04:00
Colombo
45582d129d added XSeg model.
With the XSeg model you can train your own mask segmenter for dst (and src) faces
that will be used in the merger for whole_face.

Instead of using a pretrained model (which does not exist),
you control which parts of the faces should be masked.

The workflow is not easy, but at the moment it is the best solution
for obtaining the best quality whole_face deepfakes with minimum effort,
without rotoscoping in After Effects.

new scripts:
	XSeg) data_dst edit.bat
	XSeg) data_dst merge.bat
	XSeg) data_dst split.bat
	XSeg) data_src edit.bat
	XSeg) data_src merge.bat
	XSeg) data_src split.bat
	XSeg) train.bat

Usage:
	unpack dst faceset if packed

	run XSeg) data_dst split.bat
		this script extracts (previously saved) .json data from the jpg faces for use in the label tool.

	run XSeg) data_dst edit.bat
		new tool 'labelme' is used

		use polygon (CTRL-N) to mask the face
			name polygon "1" (one symbol) as include polygon
			name polygon "0" (one symbol) as exclude polygon

			'exclude polygons' are applied after all 'include polygons'
			(see the rasterization sketch after this step)

		Hot keys:
		ctrl-N			create polygon
		ctrl-J			edit polygon
		A/D 			navigate between frames
		ctrl + mousewheel 	image zoom
		mousewheel		vertical scroll
		alt+mousewheel		horizontal scroll

		repeat for 10/50/100 faces,
			you don't need to mask every frame of dst,
			only frames where the face differs significantly,
			for example:
				closed eyes
				changed head direction
				changed light
			the more varied the faces you mask, the better the quality you will get

			Start masking from the upper left area and follow the clockwise direction.
			Keep the same logic of masking for all frames, for example:
				the same approximated jaw line of the side faces, where the jaw is not visible
				the same hair line
			Mask the obstructions using polygon with name "0".
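	A minimal sketch of the include/exclude logic mentioned above, not the label tool's actual code: polygons named "1" are filled first, then polygons named "0" are cleared on top of them.

	import numpy as np
	import cv2

	def polygons_to_mask(include_polys, exclude_polys, height, width):
	    # include_polys / exclude_polys: lists of (N, 2) point arrays
	    mask = np.zeros((height, width), dtype=np.uint8)
	    for poly in include_polys:
	        cv2.fillPoly(mask, [np.int32(poly)], 255)
	    for poly in exclude_polys:  # exclusions override inclusions
	        cv2.fillPoly(mask, [np.int32(poly)], 0)
	    return mask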

	run XSeg) data_dst merge.bat
		this script merges the polygons' .json data back into the jpg faces,
		so the faceset can be sorted or packed as usual.

	run XSeg) train.bat
		train the model

		Check the faces of 'XSeg dst faces' preview.

		if some faces have a wrong or glitchy mask, repeat these steps:
			split
			run edit
			find these glitchy faces and mask them
			merge
			train further or restart training from scratch

Restarting training of the XSeg model is only possible by deleting all 'model\XSeg_*' files.

If you want to get the mask of the predicted face in the merger,
you should repeat the same steps for the src faceset.

New mask modes available in merger for whole_face:

XSeg-prd          - XSeg mask of predicted face -> faces from src faceset should be labeled
XSeg-dst          - XSeg mask of dst face       -> faces from dst faceset should be labeled
XSeg-prd*XSeg-dst - the smallest area of both

If the workspace\model folder contains a trained XSeg model, the merger will use it;
otherwise the XSeg-* modes will produce a transparent mask.

Some screenshots:
label tool: https://i.imgur.com/aY6QGw1.jpg
trainer   : https://i.imgur.com/NM1Kn3s.jpg
merger    : https://i.imgur.com/glUzFQ8.jpg

example of the fake using 13 segmented dst faces
          : https://i.imgur.com/wmvyizU.gifv
2020-03-15 15:12:44 +04:00
Colombo
61472cdaf7 global refactoring and fixes,
removed support for extracted (aligned) PNG faces. Use old builds to convert from PNG to JPG.

the fanseg model file in facelib/ has been renamed
2020-03-13 08:09:00 +04:00
Colombo
6f4ea69d4d added dev_segmented_extract,
extracts images marked in the 'labelme' tool, which can be used for FANseg training
2020-03-08 10:13:15 +04:00
Colombo
123bccf01a brought back
3.optional) denoise data_dst images.bat
	Apply it if dst video is very sharp.

	Denoises dst images before face extraction.
	This technique helps the neural network avoid learning the noise.
	The result is less pixel shake in the predicted face.
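	An illustration of the idea only, assuming nothing about the script's internals (the frame path is just an example); it denoises an extracted dst frame in place before face extraction.

	import cv2

	frame = cv2.imread('data_dst/00001.png')  # example frame path
	denoised = cv2.fastNlMeansDenoisingColored(frame, None, 5, 5, 7, 21)
	cv2.imwrite('data_dst/00001.png', denoised)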
2020-03-07 15:51:30 +04:00
Colombo
9ccdd271a4 added sort by "best faces faster"
same as sort by "best faces",
but faces are sorted by source-rect area instead of blur; ~40x faster.
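A hedged sketch of the difference (not the actual sorter code; the rect format and the sample list are illustrative): ranking by source-rect area is a single multiplication per face instead of a blur estimate.

def source_rect_area(rect):
    left, top, right, bottom = rect
    return max(0, right - left) * max(0, bottom - top)

# (filename, source rect) pairs, largest source rect first
faces = [('00001_0.jpg', (10, 10, 210, 230)), ('00002_0.jpg', (5, 5, 55, 65))]
faces.sort(key=lambda f: source_rect_area(f[1]), reverse=True)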
2020-03-06 21:38:41 +04:00
Colombo
f56d583cb5 added sort by "face rect size in source image"
small faces from the source image will be placed at the end
2020-03-03 21:47:49 +04:00
Colombo
780ad5679f fix --force-gpu-idxs commandline param for trainer 2020-03-03 21:25:24 +04:00
Colombo
2af067b18d _ 2020-02-29 15:25:49 +04:00
Colombo
27e24d5d28 _ 2020-02-29 15:24:14 +04:00
Colombo
7f609d0f0a fix for linux 2020-02-29 14:49:46 +04:00
Colombo
5eca4958b7 fix for linux 2020-02-29 14:34:08 +04:00
Colombo
10ffca37c0 _ 2020-02-29 14:27:02 +04:00
Colombo
6b6aaba4db ignore UserWarning for platforms != win32 2020-02-29 14:24:25 +04:00
Colombo
f1d115b63b added experimental face type 'whole_face'
Basic usage instruction: https://i.imgur.com/w7LkId2.jpg

	'whole_face' requires skill in Adobe After Effects.

	To use whole_face you have to extract whole_face faces using
	4) data_src extract whole_face
	and
	5) data_dst extract whole_face
	Images will be extracted at 512 resolution, so they can also be used for regular full_face and half_face types.

	'whole_face' covers the whole area of the face, including the forehead, in the training square,
	but the training mask is still 'full_face',
	therefore it requires manual final masking and compositing in Adobe After Effects.

added option 'masked_training'
	This option is available only for 'whole_face' type.
	Default is ON.
	Masked training clips the training area to the full_face mask,
	so the network trains the face properly.
	When the face is trained enough, disable this option to train the whole area of the frame.
	Merge with 'raw-rgb' mode, then use Adobe After Effects to manually mask, tune color, and composite the whole face, including the forehead.
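	A minimal sketch of what masked training amounts to conceptually (not SAEHD's actual loss code): the reconstruction loss is restricted to the full_face mask, so the background of the training square does not pull the network.

	import numpy as np

	def masked_l1(pred, target, mask):
	    # pred, target: (H, W, 3) float images; mask: (H, W) float, 1.0 inside the face
	    diff = np.abs(pred - target) * mask[..., None]
	    return diff.sum() / (mask.sum() * pred.shape[-1] + 1e-8)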
2020-02-21 16:21:04 +04:00
Colombo
7386a9d6fd optimized face sample generator, CPU load is significantly reduced
SAEHD:

added new option
GAN power 0.0 .. 10.0
	Train the network in a Generative Adversarial manner.
	Forces the neural network to learn small details of the face.
	You can enable/disable this option at any time,
	but it is better to enable it when the network is trained enough.
	Typical value is 1.0.
	GAN power does not work in pretrain mode.
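	A hedged sketch of how the option behaves conceptually, not the actual SAEHD code: the adversarial term is simply added to the usual reconstruction loss, scaled by gan_power, which is why it can be toggled at any time.

	import numpy as np

	def generator_loss(recon_loss, disc_fake_logits, gan_power=1.0):
	    # non-saturating GAN term: softplus(-D(G(x))), averaged over the batch
	    adv = np.mean(np.log1p(np.exp(-disc_fake_logits)))
	    return recon_loss + gan_power * adv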

Example of enabling GAN on 81k iters +5k iters
https://i.imgur.com/OdXHLhU.jpg
https://i.imgur.com/CYAJmJx.jpg

dfhd: default Decoder dimensions are now 48
the preview for 256 res is now correctly displayed

fixed model naming/renaming/removing

Improvements for those doing post-processing in After Effects:

The codec is reverted back to x264 so it can be used properly in After Effects and video players.

Merger now always outputs the mask to workspace\data_dst\merged_mask

removed raw modes except raw-rgb
raw-rgb mode now outputs the selected face mask_mode (previously: square mask)

'export alpha mask' button is replaced by 'show alpha mask'.
You can view the alpha mask without recomputing the frames.

8) 'merged *.bat' scripts now also output a 'result_mask' video file.
8) 'merged lossless' now uses the x264 lossless codec (previously the PNG codec)
The result_mask video file is always lossless.

Thus you can use the result_mask video file as a mask layer in After Effects.
2020-01-28 12:24:45 +04:00
Colombo
76ca79216e Upgraded to TF version 1.13.2
Removed the wait at first launch for most graphics cards.

Increased speed of training by 10-20%, but you have to retrain all models from scratch.

SAEHD:

added option 'use float16'
	Experimental option. Reduces the model size by half.
	Increases the speed of training.
	Decreases the accuracy of the model.
	The model may collapse or not train.
	The model may not learn the mask at large resolutions.
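	A trivial illustration of the size claim only (the actual float16 implementation is not shown): half-precision storage takes half the bytes of float32.

	import numpy as np

	w32 = np.zeros((1024, 1024), dtype=np.float32)
	w16 = w32.astype(np.float16)
	print(w32.nbytes, w16.nbytes)  # 4194304 vs 2097152 bytes: half the size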

the true_face_training option is replaced by
"True face power". 0.0000 .. 1.0
Experimental option. Uses a discriminator to make the result face more like the src face. Higher value - stronger discrimination.
Comparison - https://i.imgur.com/czScS9q.png
2020-01-25 21:58:19 +04:00
Colombo
2d61b715de 1 2020-01-22 18:27:19 +04:00
Colombo
38b85108b3 DFL-2.0 initial branch commit 2020-01-21 18:43:39 +04:00
Colombo
d46fb5cfd3 fixed mask editor
added FacesetEnhancer
4.2.other) data_src util faceset enhance best GPU.bat
4.2.other) data_src util faceset enhance multi GPU.bat

FacesetEnhancer greatly increases detail in your source faceset,
similar to the Gigapixel enhancer, but in fully automatic mode.
In the OpenCL build it works on CPU only.

Please consider a donation.
2019-12-26 21:27:10 +04:00
Colombo
ac1913c070 fix 2019-12-24 14:07:06 +04:00
Colombo
3feb386a8e fix for colab 2019-12-24 11:20:32 +04:00
Colombo
c1612c5553 fixes 2019-12-23 14:57:47 +04:00
Colombo
e0e8970ab9 fix PackedFaceset 2019-12-22 15:58:46 +04:00
Colombo
dd45b7dacc cleaning 2019-12-22 15:02:53 +04:00
Colombo
50f892d57d all models: removed options 'src_scale_mod' and 'sort samples by yaw as target'
If you want, you can manually remove unnecessary angles from the src faceset after sorting by yaw.

Optimized sample generators (CPU workers). They now consume less RAM and work faster.

added
4.2.other) data_src/dst util faceset pack.bat
	Packs /aligned/ samples into one /aligned/samples.pak file.
	After that, all faces will be deleted.

4.2.other) data_src/dst util faceset unpack.bat
	unpacks faces from /aligned/samples.pak to /aligned/ dir.
	After that, samples.pak will be deleted.

Packed facesets load and work faster.
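A hedged sketch of the packing idea only (not the real PackedFaceset format): all aligned samples are concatenated into one file, so the sample loader does one large sequential read instead of thousands of small file opens.

import pickle
from pathlib import Path

def pack_samples(aligned_dir, pak_path):
    samples = {p.name: p.read_bytes() for p in sorted(Path(aligned_dir).glob('*.jpg'))}
    Path(pak_path).write_bytes(pickle.dumps(samples))

def unpack_samples(pak_path, aligned_dir):
    for name, data in pickle.loads(Path(pak_path).read_bytes()).items():
        (Path(aligned_dir) / name).write_bytes(data)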
2019-12-21 23:16:55 +04:00
Colombo
8866dce22e added
4.2.other) data_src util faceset metadata save.bat
	saves metadata of data_src\aligned\ faces into data_src\aligned\meta.dat

4.2.other) data_src util faceset metadata restore.bat
	restores metadata from 'meta.dat' to the images;
	if an image's size differs from the original, it will be automatically resized

You can greatly enhance face details of src faceset by using Topaz Gigapixel software.
example https://i.imgur.com/Gwee99L.jpg
Example of workflow:
1) run 'data_src util faceset metadata save.bat'
2) launch Topaz Gigapixel
3) open 'data_src\aligned\' and select all images
4) set output folder to 'data_src\aligned_topaz' (create folder in save dialog)
5) set settings as on screenshot https://i.imgur.com/kAVWMQG.jpg
	you can choose 2x, 4x, or 6x upscale rate
6) start processing the images and wait for the whole process to finish
7) rename folders:
	data_src\aligned        ->  data_src\aligned_original
	data_src\aligned_topaz  ->  data_src\aligned
8) copy 'data_src\aligned_original\meta.dat' to 'data_src\aligned\'
9) run 'data_src util faceset metadata restore.bat'
	images will be downscaled back to the original size (256x256), preserving details
	metadata will be restored
10) now your new enhanced faceset is ready to use!
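A hedged sketch of the save/restore idea, not DFL's meta.dat format; the read_meta/write_meta callables stand in for whatever actually extracts and embeds the per-image metadata. Metadata is stashed keyed by filename, the upscaler rewrites the images, then each image is resized back to the original size and its metadata re-attached.

import pickle
import cv2
from pathlib import Path

def save_meta(aligned_dir, meta_path, read_meta):
    meta = {p.name: read_meta(p) for p in Path(aligned_dir).glob('*.jpg')}
    Path(meta_path).write_bytes(pickle.dumps(meta))

def restore_meta(aligned_dir, meta_path, write_meta, size=256):
    meta = pickle.loads(Path(meta_path).read_bytes())
    for name, m in meta.items():
        p = Path(aligned_dir) / name
        img = cv2.imread(str(p))
        if img is not None and img.shape[:2] != (size, size):
            cv2.imwrite(str(p), cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA))
        write_meta(p, m)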
2019-12-20 15:03:17 +04:00
Colombo
d4745b5cf8 added sort by absdiff
This sort method orders faces by the absolute per-pixel difference between all faces.
options:
Sort by similar? ( y/n ?:help skip:y ) :
if you choose 'n', then the most dissimilar faces will be placed first.
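A hedged sketch of the idea, not the actual sorter: greedily chain each face to its most similar remaining neighbour by mean absolute per-pixel difference; with 'n' the most dissimilar neighbour is taken instead.

import numpy as np

def sort_by_absdiff(images, similar=True):
    # images: list of equally sized arrays
    order, remaining = [0], list(range(1, len(images)))
    while remaining:
        last = images[order[-1]].astype(np.float32)
        diffs = [np.mean(np.abs(last - images[i].astype(np.float32))) for i in remaining]
        pick = int(np.argmin(diffs)) if similar else int(np.argmax(diffs))
        order.append(remaining.pop(pick))
    return order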
2019-12-11 22:33:49 +04:00
Colombo
fe58459f36 added FacesetRelighter:
Synthesize new faces from existing ones by relighting them with the DeepPortraitRelighter network.
With the relit faces, the neural network will better reproduce face shadows.

Therefore you can synthesize shadowed faces from a fully lit faceset.
https://i.imgur.com/wxcmQoi.jpg

as a result, better fakes on dark faces:
https://i.imgur.com/5xXIbz5.jpg

In the OpenCL build the Relighter runs on CPU.

Install PyTorch directly via pip install; see the requirements.
2019-11-11 11:42:52 +04:00
Colombo
d897893fd5 fix denoise-image-sequence for jpg files 2019-11-08 22:58:39 +04:00
Colombo
734d97d729 added 'sort by vggface': sorting by face similarity using the VGGFace model.
Requires 4GB+ VRAM and an internet connection for the first run.
2019-10-23 15:06:39 +04:00
Colombo
e013cb0f6b moving some files 2019-10-14 09:34:44 +04:00
Colombo
dc11ec32be SAE : WARNING, RETRAIN IS REQUIRED !
fixed model sizes from the previous update.
avoided a bug in the ML framework (Keras) that forced the model to train on random noise.

Converter: added blur on the same keys as sharpness

Added new model 'TrueFace'. This is a GAN model ported from https://github.com/NVlabs/FUNIT
The model produces near-zero morphing and a highly detailed face.
The model has a higher failure rate than other models.
Keep the src and dst facesets in the same lighting conditions.
2019-09-19 11:13:56 +04:00
Colombo
7ed38a8097 Converter:
Session is now saved to the model folder.

blur and erode ranges are increased to -400+400

hist-match-bw is now replaced with seamless2 mode.

Added 'ebs' color transfer mode (works only on Windows).

The FANSEG model (used in FAN-x mask modes) has been retrained with a new model configuration
and now produces better precision and less jitter
2019-09-07 13:57:42 +04:00
iperov
407ce3b1ca Added interactive converter.
With the interactive converter you can change any parameter of any frame and see the result in real time.

Converter: added motion_blur_power param.
Motion blur is applied using precomputed motion vectors,
so the moving face looks more realistic.

RecycleGAN model is removed.

Added experimental AVATAR model. Minimum required VRAM is 6GB (NVIDIA), 12GB (AMD)
Usage:
1) place data_src.mp4, a 10-20 min square-resolution video of a news reporter sitting at a table with a static background;
   other faces should not appear in the frames.
2) process "extract images from video data_src.bat" with FULL fps
3) place data_dst.mp4, a video of the face that will control the src face
4) process "extract images from video data_dst FULL FPS.bat"
5) process "data_src mark faces S3FD best GPU.bat"
6) process "data_dst extract unaligned faces S3FD best GPU.bat"
7) train AVATAR.bat stage 1, tune batch size to maximum for your card (32 for 6GB), train to 50k+ iters.
8) train AVATAR.bat stage 2, tune batch size to maximum for your card (4 for 6GB), train to decent sharpness.
9) convert AVATAR.bat
10) converted to mp4.bat

updated versions of modules
2019-08-24 12:57:29 +04:00
iperov
b72d5a3f9a fixed error "Failed to get convolution algorithm" on some systems
fixed error "dll load failed" on some systems
Expanded the eyebrow line of face masks. This does not affect the mask of the FAN-x converter mode.
2019-08-11 11:17:22 +04:00
iperov
54fa75aa13 fix 2019-05-02 00:39:18 +04:00
iperov
2a8dd788dc SAE: added option 'Pretrain the model?',
Pretrain the model with a large amount of varied faces. This technique may help to train the fake when the src/dst data have very different face shapes and lighting conditions. The face will look more like a morph. To reduce the morph effect, some model files will be initialized but not updated after pretraining: LIAE: inter_AB.h5, DF: both decoders' .h5 files. The longer you pretrain the model, the more morphed the face will look. After that, save and run the training again.
2019-05-01 19:55:27 +04:00
iperov
e58197ca22 initial code to extract umdfaces.io dataset and train pose estimator 2019-04-23 08:14:09 +04:00
iperov
046649e6be
update == 04.20.2019 == (#242)
* superbly improved fanseg

* _

* _

* added FANseg extractor for src and dst faces, to be used in training

* -

* _

* _

* update to 'partial' func

* _

* trained FANSeg_256_full_face.h5,
new experimental models: AVATAR, RecycleGAN

* _

* _

* _

* fix for TCC mode cards (Tesla); there was a conflict with plaidML initialization.

* _

* update manuals

* _
2019-04-20 08:23:08 +04:00
iperov
de8d75b4f7 fix mask editor,
upd manuals
2019-04-04 22:19:56 +04:00
iperov
5ac7e5d7f1 changed help message for pixel loss:
Pixel loss may help to enhance fine details and stabilize face color. Use it only if quality does not improve over time.

SAE:
previous SAE model will not work with this update.
Greatly decreased chance of model collapse.
Increased model accuracy.
Residual blocks are now the default and this option has been removed.
Improved 'learn mask'.
Added masked preview (switch by space key)

Converter:
fixed rct/lct in seamless mode
added mask mode (6) learned*FAN-prd*FAN-dst

added mask editor; it's created for refining the dataset for the FANSeg model, not for production, but you can spend your time testing it in regular fakes with face obstructions
2019-04-04 10:22:53 +04:00
iperov
f63f78e1ab fix 2019-03-26 19:15:20 +04:00
iperov
808f74e9c8 fix 2019-03-26 19:14:30 +04:00
iperov
77d9f3c856 fix 2019-03-26 19:09:00 +04:00