Commit graph

46 commits

Author SHA1 Message Date
Colombo
c1bf3f53ba add comment 2020-02-28 19:31:33 +04:00
Colombo
f1d115b63b added experimental face type 'whole_face'
Basic usage instruction: https://i.imgur.com/w7LkId2.jpg

	'whole_face' requires skill in Adobe After Effects.

	To use whole_face, you have to extract whole_face faces by using
	4) data_src extract whole_face
	and
	5) data_dst extract whole_face
	Images are extracted at 512 resolution, so they can also be used for regular full_face and half_face models.

	'whole_face' covers the whole area of the face, including the forehead, in the training square,
	but the training mask is still 'full_face',
	therefore it requires manual final masking and compositing in Adobe After Effects.

added option 'masked_training'
	This option is available only for the 'whole_face' type.
	Default is ON.
	Masked training clips the training area to the full_face mask,
	so the network trains the face properly (see the sketch after this entry).
	When the face is trained enough, disable this option to train the whole area of the frame.
	Merge with 'raw-rgb' mode, then use Adobe After Effects to manually mask, tune color, and composite the whole face, including the forehead.
2020-02-21 16:21:04 +04:00
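A minimal sketch of the idea behind 'masked_training', assuming a simple per-pixel L2 loss and a full_face mask in [0,1]; the function name and normalization are illustrative, not DFL's actual training code.

```python
import numpy as np

def reconstruction_loss(pred, target, full_face_mask, masked_training=True):
    """pred/target: HxWx3 float arrays; full_face_mask: HxWx1 in [0,1]."""
    err = np.square(pred - target)                  # per-pixel squared error
    if masked_training:
        err = err * full_face_mask                  # clip the training area to the full_face mask
        return err.sum() / (full_face_mask.sum() * 3 + 1e-6)
    return err.mean()                               # masked_training off: train the whole frame
```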
Colombo
9598ba0141 SAEHD:
added option Eyes priority (y/n)

	fixes eye problems during training (especially on HD architectures)
	by forcing the neural network to train the eyes with higher priority (a loss-weighting sketch follows this entry)
	before/after https://i.imgur.com/YQHOuSR.jpg

	It does not guarantee correct eye direction.
2020-02-18 14:30:07 +04:00
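One plausible way to give the eyes higher training priority is to weight the loss more heavily inside the eye regions of the 68-point landmarks. The sketch below is illustrative (indices 36-47 are the eyes in the standard 68-point layout), not the SAEHD implementation.

```python
import numpy as np
import cv2

def eyes_priority_weight_map(landmarks_68, height, width, eye_weight=3.0):
    """Return an HxW map that is `eye_weight` inside the eye hulls and 1.0 elsewhere."""
    weight = np.ones((height, width), dtype=np.float32)
    for eye in (landmarks_68[36:42], landmarks_68[42:48]):      # the two eyes
        hull = cv2.convexHull(eye.astype(np.int32))
        cv2.fillConvexPoly(weight, hull, eye_weight)
    return weight

# the per-pixel loss would then be multiplied by this map before averaging
```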
Colombo
4f928074b9 removing smooth_rect option 2020-02-18 10:28:01 +04:00
Colombo
814da70577 Merger:
added smooth_rect option
	Default is ON.
	Decreases jitter of the predicted rect by using temporal interpolation.
	You can disable this option if you have problems with dynamic scenes.
2020-02-17 18:27:09 +04:00
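The temporal interpolation described here can be thought of as an exponential moving average over the predicted rect between frames; this is a simplified sketch, not the merger's actual code.

```python
def smooth_rect(prev_rect, new_rect, alpha=0.5):
    """rects: (left, top, right, bottom) tuples. alpha=1.0 effectively disables
    smoothing, which is what you want for highly dynamic scenes."""
    if prev_rect is None:
        return new_rect
    return tuple((1.0 - alpha) * p + alpha * n for p, n in zip(prev_rect, new_rect))
```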
Colombo
60cc917350 add eye masking code 2020-02-03 06:38:58 +04:00
Colombo
5fe5fa131c SampleProcessor.py : refactoring and gen mask struct 2020-01-29 18:08:54 +04:00
Colombo
76ca79216e Upgraded to TF version 1.13.2
Removed the wait at first launch for most graphics cards.

Increased speed of training by 10-20%, but you have to retrain all models from scratch.

SAEHD:

added option 'use float16'
	Experimental option. Reduces the model size by half.
	Increases the speed of training.
	Decreases the accuracy of the model.
	The model may collapse or not train.
	The model may not learn the mask at large resolutions.

the true_face_training option is replaced by
"True face power" (0.0000 .. 1.0)
Experimental option. Discriminates the result face so that it looks more like the src face. A higher value means stronger discrimination (sketched below).
Comparison - https://i.imgur.com/czScS9q.png
2020-01-25 21:58:19 +04:00
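A hedged sketch of how a 0.0000..1.0 'True face power' value could act as a weight on a discriminator ("true face") term added to the usual reconstruction loss; the names below are illustrative and do not come from the SAEHD source.

```python
def generator_loss(recon_loss, true_face_disc_loss, true_face_power=0.0):
    # 0.0 disables the discriminator term entirely; higher values push the
    # predicted face harder towards being classified as a src face.
    return recon_loss + true_face_power * true_face_disc_loss
```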
Colombo
38b85108b3 DFL-2.0 initial branch commit 2020-01-21 18:43:39 +04:00
Colombo
47e539ccdd fix extract unaligned faces 2019-12-29 19:36:34 +04:00
Colombo
64021b9c62 more stable and precise version of face transformation matrix.
fixed mask bleeding on some samples
2019-12-20 10:30:49 +04:00
Colombo
068c7d0d55 temporary revert last fixes 2019-12-20 10:21:59 +04:00
Colombo
dd1d5e8909 improved face alignment:
a more stable and precise version of the face transformation matrix.
Now full_faces are aligned with the upper and lateral boundaries of the frame;
result: the cut-off mouth is fixed and the cheek area of side faces is increased.
before/after https://i.imgur.com/t9IyGZv.jpg
Therefore, additional training is required for existing models.
Optionally, you can re-extract the dst faces of your project if they have problems with a cut-off mouth or cheeks.
2019-12-19 18:33:04 +04:00
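The "face transformation matrix" is the similarity transform that maps the detected landmarks onto a canonical landmark layout before cropping. Below is a rough OpenCV-based sketch of that idea; the repository's own LandmarksProcessor computes this differently, so this is an approximation for illustration only.

```python
import numpy as np
import cv2

def estimate_face_mat(landmarks, canonical_landmarks):
    """Both inputs: Nx2 float arrays of matching points. Returns a 2x3 matrix
    (rotation + uniform scale + translation) suitable for cv2.warpAffine."""
    mat, _ = cv2.estimateAffinePartial2D(landmarks.astype(np.float32),
                                         canonical_landmarks.astype(np.float32))
    return mat

# aligned = cv2.warpAffine(frame, estimate_face_mat(lmrks, canon), (256, 256))
```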
Colombo
9e9dc364c9 temporary revert fix 2019-12-19 15:46:50 +04:00
Colombo
853a056769 more stable and precise version of face transformation matrix 2019-12-19 15:25:06 +04:00
Colombo
59d6fada23 draw up arrow in the red landmark debug square 2019-10-24 10:09:51 +04:00
Colombo
d781af3d1f fixed GPU detection and indexes: got rid of nvml and now use the CUDA library directly to determine GPU info that matches TensorFlow indexes,
removed the TrueFace model.

added SAEv2 model. Differences from SAE:
+ default e_ch_dims is now 21
+ the new encoder produces a more stable face with less scale jitter
  before: https://i.imgur.com/4jUcol8.gifv
  after:  https://i.imgur.com/lyiax49.gifv - the scale of the face changes less within the frame
+ the decoder now has only 1 residual block instead of 2; the result is the same quality with a smaller decoder
+ added mid-full face, which covers 30% more area than half face.
+ added option " Enable 'true face' training "
  Enable it only after 50k iters, when the face is sharp enough.
  The result face will be more like src.
  The most src-like face with 'true face' training is achieved with the DF architecture.
2019-10-05 16:26:23 +04:00
Colombo
dc11ec32be SAE : WARNING, RETRAIN IS REQUIRED !
fixed model sizes from the previous update.
avoided a bug in the ML framework (Keras) that forced the model to train on random noise.

Converter: added blur on the same keys as sharpness

Added new model 'TrueFace'. This is a GAN model ported from https://github.com/NVlabs/FUNIT
The model produces near-zero morphing and highly detailed faces.
The model has a higher failure rate than other models.
Keep the src and dst facesets in the same lighting conditions.
2019-09-19 11:13:56 +04:00
Colombo
7ed38a8097 Converter:
Session is now saved to the model folder.

blur and erode ranges are increased to -400..+400

hist-match-bw is now replaced with seamless2 mode.

Added 'ebs' color transfer mode (works only on Windows).

The FANSEG model (used in FAN-x mask modes) has been retrained with a new model configuration
and now produces better precision and less jitter
2019-09-07 13:57:42 +04:00
Colombo
bac9d5a99d nothing interesting 2019-08-30 09:49:07 +04:00
iperov
23854ac8bc removed the lip landmarks used in face alignment, leaving only the mouth-corner landmarks;
result: less scale jitter in the alignment fed into the AE, producing a more stable face (a landmark-subset sketch follows this entry)
before: https://i.imgur.com/gJaW5Y4.gifv
after: https://i.imgur.com/Vq7gvhY.gifv
2019-08-26 19:36:41 +04:00
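A sketch of the idea described above: align using a subset of the 68 landmarks that keeps the mouth corners (indices 48 and 54 in the standard 68-point layout) but drops the other, more mobile lip points. The exact subset used by the repository may differ.

```python
import numpy as np

# indices 0-47 cover jaw, brows, nose and eyes; 48 and 54 are the mouth corners
ALIGNMENT_IDX = list(range(0, 48)) + [48, 54]     # illustrative subset

def alignment_landmarks(landmarks_68: np.ndarray) -> np.ndarray:
    """Select the landmarks used to estimate the face transformation matrix."""
    return landmarks_68[ALIGNMENT_IDX]
```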
iperov
407ce3b1ca Added interactive converter.
With the interactive converter you can change any parameter of any frame and see the result in real time.

Converter: added the motion_blur_power param.
Motion blur is applied using precomputed motion vectors,
so the moving face looks more realistic.

RecycleGAN model is removed.

Added experimental AVATAR model. Minimum required VRAM is 6GB (NVIDIA), 12GB (AMD)
Usage:
1) place data_src.mp4, a 10-20 min square-resolution video of a news reporter sitting at a table with a static background;
   other faces should not appear in the frames.
2) process "extract images from video data_src.bat" with FULL fps
3) place data_dst.mp4, a video of the face that will control the src face
4) process "extract images from video data_dst FULL FPS.bat"
5) process "data_src mark faces S3FD best GPU.bat"
6) process "data_dst extract unaligned faces S3FD best GPU.bat"
7) train AVATAR.bat stage 1, tune the batch size to the maximum for your card (32 for 6GB), train to 50k+ iters.
8) train AVATAR.bat stage 2, tune the batch size to the maximum for your card (4 for 6GB), train to decent sharpness.
9) convert AVATAR.bat
10) converted to mp4.bat

updated versions of modules
2019-08-24 12:57:29 +04:00
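A simplified sketch of motion blur driven by a precomputed motion vector, which is the general technique behind motion_blur_power: a line kernel is rotated to the motion direction and convolved with the face image. The kernel construction and the `power` scaling are illustrative, not the merger's exact code.

```python
import numpy as np
import cv2

def apply_motion_blur(img, motion_vec, power=1.0):
    """img: HxWx3 uint8; motion_vec: (dx, dy) face motion in pixels between frames."""
    length = int(np.hypot(motion_vec[0], motion_vec[1]) * power)
    if length <= 1:
        return img
    kernel = np.zeros((length, length), np.float32)
    kernel[length // 2, :] = 1.0                       # horizontal line kernel
    angle = np.degrees(np.arctan2(motion_vec[1], motion_vec[0]))
    center = ((length - 1) / 2.0, (length - 1) / 2.0)
    kernel = cv2.warpAffine(kernel, cv2.getRotationMatrix2D(center, angle, 1.0),
                            (length, length))          # rotate kernel to motion direction
    kernel /= kernel.sum() + 1e-6
    return cv2.filter2D(img, -1, kernel)
```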
iperov
b72d5a3f9a fixed error "Failed to get convolution algorithm" on some systems
fixed error "dll load failed" on some systems
Expanded the eyebrow line of the face masks. It does not affect the mask of the FAN-x converter modes.
2019-08-11 11:17:22 +04:00
iperov
ab714dfbfe _ 2019-04-28 16:29:32 +04:00
iperov
0e088f6415 _ 2019-04-25 14:46:39 +04:00
iperov
e58197ca22 initial code to extract umdfaces.io dataset and train pose estimator 2019-04-23 08:14:09 +04:00
iperov
5ac7e5d7f1 changed help message for pixel loss:
Pixel loss may help to enhance fine details and stabilize face color. Use it only if quality does not improve over time.

SAE:
previous SAE model will not work with this update.
Greatly decreased chance of model collapse.
Increased model accuracy.
Residual blocks are now the default, and this option has been removed.
Improved 'learn mask'.
Added masked preview (switch with the space key)

Converter:
fixed rct/lct in seamless mode
added mask mode (6) learned*FAN-prd*FAN-dst

added a mask editor; it is intended for refining the dataset for the FANSeg model, not for production, but you can spend the time to test it on regular fakes with face obstructions
2019-04-04 10:22:53 +04:00
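The name of mask mode (6) suggests a straightforward element-wise product of the three masks; the trivial sketch below reflects that reading (the converter's real implementation may add clipping or blurring).

```python
def learned_fan_prd_fan_dst(learned, fan_prd, fan_dst):
    """All masks: HxW float arrays in [0,1]. A pixel survives only where all three agree."""
    return learned * fan_prd * fan_dst
```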
iperov
b03b147bae refactoring 2019-03-26 11:09:44 +04:00
iperov
7a0cc56603 extractor: fixes, optimizations,
manual extractor: added 'a' option to switch accuracy mode
2019-03-21 18:56:32 +04:00
iperov
a3df04999c removing trailing spaces 2019-03-19 23:53:27 +04:00
iperov
73e91cc0b5 upd LandmarksProcessor.py 2019-03-17 21:14:05 +04:00
iperov
f3b343c0e5 small fixes and refactorings 2019-03-16 20:55:51 +04:00
iperov
1beb2f07f0 fix get_image_hull_mask 2019-03-14 15:43:10 +04:00
iperov
9823421a44 added transparent mask to draw_landmarks 2019-03-14 12:16:21 +04:00
iperov
438213e97c manual extractor: increased FPS,
sort by final: now you can specify a target number of images,
converter: fixed seamless mask and an exception,
huge refactoring
2019-02-28 11:56:31 +04:00
iperov
6e12594af1 added util --add-landmarks-debug-images 2019-02-13 10:17:08 +04:00
iperov
06fe1314d8 removed the default yaw_value from DFLIMG files,
added a better pitch/yaw estimator from the 68 landmarks (sketched below),
improved face yaw accuracy for sorting and trainers,
added sort by face-pitch
2019-02-12 21:31:37 +04:00
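A common way to estimate pitch/yaw from the 68 landmarks is solvePnP against a small generic 3D face model. The sketch below uses textbook approximate 3D points and a crude pinhole camera; it is not the estimator added in this commit.

```python
import numpy as np
import cv2

# rough generic 3D face points: nose tip, chin, eye corners, mouth corners
MODEL_3D = np.array([(0.0, 0.0, 0.0), (0.0, -330.0, -65.0),
                     (-225.0, 170.0, -135.0), (225.0, 170.0, -135.0),
                     (-150.0, -150.0, -125.0), (150.0, -150.0, -125.0)], dtype=np.float64)
IMAGE_IDX = [30, 8, 36, 45, 48, 54]   # matching 68-point landmark indices

def estimate_pitch_yaw(landmarks_68, img_w, img_h):
    pts_2d = landmarks_68[IMAGE_IDX].astype(np.float64)
    cam = np.array([[img_w, 0, img_w / 2.0],
                    [0, img_w, img_h / 2.0],
                    [0, 0, 1]], dtype=np.float64)        # crude pinhole camera matrix
    _, rvec, _ = cv2.solvePnP(MODEL_3D, pts_2d, cam, np.zeros((4, 1)))
    rmat, _ = cv2.Rodrigues(rvec)
    pitch = np.degrees(np.arctan2(rmat[2, 1], rmat[2, 2]))
    yaw = np.degrees(np.arctan2(-rmat[2, 0], np.hypot(rmat[0, 0], rmat[1, 0])))
    return pitch, yaw
```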
iperov
b6c4171ea1 optimizations of nnlib and SampleGeneratorFace,
refactorings
2019-01-22 11:52:04 +04:00
iperov
64c3e57f1c added option to converter --output-face-scale-modifier 2018-11-28 20:38:48 +04:00
Artem Ivanov
f87ee259b0
Landmarks nose drawing fix
Fixes nose landmark drawing if `image_landmarks` is passed as a `numpy.array`
2018-08-24 14:16:47 +03:00
David
b877367260 Fix for Issue #13
Swapped the + operator for np.concatenate to ensure that the resulting array is 2 by X.
This ensures that we are able to draw the points in the array.
2018-08-19 17:41:57 -04:00
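A tiny sketch of the difference behind this fix: with numpy arrays, the + operator adds element-wise (or broadcasts) instead of joining the point lists, whereas np.concatenate keeps the combined result a drawable 2-by-X array. The shapes below are illustrative, not the exact ones from draw_landmarks.

```python
import numpy as np

group_a = np.array([[10, 30], [20, 40]])   # shape (2, 2): rows are x and y coords
group_b = np.array([[50], [60]])           # shape (2, 1)

# buggy: `group_a + group_b` broadcasts and ADDS the coordinates instead of joining them
points = np.concatenate([group_a, group_b], axis=1)   # shape (2, 3) -> still "2 by X"
print(points)
```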
iperov
a1ff86a6b4 1 2018-07-09 00:09:24 +04:00
Artem Ivanov
a8c0613c79
shorter and faster version of draw_landmarks
- drawing lines via single CV call
2018-07-07 16:58:50 +03:00
Artem Ivanov
48e281f675
Better face landmark representation
- fixed the nose missing one point
- made anti-aliased (AA) lines instead of regular ones
- made closed polylines for the eyes and mouth
- reduced the radius of the circles for the eyes, nose, and mouth, as they obscure the lines in small images
2018-07-07 04:59:17 +03:00
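The two draw_landmarks commits above boil down to grouping the 68 points into curves and handing them to cv2.polylines in bulk, with anti-aliased lines and closed polylines for the eyes and mouth. The grouping and styling below are an illustrative reconstruction, not the file's exact code.

```python
import numpy as np
import cv2

def draw_landmarks(img, landmarks_68, color=(0, 255, 0)):
    pts = landmarks_68.astype(np.int32)
    open_curves   = [pts[0:17], pts[17:22], pts[22:27], pts[27:31], pts[31:36]]  # jaw, brows, nose
    closed_curves = [pts[36:42], pts[42:48], pts[48:60], pts[60:68]]             # eyes, outer/inner mouth
    cv2.polylines(img, open_curves, False, color, 1, cv2.LINE_AA)   # one call per group, AA lines
    cv2.polylines(img, closed_curves, True, color, 1, cv2.LINE_AA)
    return img
```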
Christopher Throwaway
c8c8e8dadc Manual preview now draws 68-pt face landmarks
It could be difficult to tell if the point cloud was 'correct' or not
when manually fixing a face detection. For 68-point face landmarks, the
facial landmarks are now drawn to make it easier to tell if the face
is correctly detected.
2018-06-27 21:32:31 -05:00
iperov
6bd5a44264 initial 2018-06-04 17:12:43 +04:00