added experimental face type 'whole_face'

Basic usage instruction: https://i.imgur.com/w7LkId2.jpg

	'whole_face' requires skill in Adobe After Effects.

	To use whole_face, extract whole_face face sets with
	4) data_src extract whole_face
	and
	5) data_dst extract whole_face
	Images are extracted at 512 resolution, so they can also be reused for regular full_face and half_face models (see the sketch below).
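
A minimal sketch of the resolution rule above, assuming a simplified FaceType enum and a hypothetical extract_resolution() helper (neither is DeepFaceLab's actual code):

from enum import IntEnum

class FaceType(IntEnum):
    # illustrative subset; the real project defines more types
    HALF = 0
    FULL = 1
    WHOLE_FACE = 2

def extract_resolution(face_type: FaceType) -> int:
    # whole_face crops are saved at 512 px so they can later be
    # re-cropped for full_face / half_face training; the 256 px
    # fallback for other types is an assumption, not project code
    return 512 if face_type == FaceType.WHOLE_FACE else 256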

	'whole_face' covers the whole area of the face, including the forehead, in the training square,
	but the training mask is still 'full_face',
	so the final masking and compositing must be done manually in Adobe After Effects (see the compositing sketch below).
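
Conceptually, the manual After Effects step is an alpha blend of the raw merged frame over the original frame using a hand-drawn mask that extends past the full_face boundary. A minimal sketch, with all names illustrative:

import numpy as np

def composite(original, merged_raw_rgb, hand_mask):
    # original, merged_raw_rgb: float32 frames in [0, 1], shape (H, W, 3)
    # hand_mask: float32 alpha painted by hand (e.g. in After Effects),
    #            shape (H, W, 1), extended to cover the forehead
    return merged_raw_rgb * hand_mask + original * (1.0 - hand_mask)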

added option 'masked_training'
	This option is available only for 'whole_face' type.
	Default is ON.
	Masked training clips the training area to the full_face mask,
	so the network trains the face region properly (see the loss sketch below).
	When the face is trained well enough, disable this option to train the whole area of the frame.
	Merge with the 'raw-rgb' mode, then use Adobe After Effects to manually mask, tune the color, and compose the whole face including the forehead.
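
A minimal sketch of what masked training does conceptually: outside the full_face mask, prediction and target are zeroed out, so those pixels contribute nothing to the loss. Function and argument names are assumptions, not DeepFaceLab's internals:

import numpy as np

def reconstruction_loss(pred, target, full_face_mask, masked_training=True):
    # pred, target: float32 images in [0, 1], shape (H, W, 3)
    # full_face_mask: float32 mask in [0, 1], shape (H, W, 1)
    if masked_training:
        # clip the training area to the full_face mask
        pred = pred * full_face_mask
        target = target * full_face_mask
    # plain per-pixel MSE; once the face has converged, masked_training
    # is switched off so the forehead/background region is learned too
    return float(np.mean((pred - target) ** 2))
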
Colombo 2020-02-21 16:21:04 +04:00
parent 778fb94246
commit f1d115b63b
10 changed files with 74 additions and 58 deletions


@@ -454,7 +454,6 @@ class QModel(ModelBase):
         import merger
         return self.predictor_func, (self.resolution, self.resolution, 3), merger.MergerConfigMasked(face_type=face_type,
                                                                              default_mode = 'overlay',
-                                                                             clip_hborder_mask_per=0.0625 if (face_type != FaceType.HALF) else 0,
                                                                             )
 
 Model = QModel