New script:

5.XSeg) data_dst/src mask for XSeg trainer - fetch.bat
Copies faces containing XSeg polygons to aligned_xseg\ dir.
Useful only if you want to collect labeled faces and reuse them in other fakes.
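For context: the script just scans the aligned faces, keeps the ones whose embedded metadata already contains XSeg label polygons, and copies them into aligned_xseg\ for later reuse. A minimal sketch of that idea (the has_xseg_polys predicate stands in for DeepFaceLab's own metadata reader and is not its real API):

import shutil
from pathlib import Path

def fetch_labeled_faces(aligned_dir, xseg_dir, has_xseg_polys):
    # Copy every aligned face that already carries XSeg polygons.
    xseg_dir = Path(xseg_dir)
    xseg_dir.mkdir(parents=True, exist_ok=True)
    for filepath in sorted(Path(aligned_dir).glob('*.jpg')):
        if has_xseg_polys(filepath):   # metadata check, not a pixel check
            shutil.copy2(filepath, xseg_dir / filepath.name)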

Now you can use the trained XSeg mask in the SAEHD training process.
This means the default ‘full_face’ mask obtained from landmarks will be replaced with the mask obtained from the trained XSeg model.
Use
5.XSeg.optional) trained mask for data_dst/data_src - apply.bat
5.XSeg.optional) trained mask for data_dst/data_src - remove.bat

Normally you don’t need it. Use it if you want to use ‘face_style’ and ‘bg_style’ with obstructions.
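In other words, ‘apply’ runs the trained XSeg model over every aligned face in data_dst/data_src and stores the predicted mask with the face (in its embedded metadata), where the trainer and merger can pick it up; ‘remove’ deletes that stored mask again. A rough sketch of the flow, with predict_mask and the metadata helpers as hypothetical placeholders rather than DeepFaceLab's actual API:

import numpy as np

def apply_trained_mask(face_paths, predict_mask, write_mask_to_metadata):
    # Store a trained segmentation mask inside each aligned face's metadata.
    for path in face_paths:
        mask = predict_mask(path)                    # float mask, HxWx1, values in [0,1]
        mask = np.clip(mask, 0.0, 1.0).astype(np.float32)
        write_mask_to_metadata(path, mask)

def remove_trained_mask(face_paths, clear_mask_in_metadata):
    # Strip the stored mask so training falls back to the landmark hull mask.
    for path in face_paths:
        clear_mask_in_metadata(path)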

XSeg trainer: now you can choose the type of face
XSeg trainer: now you can restart training in “override settings”
Merger: XSeg-* modes can now be used with all types of faces.

Therefore the old MaskEditor, FANSEG models, and FAN-x modes have been removed,
because the new XSeg solution is better, simpler, and more convenient, costing only about 1 hour of manual masking for a regular deepfake.
Colombo committed on 2020-03-30 14:00:40 +04:00
commit 6d3607a13d, parent e5bad483ca
30 changed files with 279 additions and 1520 deletions

@@ -56,8 +56,14 @@ class SampleProcessor(object):
                 ct_sample_bgr = None
                 h,w,c = sample_bgr.shape

                 def get_full_face_mask():
-                    full_face_mask = LandmarksProcessor.get_image_hull_mask (sample_bgr.shape, sample_landmarks, eyebrows_expand_mod=sample.eyebrows_expand_mod )
+                    if sample.xseg_mask is not None:
+                        full_face_mask = sample.xseg_mask
+                        if full_face_mask.shape[0] != h or full_face_mask.shape[1] != w:
+                            full_face_mask = cv2.resize(full_face_mask, (w,h), interpolation=cv2.INTER_CUBIC)
+                            full_face_mask = imagelib.normalize_channels(full_face_mask, 1)
+                    else:
+                        full_face_mask = LandmarksProcessor.get_image_hull_mask (sample_bgr.shape, sample_landmarks, eyebrows_expand_mod=sample.eyebrows_expand_mod )
                     return np.clip(full_face_mask, 0, 1)

                 def get_eyes_mask():
@@ -125,19 +131,18 @@ class SampleProcessor(object):
                     raise Exception ('sample %s type %s does not match model requirement %s. Consider extract necessary type of faces.' % (sample.filename, sample.face_type, face_type) )

                 if sample_type == SPST.FACE_MASK:
                     if face_mask_type == SPFMT.FULL_FACE:
                         img = get_full_face_mask()
                     elif face_mask_type == SPFMT.EYES:
                         img = get_eyes_mask()
                     elif face_mask_type == SPFMT.FULL_FACE_EYES:
-                        img = get_full_face_mask() + get_eyes_mask()
+                        img = get_full_face_mask()
+                        img += get_eyes_mask()*img
                     else:
                         img = np.zeros ( sample_bgr.shape[0:2]+(1,), dtype=np.float32)

-                    if sample.ie_polys is not None:
-                        sample.ie_polys.overlay_mask(img)

                     if sample_face_type == FaceType.MARK_ONLY:
                         mat = LandmarksProcessor.get_transform_mat (sample_landmarks, warp_resolution, face_type)
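Two details of the diff are worth spelling out. get_full_face_mask() now prefers the XSeg mask stored with the sample, resizing it to the sample size when needed (normalize_channels restores the channel axis that cv2.resize drops) and falling back to the landmark hull mask when no XSeg mask is present. And FULL_FACE_EYES no longer simply adds the two masks: multiplying the eyes mask by the face mask before adding keeps the eye emphasis inside the face mask, which with XSeg may exclude obstructions. A toy NumPy check of that second change, not DFL code:

import numpy as np

face = np.array([0.0, 1.0, 1.0, 0.0], dtype=np.float32)   # toy 1-D face mask
eyes = np.array([1.0, 1.0, 0.0, 0.0], dtype=np.float32)   # toy eyes mask

old = face + eyes           # [1., 2., 1., 0.]  eye weight leaks outside the face mask
new = face + eyes * face    # [0., 2., 1., 0.]  eye weight stays inside the face mask
print(old, new)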