Merge remote-tracking branch 'jh/master'

seranus · 2021-08-19 13:43:53 +02:00 · commit 83bc49eb92
64 changed files with 3551 additions and 600 deletions

.github/FUNDING.yml (vendored, new file — contents not shown)

CHANGELOG.md (new file, 154 lines)

@@ -0,0 +1,154 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.8.0] - 2021-06-20
### Added
- Morph factor option
- Migrated options from SAEHD to AMP:
- Loss function
- Random downsample
- Random noise
- Random blur
- Random jpeg
- Background Power
- CT mode: fs-aug
- Random color
## [1.7.3] - 2021-06-16
### Fixed
- AMP mask type
## [1.7.2] - 2021-06-15
### Added
- New sample degradation options (these only affect the input, similar to random warp):
- Random noise (gaussian/laplace/poisson)
- Random blur (gaussian/motion)
- Random jpeg compression
- Random downsampling
- New "warped" preview(s): Shows the input samples with any/all distortions.
## [1.7.1] - 2021-06-15
### Added
- New autobackup options:
- Session name
- ISO Timestamps (instead of numbered)
- Max number of backups to keep (use "0" for unlimited)
## [1.7.0] - 2021-06-15
### Updated
- Merged in latest changes from upstream, including new AMP model
## [1.6.2] - 2021-05-08
### Fixed
- Fixed bug with GAN smoothing/noisy labels with certain versions of Tensorflow
## [1.6.1] - 2021-05-04
### Fixed
- Fixed bug when `fs-aug` used on model with same resolution as dataset
## [1.6.0] - 2021-05-04
### Added
- New loss function "MS-SSIM+L1", based on ["Loss Functions for Image Restoration with Neural Networks"](https://research.nvidia.com/publication/loss-functions-image-restoration-neural-networks)
## [1.5.1] - 2021-04-23
### Fixed
- Fixes bug with MS-SSIM when using a version of tensorflow < 1.14
## [1.5.0] - 2021-03-29
### Changed
- Web UI previews now show the preview pane as PNG (lossless) instead of JPG (lossy), so the web output matches what
you see on desktop, without any changes from JPG compression. The side effect is that preview images load more slowly
over the web, since they are now larger; a future update may add an option to view as JPG instead.
## [1.4.2] - 2021-03-26
### Fixed
- Fixes bug in background power with MS-SSIM, that misattributed loss from dst to src
## [1.4.1] - 2021-03-25
### Fixed
- When both Background Power and MS-SSIM were enabled, the src and dst losses were being overwritten with the
"background power" losses. Fixed so the "background power" losses are properly added to the total losses.
- *Note: since all the other losses were being skipped when MS-SSIM and background power were both enabled, this had
the side effect of lowering the memory requirements (and raising the max batch size). With this fix, you may
experience an OOM error on models run with both of these features enabled. I may revisit this in a future feature,
allowing you to manually disable certain loss calculations for similar performance benefits.*
## [1.4.0] - 2021-03-24
### Added
- [MS-SSIM loss training option](doc/features/ms-ssim)
- GAN version option (v2 - late 2020 or v3 - current GAN)
- [GAN label smoothing and label noise options](doc/features/gan-options)
### Fixed
- Background Power now uses the entire image, not just the area outside of the mask, for comparison.
This should help with rough areas directly next to the mask.
## [1.3.0] - 2021-03-20
### Added
- [Background Power training option](doc/features/background-power/README.md)
## [1.2.1] - 2021-03-20
### Fixed
- Fixes bug with `fs-aug` color mode.
## [1.2.0] - 2021-03-17
### Added
- [Random color training option](doc/features/random-color/README.md)
## [1.1.5] - 2021-03-16
### Fixed
- Fixed unclosed websocket in Web UI client when exiting
## [1.1.4] - 2021-03-16
### Fixed
- Fixed bug when exiting from Web UI
## [1.1.3] - 2021-03-16
### Changed
- Updated changelog with unreleased features, links to working branches
## [1.1.2] - 2021-03-12
### Fixed
- [Fixed missing predicted src mask in 'SAEHD masked' preview](doc/fixes/predicted_src_mask/README.md)
## [1.1.1] - 2021-03-12
### Added
- CHANGELOG file for tracking updates, new features, and bug fixes
- Documentation for Web UI
- Link to CHANGELOG at top of README
## [1.1.0] - 2021-03-11
### Added
- [Web UI for training preview](doc/features/webui/README.md)
## [1.0.0] - 2021-03-09
### Initialized
- Reset stale master branch to [seranus/DeepFaceLab](https://github.com/seranus/DeepFaceLab),
21 commits ahead of [iperov/DeepFaceLab](https://github.com/iperov/DeepFaceLab) ([compare](https://github.com/iperov/DeepFaceLab/compare/4818183...seranus:3f5ae05))
[1.8.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.7.3...v1.8.0
[1.7.3]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.7.2...v1.7.3
[1.7.2]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.7.1...v1.7.2
[1.7.1]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.7.0...v1.7.1
[1.7.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.6.2...v1.7.0
[1.6.2]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.6.1...v1.6.2
[1.6.1]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.6.0...v1.6.1
[1.6.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.5.1...v1.6.0
[1.5.1]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.5.0...v1.5.1
[1.5.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.4.2...v1.5.0
[1.4.2]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.4.1...v1.4.2
[1.4.1]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.4.0...v1.4.1
[1.4.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.3.0...v1.4.0
[1.3.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.2.1...v1.3.0
[1.2.1]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.2.0...v1.2.1
[1.2.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.1.5...v1.2.0
[1.1.5]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.1.4...v1.1.5
[1.1.4]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.1.3...v1.1.4
[1.1.3]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.1.2...v1.1.3
[1.1.2]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.1.1...v1.1.2
[1.1.1]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.1.0...v1.1.1
[1.1.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.0.0...v1.1.0
[1.0.0]: https://github.com/faceshiftlabs/DeepFaceLab/releases/tag/v1.0.0

README.md

@@ -1,4 +1,11 @@
+[![Patreon](https://c5.patreon.com/external/logo/become_a_patron_button@2x.png)](https://www.patreon.com/bePatron?u=22997465)
+# CHANGELOG
+### [View most recent changes](CHANGELOG.md)
+![](doc/dfl_cover.png)
<table align="center" border="0">
<tr><td colspan=2 align="center">
@@ -19,9 +26,9 @@ https://arxiv.org/abs/2005.05535</a>
<p align="center">
-![](doc/logo_cuda.png)
![](doc/logo_tensorflow.png)
-![](doc/logo_python.png)
+![](doc/logo_cuda.png)
+![](doc/logo_directx.png)
</p>
@@ -29,8 +36,8 @@ More than 95% of deepfake videos are created with DeepFaceLab.
DeepFaceLab is used by such popular youtube channels as
-|![](doc/tiktok_icon.png) [deeptomcruise](https://www.tiktok.com/@deeptomcruise)|
-|---|
+|![](doc/tiktok_icon.png) [deeptomcruise](https://www.tiktok.com/@deeptomcruise)|![](doc/tiktok_icon.png) [1facerussia](https://www.tiktok.com/@1facerussia)|![](doc/tiktok_icon.png) [arnoldschwarzneggar](https://www.tiktok.com/@arnoldschwarzneggar)|
+|---|---|---|
|![](doc/youtube_icon.png) [Ctrl Shift Face](https://www.youtube.com/channel/UCKpH0CKltc73e4wh0_pgL3g)|![](doc/youtube_icon.png) [VFXChris Ume](https://www.youtube.com/channel/UCGf4OlX_aTt8DlrgiH3jN3g/videos)|![](doc/youtube_icon.png) [Sham00k](https://www.youtube.com/channel/UCZXbWcv7fSZFTAZV4beckyw/videos)|
|---|---|---|
@@ -194,7 +201,7 @@ Unfortunately, there is no "make everything ok" button in DeepFaceLab. You shoul
</td></tr>
<tr><td align="right">
-<a href="https://tinyurl.com/y8lntghz">Windows (magnet link)</a>
+<a href="https://tinyurl.com/4tb2tn4w">Windows (magnet link)</a>
</td><td align="center">Last release. Use torrent client to download.</td></tr>
<tr><td align="right">
@@ -333,10 +340,6 @@ QQ 951138799
bitcoin:bc1qkhh7h0gwwhxgg6h6gpllfgstkd645fefrd5s6z
</td></tr>
-<tr><td align="right">
-Alipay 捐款
-</td><td align="center"> <img src="doc/Alipay_donation.jpg" align="center"> </td></tr>
<tr><td colspan=2 align="center">
### Collect facesets

@@ -17,6 +17,7 @@ class QIconDB():
        QIconDB.poly_type_exclude = QIcon ( str(icon_path / 'poly_type_exclude.png') )
        QIconDB.left = QIcon ( str(icon_path / 'left.png') )
        QIconDB.right = QIcon ( str(icon_path / 'right.png') )
        QIconDB.trashcan = QIcon ( str(icon_path / 'trashcan.png') )
        QIconDB.pt_edit_mode = QIcon ( str(icon_path / 'pt_edit_mode.png') )
        QIconDB.view_lock_center = QIcon ( str(icon_path / 'view_lock_center.png') )
        QIconDB.view_baked = QIcon ( str(icon_path / 'view_baked.png') )

@@ -85,6 +85,11 @@ class QStringDB():
                                          'zh' : '保存并转到下一张图片\n按住SHIFT : 加快\n按住CTRL : 跳过未标记的\n',
                                        }[lang]

        QStringDB.btn_delete_image_tip = { 'en' : 'Move to _trash and Next image\n',
                                           'ru' : 'Переместить в _trash и следующее изображение\n',
                                           'zh' : '移至_trash转到下一张图片 ',
                                         }[lang]

        QStringDB.loading_tip = {'en' : 'Loading',
                                 'ru' : 'Загрузка',
                                 'zh' : '正在载入',

@@ -1164,6 +1164,7 @@ class MainWindow(QXMainWindow):
        super().__init__()
        self.input_dirpath = input_dirpath
        self.trash_dirpath = input_dirpath.parent / (input_dirpath.name + '_trash')
        self.cfg_root_path = cfg_root_path

        self.cfg_path = cfg_root_path / 'MainWindow_cfg.dat'
@@ -1342,6 +1343,17 @@ class MainWindow(QXMainWindow):
        self.update_cached_images()
        self.update_preview_bar()

    def trash_current_image(self):
        # advance to the next image, then move the one just finished into <input>_trash
        self.process_next_image()

        img_path = self.image_paths_done.pop(-1)
        img_path = Path(img_path)
        self.trash_dirpath.mkdir(parents=True, exist_ok=True)
        img_path.rename( self.trash_dirpath / img_path.name )

        self.update_cached_images()
        self.update_preview_bar()

    def initialize_ui(self):

        self.canvas = QCanvas()
@@ -1356,19 +1368,35 @@ class MainWindow(QXMainWindow):
        btn_next_image = QXIconButton(QIconDB.right, QStringDB.btn_next_image_tip, shortcut='D', click_func=self.process_next_image)
        btn_next_image.setIconSize(QUIConfig.preview_bar_icon_q_size)

        btn_delete_image = QXIconButton(QIconDB.trashcan, QStringDB.btn_delete_image_tip, shortcut='X', click_func=self.trash_current_image)
        btn_delete_image.setIconSize(QUIConfig.preview_bar_icon_q_size)

        pad_image = QWidget()
        pad_image.setFixedSize(QUIConfig.preview_bar_icon_q_size)

        preview_image_bar_frame_l = QHBoxLayout()
        preview_image_bar_frame_l.setContentsMargins(0,0,0,0)
        preview_image_bar_frame_l.addWidget ( pad_image, alignment=Qt.AlignCenter)
        preview_image_bar_frame_l.addWidget ( btn_prev_image, alignment=Qt.AlignCenter)
        preview_image_bar_frame_l.addWidget ( image_bar)
        preview_image_bar_frame_l.addWidget ( btn_next_image, alignment=Qt.AlignCenter)
        #preview_image_bar_frame_l.addWidget ( btn_delete_image, alignment=Qt.AlignCenter)

        preview_image_bar_frame = QFrame()
        preview_image_bar_frame.setSizePolicy ( QSizePolicy.Fixed, QSizePolicy.Fixed )
        preview_image_bar_frame.setLayout(preview_image_bar_frame_l)

        preview_image_bar_frame2_l = QHBoxLayout()
        preview_image_bar_frame2_l.setContentsMargins(0,0,0,0)
        preview_image_bar_frame2_l.addWidget ( btn_delete_image, alignment=Qt.AlignCenter)

        preview_image_bar_frame2 = QFrame()
        preview_image_bar_frame2.setSizePolicy ( QSizePolicy.Fixed, QSizePolicy.Fixed )
        preview_image_bar_frame2.setLayout(preview_image_bar_frame2_l)

        preview_image_bar_l = QHBoxLayout()
-       preview_image_bar_l.addWidget (preview_image_bar_frame)
+       preview_image_bar_l.addWidget (preview_image_bar_frame, alignment=Qt.AlignCenter)
        preview_image_bar_l.addWidget (preview_image_bar_frame2)

        preview_image_bar = QFrame()
        preview_image_bar.setFrameShape(QFrame.StyledPanel)

(new binary file, 3.2 KiB image — not shown)

@@ -77,6 +77,8 @@ class SegIEPoly():
        self.pts = np.array(pts)
        self.n_max = self.n = len(pts)

    def mult_points(self, val):
        self.pts *= val

@@ -137,6 +139,10 @@ class SegIEPolys():
    def dump(self):
        return {'polys' : [ poly.dump() for poly in self.polys ] }

    def mult_points(self, val):
        for poly in self.polys:
            poly.mult_points(val)

    @staticmethod
    def load(data=None):
        ie_polys = SegIEPolys()

@@ -14,14 +14,19 @@ from .reduce_colors import reduce_colors
from .color_transfer import color_transfer, color_transfer_mix, color_transfer_sot, color_transfer_mkl, color_transfer_idt, color_hist_match, reinhard_color_transfer, linear_color_transfer, color_augmentation
-from .common import normalize_channels, cut_odd_image, overlay_alpha_image
+from .common import random_crop, normalize_channels, cut_odd_image, overlay_alpha_image
from .SegIEPolys import *
from .blursharpen import LinearMotionBlur, blursharpen
from .filters import apply_random_rgb_levels, \
                     apply_random_overlay_triangle, \
                     apply_random_hsv_shift, \
                     apply_random_sharpen, \
                     apply_random_motion_blur, \
                     apply_random_gaussian_blur, \
-                     apply_random_bilinear_resize
+                     apply_random_nearest_resize, \
+                     apply_random_bilinear_resize, \
+                     apply_random_jpeg_compress, \
+                     apply_random_relight

@@ -373,6 +373,7 @@ def color_transfer(ct_mode, img_src, img_trg):
# imported from faceswap
def color_augmentation(img, seed=None):
    """ Color adjust RGB image """
    img = img.astype(np.float32)
    face = img
    face = np.clip(face*255.0, 0, 255).astype(np.uint8)
    face = random_clahe(face, seed)
@@ -381,6 +382,25 @@ def color_augmentation(img, seed=None):
    return (face / 255.0).astype(np.float32)

def random_lab_rotation(image, seed=None):
    """
    Randomly rotates image color around the L axis in LAB colorspace,
    keeping perceptual lightness constant.
    """
    image = cv2.cvtColor(image.astype(np.float32), cv2.COLOR_BGR2LAB)
    M = np.eye(3)
    # random 2x2 rotation applied to the a/b (chroma) plane; L is left untouched
    M[1:, 1:] = special_ortho_group.rvs(2, 1, seed)
    image = image.dot(M)
    l, a, b = cv2.split(image)
    l = np.clip(l, 0, 100)
    a = np.clip(a, -127, 127)
    b = np.clip(b, -127, 127)
    image = cv2.merge([l, a, b])
    image = cv2.cvtColor(image.astype(np.float32), cv2.COLOR_LAB2BGR)
    np.clip(image, 0, 1, out=image)
    return image

def random_lab(image, seed=None):
    """ Perform random color/lightness adjustment in L*a*b* colorspace """
    random.seed(seed)

@@ -1,5 +1,16 @@
import numpy as np

def random_crop(img, w, h):
    # pick a random w-by-h window inside img (falls back to the origin if img is smaller)
    height, width = img.shape[:2]
    h_rnd = height - h
    w_rnd = width - w

    y = np.random.randint(0, h_rnd) if h_rnd > 0 else 0
    x = np.random.randint(0, w_rnd) if w_rnd > 0 else 0

    # note: fixed to slice by the requested crop size (the original sliced y:y+height, x:x+width,
    # which returned the remainder of the image instead of a w-by-h crop)
    return img[y:y+h, x:x+w]

def normalize_channels(img, target_channels):
    img_shape_len = len(img.shape)
    if img_shape_len == 2:
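A quick shape check of `random_crop` as fixed above (array sizes are illustrative):

```python
import numpy as np

img = np.zeros((480, 640, 3), np.float32)
crop = random_crop(img, 256, 128)   # requested width=256, height=128
print(crop.shape)                   # -> (128, 256, 3)
```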

@@ -1,5 +1,5 @@
import numpy as np
-from .blursharpen import LinearMotionBlur
+from .blursharpen import LinearMotionBlur, blursharpen
import cv2

def apply_random_rgb_levels(img, mask=None, rnd_state=None):
@@ -38,6 +38,24 @@ def apply_random_hsv_shift(img, mask=None, rnd_state=None):
    return result

def apply_random_sharpen( img, chance, kernel_max_size, mask=None, rnd_state=None ):
    if rnd_state is None:
        rnd_state = np.random

    sharp_rnd_kernel = rnd_state.randint(kernel_max_size)+1

    result = img
    if rnd_state.randint(100) < np.clip(chance, 0, 100):
        if rnd_state.randint(2) == 0:
            result = blursharpen(result, 1, sharp_rnd_kernel, rnd_state.randint(10) )
        else:
            result = blursharpen(result, 2, sharp_rnd_kernel, rnd_state.randint(50) )

        if mask is not None:
            result = img*(1-mask) + result*mask

    return result

def apply_random_motion_blur( img, chance, mb_max_size, mask=None, rnd_state=None ):
    if rnd_state is None:
        rnd_state = np.random
@@ -66,8 +84,7 @@ def apply_random_gaussian_blur( img, chance, kernel_max_size, mask=None, rnd_sta
    return result

-def apply_random_bilinear_resize( img, chance, max_size_per, mask=None, rnd_state=None ):
+def apply_random_resize( img, chance, max_size_per, interpolation=cv2.INTER_LINEAR, mask=None, rnd_state=None ):
    if rnd_state is None:
        rnd_state = np.random
@@ -79,9 +96,150 @@ def apply_random_bilinear_resize( img, chance, max_size_per, mask=None, rnd_stat
        rw = w - int( trg * int(w*(max_size_per/100.0)) )
        rh = h - int( trg * int(h*(max_size_per/100.0)) )

-        result = cv2.resize (result, (rw,rh), interpolation=cv2.INTER_LINEAR )
-        result = cv2.resize (result, (w,h), interpolation=cv2.INTER_LINEAR )
+        result = cv2.resize (result, (rw,rh), interpolation=interpolation )
+        result = cv2.resize (result, (w,h), interpolation=interpolation )

        if mask is not None:
            result = img*(1-mask) + result*mask

    return result

def apply_random_nearest_resize( img, chance, max_size_per, mask=None, rnd_state=None ):
    return apply_random_resize( img, chance, max_size_per, interpolation=cv2.INTER_NEAREST, mask=mask, rnd_state=rnd_state )

def apply_random_bilinear_resize( img, chance, max_size_per, mask=None, rnd_state=None ):
    return apply_random_resize( img, chance, max_size_per, interpolation=cv2.INTER_LINEAR, mask=mask, rnd_state=rnd_state )

def apply_random_jpeg_compress( img, chance, mask=None, rnd_state=None ):
    if rnd_state is None:
        rnd_state = np.random

    result = img
    if rnd_state.randint(100) < np.clip(chance, 0, 100):
        h,w,c = result.shape

        quality = rnd_state.randint(10,101)

        ret, result = cv2.imencode('.jpg', np.clip(img*255, 0,255).astype(np.uint8), [int(cv2.IMWRITE_JPEG_QUALITY), quality] )
        if ret == True:
            result = cv2.imdecode(result, flags=cv2.IMREAD_UNCHANGED)
            result = result.astype(np.float32) / 255.0

        if mask is not None:
            result = img*(1-mask) + result*mask

    return result

def apply_random_overlay_triangle( img, max_alpha, mask=None, rnd_state=None ):
    if rnd_state is None:
        rnd_state = np.random

    h,w,c = img.shape
    pt1 = [rnd_state.randint(w), rnd_state.randint(h) ]
    pt2 = [rnd_state.randint(w), rnd_state.randint(h) ]
    pt3 = [rnd_state.randint(w), rnd_state.randint(h) ]

    alpha = rnd_state.uniform()*max_alpha

    tri_mask = cv2.fillPoly( np.zeros_like(img), [ np.array([pt1,pt2,pt3], np.int32) ], (alpha,)*c )

    if rnd_state.randint(2) == 0:
        result = np.clip(img+tri_mask, 0, 1)
    else:
        result = np.clip(img-tri_mask, 0, 1)

    if mask is not None:
        result = img*(1-mask) + result*mask

    return result
def _min_resize(x, m):
    if x.shape[0] < x.shape[1]:
        s0 = m
        s1 = int(float(m) / float(x.shape[0]) * float(x.shape[1]))
    else:
        s0 = int(float(m) / float(x.shape[1]) * float(x.shape[0]))
        s1 = m
    new_max = min(s1, s0)
    raw_max = min(x.shape[0], x.shape[1])
    return cv2.resize(x, (s1, s0), interpolation=cv2.INTER_LANCZOS4)

def _d_resize(x, d, fac=1.0):
    new_min = min(int(d[1] * fac), int(d[0] * fac))
    raw_min = min(x.shape[0], x.shape[1])
    if new_min < raw_min:
        interpolation = cv2.INTER_AREA
    else:
        interpolation = cv2.INTER_LANCZOS4
    y = cv2.resize(x, (int(d[1] * fac), int(d[0] * fac)), interpolation=interpolation)
    return y

def _get_image_gradient(dist):
    # Sobel-style horizontal / vertical gradients
    cols = cv2.filter2D(dist, cv2.CV_32F, np.array([[-1, 0, +1], [-2, 0, +2], [-1, 0, +1]]))
    rows = cv2.filter2D(dist, cv2.CV_32F, np.array([[-1, -2, -1], [0, 0, 0], [+1, +2, +1]]))
    return cols, rows

def _generate_lighting_effects(content):
    # build a 5-level gradient pyramid and recombine it coarse-to-fine
    h512 = content
    h256 = cv2.pyrDown(h512)
    h128 = cv2.pyrDown(h256)
    h64 = cv2.pyrDown(h128)
    h32 = cv2.pyrDown(h64)
    h16 = cv2.pyrDown(h32)
    c512, r512 = _get_image_gradient(h512)
    c256, r256 = _get_image_gradient(h256)
    c128, r128 = _get_image_gradient(h128)
    c64, r64 = _get_image_gradient(h64)
    c32, r32 = _get_image_gradient(h32)
    c16, r16 = _get_image_gradient(h16)
    c = c16
    c = _d_resize(cv2.pyrUp(c), c32.shape) * 4.0 + c32
    c = _d_resize(cv2.pyrUp(c), c64.shape) * 4.0 + c64
    c = _d_resize(cv2.pyrUp(c), c128.shape) * 4.0 + c128
    c = _d_resize(cv2.pyrUp(c), c256.shape) * 4.0 + c256
    c = _d_resize(cv2.pyrUp(c), c512.shape) * 4.0 + c512
    r = r16
    r = _d_resize(cv2.pyrUp(r), r32.shape) * 4.0 + r32
    r = _d_resize(cv2.pyrUp(r), r64.shape) * 4.0 + r64
    r = _d_resize(cv2.pyrUp(r), r128.shape) * 4.0 + r128
    r = _d_resize(cv2.pyrUp(r), r256.shape) * 4.0 + r256
    r = _d_resize(cv2.pyrUp(r), r512.shape) * 4.0 + r512
    coarse_effect_cols = c
    coarse_effect_rows = r
    EPS = 1e-10

    max_effect = np.max((coarse_effect_cols**2 + coarse_effect_rows**2)**0.5, axis=0, keepdims=True, ).max(1, keepdims=True)
    coarse_effect_cols = (coarse_effect_cols + EPS) / (max_effect + EPS)
    coarse_effect_rows = (coarse_effect_rows + EPS) / (max_effect + EPS)

    return np.stack([ np.zeros_like(coarse_effect_rows), coarse_effect_rows, coarse_effect_cols], axis=-1)

def apply_random_relight(img, mask=None, rnd_state=None):
    if rnd_state is None:
        rnd_state = np.random

    def_img = img

    # pick a light position on a random edge of the image plane
    if rnd_state.randint(2) == 0:
        light_pos_y = 1.0 if rnd_state.randint(2) == 0 else -1.0
        light_pos_x = rnd_state.uniform()*2-1.0
    else:
        light_pos_y = rnd_state.uniform()*2-1.0
        light_pos_x = 1.0 if rnd_state.randint(2) == 0 else -1.0

    light_source_height = 0.3*rnd_state.uniform()*0.7
    light_intensity = 1.0+rnd_state.uniform()
    ambient_intensity = 0.5

    light_source_location = np.array([[[light_source_height, light_pos_y, light_pos_x ]]], dtype=np.float32)
    light_source_direction = light_source_location / np.sqrt(np.sum(np.square(light_source_location)))

    lighting_effect = _generate_lighting_effects(img)
    lighting_effect = np.sum(lighting_effect * light_source_direction, axis=-1).clip(0, 1)
    lighting_effect = np.mean(lighting_effect, axis=-1, keepdims=True)

    result = def_img * (ambient_intensity + lighting_effect * light_intensity) #light_source_color
    result = np.clip(result, 0, 1)

    if mask is not None:
        result = def_img*(1-mask) + result*mask

    return result
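Together these implement the changelog's new sample-degradation options. A sketch of chaining them on one sample (the chances and kernel/percent bounds are illustrative, not the trainer's defaults):

```python
import numpy as np
from core import imagelib   # the package __init__ above re-exports these filters

img = np.random.rand(128, 128, 3).astype(np.float32)    # stand-in sample in [0,1]

img = imagelib.apply_random_sharpen(img, 25, 5)          # 25% chance, kernel up to 5
img = imagelib.apply_random_motion_blur(img, 25, 5)
img = imagelib.apply_random_gaussian_blur(img, 25, 5)
img = imagelib.apply_random_nearest_resize(img, 25, 75)  # downscale by up to 75%, then back up
img = imagelib.apply_random_jpeg_compress(img, 25)
```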

@@ -1,2 +1,2 @@
-from .draw import *
+from .draw import circle_faded, random_circle_faded, bezier, random_bezier_split_faded, random_faded
from .calc import *

@@ -1,23 +1,36 @@
"""
Signed distance drawing functions using numpy.
"""
import math
import numpy as np
from numpy import linalg as npla


def vector2_dot(a,b):
    return a[...,0]*b[...,0]+a[...,1]*b[...,1]

def vector2_dot2(a):
    return a[...,0]*a[...,0]+a[...,1]*a[...,1]

def vector2_cross(a,b):
    return a[...,0]*b[...,1]-a[...,1]*b[...,0]


-def circle_faded( hw, center, fade_dists ):
+def circle_faded( wh, center, fade_dists ):
    """
    returns drawn circle in [h,w,1] output range [0..1.0] float32

-    hw = [h,w] resolution
-    center = [y,x] center of circle
+    wh = [w,h] resolution
+    center = [x,y] center of circle
    fade_dists = [fade_start, fade_end] fade values
    """
-    h,w = hw
+    w,h = wh

    pts = np.empty( (h,w,2), dtype=np.float32 )
-    pts[...,1] = np.arange(h)[None,:]
    pts[...,0] = np.arange(w)[:,None]
+    pts[...,1] = np.arange(h)[None,:]

    pts = pts.reshape ( (h*w, -1) )
    pts_dists = np.abs ( npla.norm(pts-center, axis=-1) )
@@ -31,14 +44,157 @@ def circle_faded( hw, center, fade_dists ):

    return pts_dists.reshape ( (h,w,1) ).astype(np.float32)


def bezier( wh, A, B, C ):
    """
    returns drawn bezier in [h,w,1] output range float32,
    every pixel contains signed distance to bezier line

    wh    [w,h] resolution
    A,B,C points [x,y]
    """
    width,height = wh

    A = np.float32(A)
    B = np.float32(B)
    C = np.float32(C)

    pos = np.empty( (height,width,2), dtype=np.float32 )
    pos[...,0] = np.arange(width)[:,None]
    pos[...,1] = np.arange(height)[None,:]

    a = B-A
    b = A - 2.0*B + C
    c = a * 2.0
    d = A - pos

    b_dot = vector2_dot(b,b)
    if b_dot == 0.0:
        return np.zeros( (height,width), dtype=np.float32 )

    kk = 1.0 / b_dot

    kx = kk * vector2_dot(a,b)
    ky = kk * (2.0*vector2_dot(a,a)+vector2_dot(d,b))/3.0
    kz = kk * vector2_dot(d,a)

    res = 0.0
    sgn = 0.0

    p = ky - kx*kx
    p3 = p*p*p
    q = kx*(2.0*kx*kx - 3.0*ky) + kz
    h = q*q + 4.0*p3

    hp_sel = h >= 0.0

    hp_p = h[hp_sel]
    hp_p = np.sqrt(hp_p)

    hp_x = ( np.stack( (hp_p,-hp_p), -1) -q[hp_sel,None] ) / 2.0
    hp_uv = np.sign(hp_x) * np.power( np.abs(hp_x), [1.0/3.0, 1.0/3.0] )
    hp_t = np.clip( hp_uv[...,0] + hp_uv[...,1] - kx, 0.0, 1.0 )

    hp_t = hp_t[...,None]
    hp_q = d[hp_sel]+(c+b*hp_t)*hp_t
    hp_res = vector2_dot2(hp_q)
    hp_sgn = vector2_cross(c+2.0*b*hp_t,hp_q)

    hl_sel = h < 0.0

    hl_q = q[hl_sel]
    hl_p = p[hl_sel]
    hl_z = np.sqrt(-hl_p)
    hl_v = np.arccos( hl_q / (hl_p*hl_z*2.0)) / 3.0
    hl_m = np.cos(hl_v)
    hl_n = np.sin(hl_v)*1.732050808

    hl_t = np.clip( np.stack( (hl_m+hl_m,-hl_n-hl_m,hl_n-hl_m), -1)*hl_z[...,None]-kx, 0.0, 1.0 )

    hl_d = d[hl_sel]

    hl_qx = hl_d+(c+b*hl_t[...,0:1])*hl_t[...,0:1]
    hl_dx = vector2_dot2(hl_qx)
    hl_sx = vector2_cross(c+2.0*b*hl_t[...,0:1], hl_qx)

    hl_qy = hl_d+(c+b*hl_t[...,1:2])*hl_t[...,1:2]
    hl_dy = vector2_dot2(hl_qy)
    hl_sy = vector2_cross(c+2.0*b*hl_t[...,1:2],hl_qy)

    hl_dx_l_dy = hl_dx<hl_dy
    hl_dx_ge_dy = hl_dx>=hl_dy

    hl_res = np.empty_like(hl_dx)
    hl_res[hl_dx_l_dy] = hl_dx[hl_dx_l_dy]
    hl_res[hl_dx_ge_dy] = hl_dy[hl_dx_ge_dy]

    hl_sgn = np.empty_like(hl_sx)
    hl_sgn[hl_dx_l_dy] = hl_sx[hl_dx_l_dy]
    hl_sgn[hl_dx_ge_dy] = hl_sy[hl_dx_ge_dy]

    res = np.empty( (height, width), np.float32 )
    res[hp_sel] = hp_res
    res[hl_sel] = hl_res

    sgn = np.empty( (height, width), np.float32 )
    sgn[hp_sel] = hp_sgn
    sgn[hl_sel] = hl_sgn

    sgn = np.sign(sgn)
    res = np.sqrt(res)*sgn

    return res[...,None]


def random_faded(wh):
    """
    apply one of them:
     random_circle_faded
     random_bezier_split_faded
    """
    rnd = np.random.randint(2)
    if rnd == 0:
        return random_circle_faded(wh)
    elif rnd == 1:
        return random_bezier_split_faded(wh)


-def random_circle_faded ( hw, rnd_state=None ):
+def random_circle_faded ( wh, rnd_state=None ):
    if rnd_state is None:
        rnd_state = np.random

-    h,w = hw
-    hw_max = max(h,w)
-    fade_start = rnd_state.randint(hw_max)
-    fade_end = fade_start + rnd_state.randint(hw_max- fade_start)
+    w,h = wh
+    wh_max = max(w,h)
+    fade_start = rnd_state.randint(wh_max)
+    fade_end = fade_start + rnd_state.randint(wh_max- fade_start)

-    return circle_faded (hw, [ rnd_state.randint(h), rnd_state.randint(w) ],
+    return circle_faded (wh, [ rnd_state.randint(h), rnd_state.randint(w) ],
                         [fade_start, fade_end] )


def random_bezier_split_faded( wh ):
    width, height = wh

    degA = np.random.randint(360)
    degB = np.random.randint(360)
    degC = np.random.randint(360)

    deg_2_rad = math.pi / 180.0

    center = np.float32([width / 2.0, height / 2.0])

    radius = max(width, height)

    A = center + radius*np.float32([ math.sin( degA * deg_2_rad), math.cos( degA * deg_2_rad) ] )
    B = center + np.random.randint(radius)*np.float32([ math.sin( degB * deg_2_rad), math.cos( degB * deg_2_rad) ] )
    C = center + radius*np.float32([ math.sin( degC * deg_2_rad), math.cos( degC * deg_2_rad) ] )

    x = bezier( (width,height), A, B, C )
    x = x / (1+np.random.randint(radius)) + 0.5
    x = np.clip(x, 0, 1)

    return x
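Since everything here is plain numpy, the generated masks are easy to inspect. A small sketch (the import path is assumed from this repo's layout; display via OpenCV):

```python
import cv2
from core.imagelib.sd import random_faded   # assumed path; re-exported by the package __init__ above

mask = random_faded((256, 256))             # wh=(w,h); returns (h,w,1) float32 in [0..1]
cv2.imshow('random_faded', (mask * 255).astype('uint8'))
cv2.waitKey(0)
```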

@@ -2,7 +2,7 @@ import numpy as np
import cv2
from core import randomex

-def gen_warp_params (w, flip, rotation_range=[-2,2], scale_range=[-0.5, 0.5], tx_range=[-0.05, 0.05], ty_range=[-0.05, 0.05], rnd_state=None ):
+def gen_warp_params (w, flip=False, rotation_range=[-2,2], scale_range=[-0.5, 0.5], tx_range=[-0.05, 0.05], ty_range=[-0.05, 0.05], rnd_state=None ):
    if rnd_state is None:
        rnd_state = np.random

@@ -1,12 +1,19 @@
import sys
import ctypes
import os
import multiprocessing
import json
import time
from pathlib import Path

from core.interact import interact as io


class Device(object):
-    def __init__(self, index, name, total_mem, free_mem, cc=0):
+    def __init__(self, index, tf_dev_type, name, total_mem, free_mem):
        self.index = index
+        self.tf_dev_type = tf_dev_type
        self.name = name
-        self.cc = cc
        self.total_mem = total_mem
        self.total_mem_gb = total_mem / 1024**3
        self.free_mem = free_mem
@@ -82,12 +89,135 @@ class Devices(object):
            result.append (device)
        return Devices(result)

    @staticmethod
    def _get_tf_devices_proc(q : multiprocessing.Queue):
        # runs in a child process: imports tensorflow there, enumerates GPU/DML
        # devices, and ships (type, name, memory) tuples back through the queue
        if sys.platform[0:3] == 'win':
            compute_cache_path = Path(os.environ['APPDATA']) / 'NVIDIA' / ('ComputeCache_ALL')
            os.environ['CUDA_CACHE_PATH'] = str(compute_cache_path)
            if not compute_cache_path.exists():
                io.log_info("Caching GPU kernels...")
                compute_cache_path.mkdir(parents=True, exist_ok=True)

        import tensorflow

        tf_version = tensorflow.version.VERSION
        #if tf_version is None:
        #    tf_version = tensorflow.version.GIT_VERSION
        if tf_version[0] == 'v':
            tf_version = tf_version[1:]
        if tf_version[0] == '2':
            tf = tensorflow.compat.v1
        else:
            tf = tensorflow

        import logging
        # Disable tensorflow warnings
        tf_logger = logging.getLogger('tensorflow')
        tf_logger.setLevel(logging.ERROR)

        from tensorflow.python.client import device_lib

        devices = []

        physical_devices = device_lib.list_local_devices()
        physical_devices_f = {}
        for dev in physical_devices:
            dev_type = dev.device_type
            dev_tf_name = dev.name
            dev_tf_name = dev_tf_name[ dev_tf_name.index(dev_type) : ]

            dev_idx = int(dev_tf_name.split(':')[-1])

            if dev_type in ['GPU','DML']:
                dev_name = dev_tf_name

                dev_desc = dev.physical_device_desc
                if len(dev_desc) != 0:
                    if dev_desc[0] == '{':
                        dev_desc_json = json.loads(dev_desc)
                        dev_desc_json_name = dev_desc_json.get('name',None)
                        if dev_desc_json_name is not None:
                            dev_name = dev_desc_json_name
                    else:
                        for param, value in ( v.split(':') for v in dev_desc.split(',') ):
                            param = param.strip()
                            value = value.strip()
                            if param == 'name':
                                dev_name = value
                                break

                physical_devices_f[dev_idx] = (dev_type, dev_name, dev.memory_limit)

        q.put(physical_devices_f)
        time.sleep(0.1)

    @staticmethod
    def initialize_main_env():
-        os.environ['NN_DEVICES_INITIALIZED'] = '1'
-        os.environ['NN_DEVICES_COUNT'] = '0'
+        if int(os.environ.get("NN_DEVICES_INITIALIZED", 0)) != 0:
+            return

        if 'CUDA_VISIBLE_DEVICES' in os.environ.keys():
            os.environ.pop('CUDA_VISIBLE_DEVICES')

        os.environ['CUDA_CACHE_MAXSIZE'] = '2147483647'
        os.environ['TF_MIN_GPU_MULTIPROCESSOR_COUNT'] = '2'
        os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # tf log errors only

        q = multiprocessing.Queue()
        p = multiprocessing.Process(target=Devices._get_tf_devices_proc, args=(q,), daemon=True)
        p.start()
        p.join()

        visible_devices = q.get()

        os.environ['NN_DEVICES_INITIALIZED'] = '1'
        os.environ['NN_DEVICES_COUNT'] = str(len(visible_devices))

        for i in visible_devices:
            dev_type, name, total_mem = visible_devices[i]

            os.environ[f'NN_DEVICE_{i}_TF_DEV_TYPE'] = dev_type
            os.environ[f'NN_DEVICE_{i}_NAME'] = name
            os.environ[f'NN_DEVICE_{i}_TOTAL_MEM'] = str(total_mem)
            os.environ[f'NN_DEVICE_{i}_FREE_MEM'] = str(total_mem)

    @staticmethod
    def getDevices():
        if Devices.all_devices is None:
            if int(os.environ.get("NN_DEVICES_INITIALIZED", 0)) != 1:
                raise Exception("nn devices are not initialized. Run initialize_main_env() in main process.")
            devices = []
            for i in range ( int(os.environ['NN_DEVICES_COUNT']) ):
                devices.append ( Device(index=i,
                                        tf_dev_type=os.environ[f'NN_DEVICE_{i}_TF_DEV_TYPE'],
                                        name=os.environ[f'NN_DEVICE_{i}_NAME'],
                                        total_mem=int(os.environ[f'NN_DEVICE_{i}_TOTAL_MEM']),
                                        free_mem=int(os.environ[f'NN_DEVICE_{i}_FREE_MEM']), )
                               )
            Devices.all_devices = Devices(devices)

        return Devices.all_devices

"""
        # {'name' : name.split(b'\0', 1)[0].decode(),
        # 'total_mem' : totalMem.value
        # }
        return

        min_cc = int(os.environ.get("TF_MIN_REQ_CAP", 35))
        libnames = ('libcuda.so', 'libcuda.dylib', 'nvcuda.dll')
        for libname in libnames:
@@ -139,70 +269,4 @@ class Devices(object):
            os.environ[f'NN_DEVICE_{i}_TOTAL_MEM'] = str(device['total_mem'])
            os.environ[f'NN_DEVICE_{i}_FREE_MEM'] = str(device['free_mem'])
            os.environ[f'NN_DEVICE_{i}_CC'] = str(device['cc'])

-    @staticmethod
-    def getDevices():
-        if Devices.all_devices is None:
-            if int(os.environ.get("NN_DEVICES_INITIALIZED", 0)) != 1:
-                raise Exception("nn devices are not initialized. Run initialize_main_env() in main process.")
-            devices = []
-            for i in range ( int(os.environ['NN_DEVICES_COUNT']) ):
-                devices.append ( Device(index=i,
-                                        name=os.environ[f'NN_DEVICE_{i}_NAME'],
-                                        total_mem=int(os.environ[f'NN_DEVICE_{i}_TOTAL_MEM']),
-                                        free_mem=int(os.environ[f'NN_DEVICE_{i}_FREE_MEM']),
-                                        cc=int(os.environ[f'NN_DEVICE_{i}_CC']) ))
-            Devices.all_devices = Devices(devices)
-        return Devices.all_devices
-
-    if Devices.all_devices is None:
-        min_cc = int(os.environ.get("TF_MIN_REQ_CAP", 35))
-        libnames = ('libcuda.so', 'libcuda.dylib', 'nvcuda.dll')
-        for libname in libnames:
-            try:
-                cuda = ctypes.CDLL(libname)
-            except:
-                continue
-            else:
-                break
-        else:
-            return Devices([])
-
-        nGpus = ctypes.c_int()
-        name = b' ' * 200
-        cc_major = ctypes.c_int()
-        cc_minor = ctypes.c_int()
-        freeMem = ctypes.c_size_t()
-        totalMem = ctypes.c_size_t()
-
-        result = ctypes.c_int()
-        device = ctypes.c_int()
-        context = ctypes.c_void_p()
-        error_str = ctypes.c_char_p()
-
-        devices = []
-
-        if cuda.cuInit(0) == 0 and \
-            cuda.cuDeviceGetCount(ctypes.byref(nGpus)) == 0:
-            for i in range(nGpus.value):
-                if cuda.cuDeviceGet(ctypes.byref(device), i) != 0 or \
-                    cuda.cuDeviceGetName(ctypes.c_char_p(name), len(name), device) != 0 or \
-                    cuda.cuDeviceComputeCapability(ctypes.byref(cc_major), ctypes.byref(cc_minor), device) != 0:
-                    continue
-
-                if cuda.cuCtxCreate_v2(ctypes.byref(context), 0, device) == 0:
-                    if cuda.cuMemGetInfo_v2(ctypes.byref(freeMem), ctypes.byref(totalMem)) == 0:
-                        cc = cc_major.value * 10 + cc_minor.value
-                        if cc >= min_cc:
-                            devices.append ( Device(index=i,
-                                                    name=name.split(b'\0', 1)[0].decode(),
-                                                    total_mem=totalMem.value,
-                                                    free_mem=freeMem.value,
-                                                    cc=cc) )
-                    cuda.cuCtxDetach(context)
-
-        Devices.all_devices = Devices(devices)
-
-        return Devices.all_devices
"""

@@ -23,28 +23,13 @@ class Conv2D(nn.LayerBase):
            if padding == "SAME":
                padding = ( (kernel_size - 1) * dilations + 1 ) // 2
            elif padding == "VALID":
-                padding = 0
+                padding = None
            else:
                raise ValueError ("Wrong padding type. Should be VALID SAME or INT or 4x INTs")
-
-        if isinstance(padding, int):
-            if padding != 0:
-                if nn.data_format == "NHWC":
-                    padding = [ [0,0], [padding,padding], [padding,padding], [0,0] ]
-                else:
-                    padding = [ [0,0], [0,0], [padding,padding], [padding,padding] ]
-            else:
-                padding = None
-
-        if nn.data_format == "NHWC":
-            strides = [1,strides,strides,1]
-        else:
-            strides = [1,1,strides,strides]
-
-        if nn.data_format == "NHWC":
-            dilations = [1,dilations,dilations,1]
-        else:
-            dilations = [1,1,dilations,dilations]
+        else:
+            padding = int(padding)

        self.in_ch = in_ch
        self.out_ch = out_ch
@@ -93,10 +78,27 @@ class Conv2D(nn.LayerBase):
        if self.use_wscale:
            weight = weight * self.wscale

-        if self.padding is not None:
-            x = tf.pad (x, self.padding, mode='CONSTANT')
+        padding = self.padding
+        if padding is not None:
+            if nn.data_format == "NHWC":
+                padding = [ [0,0], [padding,padding], [padding,padding], [0,0] ]
+            else:
+                padding = [ [0,0], [0,0], [padding,padding], [padding,padding] ]
+            x = tf.pad (x, padding, mode='CONSTANT')

-        x = tf.nn.conv2d(x, weight, self.strides, 'VALID', dilations=self.dilations, data_format=nn.data_format)
+        strides = self.strides
+        if nn.data_format == "NHWC":
+            strides = [1,strides,strides,1]
+        else:
+            strides = [1,1,strides,strides]
+
+        dilations = self.dilations
+        if nn.data_format == "NHWC":
+            dilations = [1,dilations,dilations,1]
+        else:
+            dilations = [1,1,dilations,dilations]
+
+        x = tf.nn.conv2d(x, weight, strides, 'VALID', dilations=dilations, data_format=nn.data_format)

        if self.use_bias:
            if nn.data_format == "NHWC":
                bias = tf.reshape (self.bias, (1,1,1,self.out_ch) )

MsSsim.py (new file, 50 lines)

@@ -0,0 +1,50 @@
from core.leras import nn
tf = nn.tf

class MsSsim(nn.LayerBase):
    default_power_factors = (0.0448, 0.2856, 0.3001, 0.2363, 0.1333)
    default_l1_alpha = 0.84

    def __init__(self, batch_size, in_ch, resolution, kernel_size=11, use_l1=False, **kwargs):
        # restrict mssim factors to those greater/equal to kernel size
        power_factors = [p for i, p in enumerate(self.default_power_factors) if resolution//(2**i) >= kernel_size]
        # normalize power factors if reduced because of size
        if sum(power_factors) < 1.0:
            power_factors = [x/sum(power_factors) for x in power_factors]
        self.power_factors = power_factors
        self.num_scale = len(power_factors)
        self.kernel_size = kernel_size
        self.use_l1 = use_l1
        if use_l1:
            self.gaussian_weights = nn.get_gaussian_weights(batch_size, in_ch, resolution, num_scale=self.num_scale)

        super().__init__(**kwargs)

    def __call__(self, y_true, y_pred, max_val):
        # Transpose images from NCHW to NHWC
        y_true_t = tf.transpose(tf.cast(y_true, tf.float32), [0, 2, 3, 1])
        y_pred_t = tf.transpose(tf.cast(y_pred, tf.float32), [0, 2, 3, 1])
        # ssim_multiscale returns values in range [0, 1] (where 1 is completely identical)
        # subtract from 1 to get loss
        if tf.__version__ >= "1.14":
            ms_ssim_loss = 1.0 - tf.image.ssim_multiscale(y_true_t, y_pred_t, max_val, power_factors=self.power_factors, filter_size=self.kernel_size)
        else:
            ms_ssim_loss = 1.0 - tf.image.ssim_multiscale(y_true_t, y_pred_t, max_val, power_factors=self.power_factors)

        # If use L1 is enabled, use mix of ms-ssim and L1 (weighted by gaussian filters)
        # H. Zhao, O. Gallo, I. Frosio and J. Kautz, "Loss Functions for Image Restoration With Neural Networks,"
        # in IEEE Transactions on Computational Imaging, vol. 3, no. 1, pp. 47-57, March 2017,
        # doi: 10.1109/TCI.2016.2644865.
        # https://research.nvidia.com/publication/loss-functions-image-restoration-neural-networks
        if self.use_l1:
            diff = tf.tile(tf.expand_dims(tf.abs(y_true - y_pred), axis=0), multiples=[self.num_scale, 1, 1, 1, 1])
            l1_loss = tf.reduce_mean(tf.reduce_sum(self.gaussian_weights[-1, :, :, :, :] * diff, axis=[0, 3, 4]), axis=[1])
            return self.default_l1_alpha * ms_ssim_loss + (1 - self.default_l1_alpha) * l1_loss

        return ms_ssim_loss

nn.MsSsim = MsSsim
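For orientation: with `use_l1` enabled this layer returns 0.84 · (1 − MS-SSIM) + 0.16 · Gaussian-weighted L1, per the Zhao et al. paper cited in the comments. A hypothetical usage sketch (the batch size, channels and resolution are illustrative; assumes the leras environment has already been initialized as elsewhere in the codebase):

```python
from core.leras import nn
nn.initialize_main_env()
nn.initialize()                      # assumed: sets up nn.tf and the session
tf = nn.tf

ms_ssim = nn.MsSsim(batch_size=4, in_ch=3, resolution=128, use_l1=True)

# NCHW float tensors with values in [0,1]
y_true = tf.placeholder(tf.float32, (4, 3, 128, 128))
y_pred = tf.placeholder(tf.float32, (4, 3, 128, 128))

loss = ms_ssim(y_true, y_pred, 1.0)  # max_val=1.0 because images are normalized to [0,1]
```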

@@ -14,3 +14,4 @@ from .TLU import *
from .ScaleAdd import *
from .DenseNorm import *
from .AdaIN import *
from .MsSsim import *

@@ -195,3 +195,117 @@ class UNetPatchDiscriminator(nn.ModelBase):
        return center_out, self.out_conv(x)

nn.UNetPatchDiscriminator = UNetPatchDiscriminator

class UNetPatchDiscriminatorV2(nn.ModelBase):
    """
    Inspired by https://arxiv.org/abs/2002.12655 "A U-Net Based Discriminator for Generative Adversarial Networks"
    """
    def calc_receptive_field_size(self, layers):
        """
        result the same as https://fomoro.com/research/article/receptive-field-calculatorindex.html
        """
        rf = 0
        ts = 1
        for i, (k, s) in enumerate(layers):
            if i == 0:
                rf = k
            else:
                rf += (k-1)*ts
            ts *= s
        return rf

    def find_archi(self, target_patch_size, max_layers=6):
        """
        Find the best configuration of layers using only 3x3 convs for target patch size
        """
        s = {}
        for layers_count in range(1,max_layers+1):
            val = 1 << (layers_count-1)
            while True:
                val -= 1
                layers = []
                sum_st = 0
                for i in range(layers_count-1):
                    st = 1 + (1 if val & (1 << i) !=0 else 0 )
                    layers.append ( [3, st ])
                    sum_st += st
                layers.append ( [3, 2])
                sum_st += 2

                rf = self.calc_receptive_field_size(layers)

                s_rf = s.get(rf, None)
                if s_rf is None:
                    s[rf] = (layers_count, sum_st, layers)
                else:
                    if layers_count < s_rf[0] or \
                       ( layers_count == s_rf[0] and sum_st > s_rf[1] ):
                        s[rf] = (layers_count, sum_st, layers)

                if val == 0:
                    break

        x = sorted(list(s.keys()))
        q = x[np.abs(np.array(x)-target_patch_size).argmin()]
        return s[q][2]

    def on_build(self, patch_size, in_ch):
        class ResidualBlock(nn.ModelBase):
            def on_build(self, ch, kernel_size=3 ):
                self.conv1 = nn.Conv2D( ch, ch, kernel_size=kernel_size, padding='SAME')
                self.conv2 = nn.Conv2D( ch, ch, kernel_size=kernel_size, padding='SAME')

            def forward(self, inp):
                x = self.conv1(inp)
                x = tf.nn.leaky_relu(x, 0.2)
                x = self.conv2(x)
                x = tf.nn.leaky_relu(inp + x, 0.2)
                return x

        prev_ch = in_ch
        self.convs = []
        self.res = []
        self.upconvs = []
        self.upres = []
        layers = self.find_archi(patch_size)

        base_ch = 16

        level_chs = { i-1:v for i,v in enumerate([ min( base_ch * (2**i), 512 ) for i in range(len(layers)+1)]) }

        self.in_conv = nn.Conv2D( in_ch, level_chs[-1], kernel_size=1, padding='VALID')

        for i, (kernel_size, strides) in enumerate(layers):
            self.convs.append ( nn.Conv2D( level_chs[i-1], level_chs[i], kernel_size=kernel_size, strides=strides, padding='SAME') )
            self.res.append ( ResidualBlock(level_chs[i]) )
            self.upconvs.insert (0, nn.Conv2DTranspose( level_chs[i]*(2 if i != len(layers)-1 else 1), level_chs[i-1], kernel_size=kernel_size, strides=strides, padding='SAME') )
            self.upres.insert (0, ResidualBlock(level_chs[i-1]*2) )

        self.out_conv = nn.Conv2D( level_chs[-1]*2, 1, kernel_size=1, padding='VALID')

        self.center_out = nn.Conv2D( level_chs[len(layers)-1], 1, kernel_size=1, padding='VALID')
        self.center_conv = nn.Conv2D( level_chs[len(layers)-1], level_chs[len(layers)-1], kernel_size=1, padding='VALID')

    def forward(self, x):
        x = tf.nn.leaky_relu( self.in_conv(x), 0.1 )

        encs = []
        for conv, res in zip(self.convs, self.res):
            encs.insert(0, x)
            x = tf.nn.leaky_relu( conv(x), 0.1 )
            x = res(x)

        center_out, x = self.center_out(x), self.center_conv(x)

        for i, (upconv, enc, upres) in enumerate(zip(self.upconvs, encs, self.upres)):
            x = tf.nn.leaky_relu( upconv(x), 0.1 )
            x = tf.concat( [enc, x], axis=nn.conv2d_ch_axis)
            x = upres(x)

        return center_out, self.out_conv(x)

nn.UNetPatchDiscriminatorV2 = UNetPatchDiscriminatorV2
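As a sanity check on `calc_receptive_field_size`: it accumulates `(k-1)*ts` per layer, so two stride-2 3x3 convs give a receptive field of 3 + (3-1)*2 = 7. A standalone restatement of the same arithmetic for that concrete case:

```python
# Re-statement of calc_receptive_field_size for a concrete layer list.
layers = [[3, 2], [3, 2]]      # [kernel, stride] pairs: two 3x3 convs, stride 2
rf, ts = 0, 1
for i, (k, s) in enumerate(layers):
    rf = k if i == 0 else rf + (k - 1) * ts
    ts *= s
print(rf)                      # -> 7
```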

@@ -29,10 +29,11 @@ class XSeg(nn.ModelBase):
                x = self.tlu(x)
                return x

        self.base_ch = base_ch

        self.conv01 = ConvBlock(in_ch, base_ch)
        self.conv02 = ConvBlock(base_ch, base_ch)
-        self.bp0 = nn.BlurPool (filt_size=3)
+        self.bp0 = nn.BlurPool (filt_size=4)

        self.conv11 = ConvBlock(base_ch, base_ch*2)
        self.conv12 = ConvBlock(base_ch*2, base_ch*2)
@@ -40,19 +41,30 @@ class XSeg(nn.ModelBase):
        self.conv21 = ConvBlock(base_ch*2, base_ch*4)
        self.conv22 = ConvBlock(base_ch*4, base_ch*4)
-        self.conv23 = ConvBlock(base_ch*4, base_ch*4)
-        self.bp2 = nn.BlurPool (filt_size=3)
+        self.bp2 = nn.BlurPool (filt_size=2)

        self.conv31 = ConvBlock(base_ch*4, base_ch*8)
        self.conv32 = ConvBlock(base_ch*8, base_ch*8)
        self.conv33 = ConvBlock(base_ch*8, base_ch*8)
-        self.bp3 = nn.BlurPool (filt_size=3)
+        self.bp3 = nn.BlurPool (filt_size=2)

        self.conv41 = ConvBlock(base_ch*8, base_ch*8)
        self.conv42 = ConvBlock(base_ch*8, base_ch*8)
        self.conv43 = ConvBlock(base_ch*8, base_ch*8)
-        self.bp4 = nn.BlurPool (filt_size=3)
+        self.bp4 = nn.BlurPool (filt_size=2)

+        self.conv51 = ConvBlock(base_ch*8, base_ch*8)
+        self.conv52 = ConvBlock(base_ch*8, base_ch*8)
+        self.conv53 = ConvBlock(base_ch*8, base_ch*8)
+        self.bp5 = nn.BlurPool (filt_size=2)
+
+        self.dense1 = nn.Dense ( 4*4* base_ch*8, 512)
+        self.dense2 = nn.Dense ( 512, 4*4* base_ch*8)
+
+        self.up5 = UpConvBlock (base_ch*8, base_ch*4)
+        self.uconv53 = ConvBlock(base_ch*12, base_ch*8)
+        self.uconv52 = ConvBlock(base_ch*8, base_ch*8)
+        self.uconv51 = ConvBlock(base_ch*8, base_ch*8)

        self.up4 = UpConvBlock (base_ch*8, base_ch*4)
        self.uconv43 = ConvBlock(base_ch*12, base_ch*8)
@@ -65,8 +77,7 @@ class XSeg(nn.ModelBase):
        self.uconv31 = ConvBlock(base_ch*8, base_ch*8)

        self.up2 = UpConvBlock (base_ch*8, base_ch*4)
-        self.uconv23 = ConvBlock(base_ch*8, base_ch*4)
-        self.uconv22 = ConvBlock(base_ch*4, base_ch*4)
+        self.uconv22 = ConvBlock(base_ch*8, base_ch*4)
        self.uconv21 = ConvBlock(base_ch*4, base_ch*4)

        self.up1 = UpConvBlock (base_ch*4, base_ch*2)
@@ -78,7 +89,6 @@ class XSeg(nn.ModelBase):
        self.uconv01 = ConvBlock(base_ch, base_ch)
        self.out_conv = nn.Conv2D (base_ch, out_ch, kernel_size=3, padding='SAME')
-        self.conv_center = ConvBlock(base_ch*8, base_ch*8)

    def forward(self, inp):
        x = inp
@@ -92,8 +102,7 @@ class XSeg(nn.ModelBase):
        x = self.bp1(x)

        x = self.conv21(x)
-        x = self.conv22(x)
-        x = x2 = self.conv23(x)
+        x = x2 = self.conv22(x)
        x = self.bp2(x)

        x = self.conv31(x)
@@ -106,7 +115,20 @@ class XSeg(nn.ModelBase):
        x = x4 = self.conv43(x)
        x = self.bp4(x)

-        x = self.conv_center(x)
+        x = self.conv51(x)
+        x = self.conv52(x)
+        x = x5 = self.conv53(x)
+        x = self.bp5(x)
+
+        x = nn.flatten(x)
+        x = self.dense1(x)
+        x = self.dense2(x)
+        x = nn.reshape_4D (x, 4, 4, self.base_ch*8 )
+
+        x = self.up5(x)
+        x = self.uconv53(tf.concat([x,x5],axis=nn.conv2d_ch_axis))
+        x = self.uconv52(x)
+        x = self.uconv51(x)

        x = self.up4(x)
        x = self.uconv43(tf.concat([x,x4],axis=nn.conv2d_ch_axis))
@@ -119,8 +141,7 @@ class XSeg(nn.ModelBase):
        x = self.uconv31(x)

        x = self.up2(x)
-        x = self.uconv23(tf.concat([x,x2],axis=nn.conv2d_ch_axis))
-        x = self.uconv22(x)
+        x = self.uconv22(tf.concat([x,x2],axis=nn.conv2d_ch_axis))
        x = self.uconv21(x)

        x = self.up1(x)

@@ -33,7 +33,7 @@ class nn():
    tf = None
    tf_sess = None
    tf_sess_config = None
-    tf_default_device = None
+    tf_default_device_name = None

    data_format = None
    conv2d_ch_axis = None
@@ -51,9 +51,6 @@ class nn():

            # Manipulate environment variables before import tensorflow

-            if 'CUDA_VISIBLE_DEVICES' in os.environ.keys():
-                os.environ.pop('CUDA_VISIBLE_DEVICES')
-
            first_run = False
            if len(device_config.devices) != 0:
                if sys.platform[0:3] == 'win':
@@ -68,22 +65,19 @@ class nn():
                    compute_cache_path = Path(os.environ['APPDATA']) / 'NVIDIA' / ('ComputeCache' + devices_str)
                    if not compute_cache_path.exists():
                        first_run = True
+                        compute_cache_path.mkdir(parents=True, exist_ok=True)
                    os.environ['CUDA_CACHE_PATH'] = str(compute_cache_path)

-            os.environ['TF_MIN_GPU_MULTIPROCESSOR_COUNT'] = '2'
-            os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # tf log errors only
-
            if first_run:
                io.log_info("Caching GPU kernels...")

            import tensorflow

-            tf_version = getattr(tensorflow,'VERSION', None)
-            if tf_version is None:
-                tf_version = tensorflow.version.GIT_VERSION
+            tf_version = tensorflow.version.VERSION
+            #if tf_version is None:
+            #    tf_version = tensorflow.version.GIT_VERSION
            if tf_version[0] == 'v':
                tf_version = tf_version[1:]
            if tf_version[0] == '2':
                tf = tensorflow.compat.v1
            else:
@@ -108,11 +102,12 @@ class nn():

            # Configure tensorflow session-config
            if len(device_config.devices) == 0:
-                nn.tf_default_device = "/CPU:0"
                config = tf.ConfigProto(device_count={'GPU': 0})
+                nn.tf_default_device_name = '/CPU:0'
            else:
-                nn.tf_default_device = "/GPU:0"
-                config = tf.ConfigProto()
+                nn.tf_default_device_name = f'/{device_config.devices[0].tf_dev_type}:0'
+
+                config = tf.ConfigProto(allow_soft_placement=True)
                config.gpu_options.visible_device_list = ','.join([str(device.index) for device in device_config.devices])

            config.gpu_options.force_gpu_compatible = True
@@ -202,14 +197,6 @@ class nn():
            nn.tf_sess.close()
            nn.tf_sess = None

-    @staticmethod
-    def get_current_device():
-        # Undocumented access to last tf.device(...)
-        objs = nn.tf.get_default_graph()._device_function_stack.peek_objs()
-        if len(objs) != 0:
-            return objs[0].display_name
-        return nn.tf_default_device
-
    @staticmethod
    def ask_choose_device_idxs(choose_only_one=False, allow_cpu=True, suggest_best_multi_gpu=False, suggest_all_gpu=False):
        devices = Devices.getDevices()

@@ -204,7 +204,7 @@ def random_binomial(shape, p=0.0, dtype=None, seed=None):
        seed = np.random.randint(10e6)
    return array_ops.where(
        random_ops.random_uniform(shape, dtype=tf.float16, seed=seed) < p,
        array_ops.ones(shape, dtype=dtype), array_ops.zeros(shape, dtype=dtype))
nn.random_binomial = random_binomial

def gaussian_blur(input, radius=2.0):
@@ -237,6 +237,19 @@ def gaussian_blur(input, radius=2.0):
    return x
nn.gaussian_blur = gaussian_blur

def get_gaussian_weights(batch_size, in_ch, resolution, num_scale=5, sigma=(0.5, 1., 2., 4., 8.)):
    w = np.empty((num_scale, batch_size, in_ch, resolution, resolution))
    for i in range(num_scale):
        gaussian = np.exp(-1.*np.arange(-(resolution/2-0.5), resolution/2+0.5)**2/(2*sigma[i]**2))
        gaussian = np.outer(gaussian, gaussian.reshape((resolution, 1)))   # extend to 2D
        gaussian = gaussian/np.sum(gaussian)                               # normalization
        gaussian = np.reshape(gaussian, (1, 1, resolution, resolution))    # reshape to 4D
        gaussian = np.tile(gaussian, (batch_size, in_ch, 1, 1))
        w[i, :, :, :, :] = gaussian
    return w
nn.get_gaussian_weights = get_gaussian_weights

def style_loss(target, style, gaussian_blur_radius=0.0, loss_weight=1.0, step_size=1):
    def sd(content, style, loss_weight):
        content_nc = content.shape[ nn.conv2d_ch_axis ]
@@ -333,7 +346,9 @@ def depth_to_space(x, size):
            x = tf.reshape(x, (-1, oh, ow, oc, ))
        return x
    else:
-        return tf.depth_to_space(x, size, data_format=nn.data_format)
+        cfg = nn.getCurrentDeviceConfig()
+        if not cfg.cpu_only:
+            return tf.depth_to_space(x, size, data_format=nn.data_format)
        b,c,h,w = x.shape.as_list()
        oh, ow = h * size, w * size
        oc = c // (size * size)
@@ -344,11 +359,6 @@ def depth_to_space(x, size):
        return x
nn.depth_to_space = depth_to_space

-def pixel_norm(x, power = 1.0):
-    return x * power * tf.rsqrt(tf.reduce_mean(tf.square(x), axis=nn.conv2d_spatial_axes, keepdims=True) + 1e-06)
-nn.pixel_norm = pixel_norm
-
def rgb_to_lab(srgb):
    srgb_pixels = tf.reshape(srgb, [-1, 3])
    linear_mask = tf.cast(srgb_pixels <= 0.04045, dtype=tf.float32)
@@ -391,6 +401,11 @@ def total_variation_mse(images):
    return tot_var
nn.total_variation_mse = total_variation_mse

+def pixel_norm(x, axes):
+    return x * tf.rsqrt(tf.reduce_mean(tf.square(x), axis=axes, keepdims=True) + 1e-06)
+nn.pixel_norm = pixel_norm
+
"""
def tf_suppress_lower_mean(t, eps=0.00001):
    if t.shape.ndims != 1:

View file

@ -1,7 +1,12 @@
import math
import cv2
import numpy as np
import numpy.linalg as npla

from .umeyama import umeyama
def get_power_of_two(x):
    i = 0
    while (1 << i) < x:
@ -23,3 +28,70 @@ def rotationMatrixToEulerAngles(R) :
def polygon_area(x,y):
    return 0.5*np.abs(np.dot(x,np.roll(y,1))-np.dot(y,np.roll(x,1)))
def rotate_point(origin, point, deg):
    """
    Rotate a point counterclockwise by a given angle around a given origin.
    The angle should be given in degrees; it is converted to radians internally.
    """
    ox, oy = origin
    px, py = point

    rad = deg * math.pi / 180.0
    qx = ox + math.cos(rad) * (px - ox) - math.sin(rad) * (py - oy)
    qy = oy + math.sin(rad) * (px - ox) + math.cos(rad) * (py - oy)
    return np.float32([qx, qy])
def transform_points(points, mat, invert=False):
    if invert:
        mat = cv2.invertAffineTransform (mat)
    points = np.expand_dims(points, axis=1)
    points = cv2.transform(points, mat, points.shape)
    points = np.squeeze(points)
    return points
def transform_mat(mat, res, tx, ty, rotation, scale):
    """
    transform mat in local space of res
    scale -> translate -> rotate

        tx, ty     float
        rotation   int degrees
        scale      float
    """
    lt, rt, lb, ct = transform_points ( np.float32([(0,0),(res,0),(0,res),(res / 2, res/2) ]),mat, True)

    hor_v = (rt-lt).astype(np.float32)
    hor_size = npla.norm(hor_v)
    hor_v /= hor_size

    ver_v = (lb-lt).astype(np.float32)
    ver_size = npla.norm(ver_v)
    ver_v /= ver_size

    bt_diag_vec = (rt-ct).astype(np.float32)
    half_diag_len = npla.norm(bt_diag_vec)
    bt_diag_vec /= half_diag_len

    tb_diag_vec = np.float32( [ -bt_diag_vec[1], bt_diag_vec[0] ] )

    rt = ct + bt_diag_vec*half_diag_len*scale
    lb = ct - bt_diag_vec*half_diag_len*scale
    lt = ct - tb_diag_vec*half_diag_len*scale

    rt[0] += tx*hor_size
    lb[0] += tx*hor_size
    lt[0] += tx*hor_size

    rt[1] += ty*ver_size
    lb[1] += ty*ver_size
    lt[1] += ty*ver_size

    rt = rotate_point(ct, rt, rotation)
    lb = rotate_point(ct, lb, rotation)
    lt = rotate_point(ct, lt, rotation)

    return cv2.getAffineTransform( np.float32([lt, rt, lb]), np.float32([ [0,0], [res,0], [0,res] ]) )


doc/dfl_cover.png (new binary file, 326 KiB, not shown)


@ -0,0 +1,32 @@
# Background Power option
Allows you to train the model to include the background, which may help with areas around the mask.
Unlike **Background Style Power**, this does not use any additional VRAM, and does not require lowering the batch size.
- [DESCRIPTION](#description)
- [USAGE](#usage)
- [DIFFERENCE WITH BACKGROUND STYLE POWER](#difference-with-background-style-power)
*Examples trained with background power `0.3`:*
![](example.jpeg)![](example2.jpeg)
## DESCRIPTION
Applies the same loss calculation used for the area *inside* the mask to the area *outside* the mask, multiplied by
the chosen background power value.
E.g. (simplified): Source Loss = Masked area image difference + Background Power * Non-masked area image difference
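For illustration only, here is a minimal sketch of that combined loss (hypothetical tensor names, using plain MSE; the
actual implementation lives in the model code and uses whichever loss function is configured):
```python
import numpy as np

def combined_loss(target, predicted, mask, bg_power=0.3):
    # Loss inside the mask (the face area)...
    masked = np.mean((mask * (target - predicted)) ** 2)
    # ...plus the same loss outside the mask, scaled down by the background power.
    background = np.mean(((1.0 - mask) * (target - predicted)) ** 2)
    return masked + bg_power * background
```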
## USAGE
`[0.0] Background power ( 0.0..1.0 ?:help ) : 0.3`
## DIFFERENCE WITH BACKGROUND STYLE POWER
**Background Style Power** applies a loss to the source by comparing the background of the dest to that of the
predicted src/dest (5th column). This operation requires additional VRAM, due to the fact that the predicted src/dest
outputs are not normally used in training (other than being viewable in the preview window).
**Background Power** does *not* use the src/dest images whatsoever, instead comparing the background of the predicted
source to that of the original source, and the same for the background of the dest images.

(binary image added, 129 KiB, not shown)

(binary image added, 121 KiB, not shown)


@ -0,0 +1,50 @@
# GAN Options
Allows you to use one-sided label smoothing and noisy labels when training the discriminator.
- [ONE-SIDED LABEL SMOOTHING](#one-sided-label-smoothing)
- [NOISY LABELS](#noisy-labels)
## ONE-SIDED LABEL SMOOTHING
![](tutorial-on-theory-and-application-of-generative-adversarial-networks-54-638.jpg)
> Deep networks may suffer from overconfidence. For example, it uses very few features to classify an object. To
> mitigate the problem, deep learning uses regulation and dropout to avoid overconfidence.
>
> In GAN, if the discriminator depends on a small set of features to detect real images, the generator may just produce
> these features only to exploit the discriminator. The optimization may turn too greedy and produces no long term
> benefit. In GAN, overconfidence hurts badly. To avoid the problem, we penalize the discriminator when the prediction
> for any real images go beyond 0.9 (D(real image)>0.9). This is done by setting our target label value to be 0.9
> instead of 1.0.
- [GAN — Ways to improve GAN performance](https://towardsdatascience.com/gan-ways-to-improve-gan-performance-acf37f9f59b)
By setting the label smoothing value to any value > 0, the target label value used with the discriminator will be:
```
target label value = 1 - (label smoothing value)
```
### USAGE
```
[0.1] GAN label smoothing ( 0 - 0.5 ?:help ) : 0.1
```
## NOISY LABELS
> make the labels noisy for the discriminator: occasionally flip the labels when training the discriminator
- [How to Train a GAN? Tips and tricks to make GANs work](https://github.com/soumith/ganhacks/blob/master/README.md#6-use-soft-and-noisy-labels)
By setting the noisy labels value to any value > 0, then the target labels used with the discriminator will be flipped
("fake" => "real" / "real" => "fake") with probability p (where p is the noisy label value).
E.g., if the value is 0.05, then ~5% of the labels will be flipped when training the discriminator
### USAGE
```
[0.05] GAN noisy labels ( 0 - 0.5 ?:help ) : 0.05
```
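Purely as a sketch (a hypothetical helper, not the actual DFL code), the two options together amount to building the
discriminator's target labels like this:
```python
import numpy as np

def discriminator_targets(batch_size, label_smoothing=0.1, noisy_labels=0.05):
    # One-sided smoothing: "real" targets become 1 - smoothing; "fake" targets stay 0.
    real = np.full((batch_size, 1), 1.0 - label_smoothing, dtype=np.float32)
    fake = np.zeros((batch_size, 1), dtype=np.float32)
    # Noisy labels: swap the real/fake targets with probability p.
    flip = np.random.random_sample((batch_size, 1)) < noisy_labels
    return np.where(flip, fake, real), np.where(flip, real, fake)
```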

(binary image added, 62 KiB, not shown)


@ -0,0 +1,43 @@
# Multiscale SSIM (MS-SSIM)
Allows you to train using the MS-SSIM (multiscale structural similarity index measure) as the main loss metric,
a perceptually more accurate measure of image quality than MSE (mean squared error).
As an added benefit, you may see a decrease in ms/iteration (when using the same batch size) with Multiscale loss
enabled. You may also be able to train with a larger batch size with it enabled.
- [DESCRIPTION](#description)
- [USAGE](#usage)
## DESCRIPTION
[SSIM](https://en.wikipedia.org/wiki/Structural_similarity) is a metric for comparing the perceptual quality of an image:
> SSIM is a perception-based model that considers image degradation as perceived change in structural information,
> while also incorporating important perceptual phenomena, including both luminance masking and contrast masking terms.
> [...]
> Structural information is the idea that the pixels have strong inter-dependencies especially when they are spatially
> close. These dependencies carry important information about the structure of the objects in the visual scene.
> Luminance masking is a phenomenon whereby image distortions (in this context) tend to be less visible in bright
> regions, while contrast masking is a phenomenon whereby distortions become less visible where there is significant
> activity or "texture" in the image.
The current loss metric is a combination of SSIM (structural similarity index measure) and
[MSE](https://en.wikipedia.org/wiki/Mean_squared_error) (mean squared error).
[Multiscale SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Multi-Scale_SSIM) is a variant of SSIM that
improves upon SSIM by comparing the similarity at multiple scales (e.g.: full-size, half-size, 1/4 size, etc.)
By using MS-SSIM as our main loss metric, we should expect the image similarity to improve across each scale, improving
both the large scale and small scale detail of the predicted images.
Original paper: [Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik.
"Multiscale structural similarity for image quality assessment."
Signals, Systems and Computers, 2004.](https://www.cns.nyu.edu/pub/eero/wang03b.pdf)
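As a rough sketch, an MS-SSIM loss can be built from TensorFlow's stock `tf.image.ssim_multiscale` (shown only to
illustrate the idea; this is not necessarily the implementation used by this option):
```python
import tensorflow as tf

def ms_ssim_loss(y_true, y_pred):
    # MS-SSIM is a similarity score in [0, 1]; subtract from 1 to turn it into a loss.
    # Expects NHWC float tensors scaled to [0, 1].
    return 1.0 - tf.reduce_mean(tf.image.ssim_multiscale(y_true, y_pred, max_val=1.0))
```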
## USAGE
```
[n] Use multiscale loss? ( y/n ?:help ) : y
```


@ -0,0 +1,25 @@
# Random Color option
Helps train the model to generalize perceptual color and lightness, and improves color transfer between src and dst.
- [DESCRIPTION](#description)
- [USAGE](#usage)
![](example.jpeg)
## DESCRIPTION
Converts images to [CIE L\*a\*b* colorspace](https://en.wikipedia.org/wiki/CIELAB_color_space),
and then randomly rotates around the `L*` axis. While the perceptual lightness stays constant, only the `a*` and `b*`
color channels are modified. After rotation, converts back to BGR (blue/green/red) colorspace.
If visualized using the [CIE L\*a\*b* cylindrical model](https://en.wikipedia.org/wiki/CIELAB_color_space#Cylindrical_model),
this is a random rotation of `h°` (hue angle, angle of the hue in the CIELAB color wheel),
maintaining the same `C*` (chroma, relative saturation).
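A minimal sketch of the idea (using OpenCV's 8-bit L\*a\*b* conversion, in which `a*`/`b*` are stored offset by 128; not
the exact DFL implementation):
```python
import cv2
import numpy as np

def random_color_rotation(img_bgr):
    # Rotate the a*/b* color channels by a random hue angle; L* is left untouched.
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    angle = np.random.uniform(-np.pi, np.pi)
    cos_a, sin_a = np.cos(angle), np.sin(angle)
    a, b = lab[..., 1] - 128.0, lab[..., 2] - 128.0
    lab[..., 1] = cos_a * a - sin_a * b + 128.0
    lab[..., 2] = sin_a * a + cos_a * b + 128.0
    lab = np.clip(lab, 0, 255).astype(np.uint8)
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```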
## USAGE
```
[n] Random color ( y/n ?:help ) : y
```

(binary image added, 133 KiB, not shown)


@ -0,0 +1,45 @@
# Web UI
View and interact with the training preview window with your web browser.
Allows you to view and control the preview remotely, and train on headless machines.
- [INSTALLATION](#installation)
- [DESCRIPTION](#description)
- [USAGE](#usage)
- [SSH PORT FORWARDING](#ssh-port-forwarding)
![](example.png)
## INSTALLATION
Requires additional Python dependencies to be installed:
- [Flask](https://palletsprojects.com/p/flask/),
version [1.1.1](https://pypi.org/project/Flask/1.1.1/)
- [Flask-SocketIO](https://github.com/miguelgrinberg/Flask-SocketIO/),
version [4.2.1](https://pypi.org/project/Flask-SocketIO/4.2.1/)
```
pip install Flask==1.1.1
pip install Flask-SocketIO==4.2.1
```
## DESCRIPTION
Launches a Flask web application which sends commands to the training thread
(save/exit/fetch new preview, etc.), and displays live updates for the log output
e.g.: `[09:50:53][#106913][0503ms][0.3109][0.2476]`, and updates the graph/preview image.
## USAGE
Enable the Web UI by appending `--flask-preview` to the `train` command.
Once training begins, the Web UI will start, and can be accessed at http://localhost:5000/
## SSH PORT FORWARDING
When running on a remote/headless box, view the Web UI in your local browser simply by
adding the ssh option `-L 5000:localhost:5000`. Once connected, the Web UI can be viewed
locally at http://localhost:5000/
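For example (hypothetical user and host names):
```
ssh -L 5000:localhost:5000 user@remote-host
```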
Several Android/iOS SSH apps (such as [JuiceSSH](https://juicessh.com/))
exist which support port forwarding, allowing you to interact with the preview pane
from anywhere with your phone.

(binary image added, 2.1 MiB, not shown)


@ -0,0 +1,5 @@
# Example of bug:
![](preview_image_bug.jpeg)
# Demonstration of fix:
![](preview_image_fix.jpeg)

(binary image added, 112 KiB, not shown)

(binary image added, 99 KiB, not shown)

doc/logo_directx.png (new binary file, 25 KiB, not shown)


@ -161,11 +161,11 @@ class FaceEnhancer(object):
        if not model_path.exists():
            raise Exception("Unable to load FaceEnhancer.npy")

        with tf.device ('/CPU:0' if place_model_on_cpu else nn.tf_default_device_name):
            self.model = FaceEnhancer()
            self.model.load_weights (model_path)

        with tf.device ('/CPU:0' if run_on_cpu else nn.tf_default_device_name):
            self.model.build_for_run ([ (tf.float32, nn.get4Dshape (192,192,3) ),
                                        (tf.float32, (None,1,) ),
                                        (tf.float32, (None,1,) ),


@ -39,7 +39,7 @@ class XSegNet(object):
            self.target_t = tf.placeholder (nn.floatx, nn.get4Dshape(resolution,resolution,1) )

        # Initializing model classes
        with tf.device ('/CPU:0' if place_model_on_cpu else nn.tf_default_device_name):
            self.model = nn.XSeg(3, 32, 1, name=name)
            self.model_weights = self.model.get_weights()
            if training:
@ -53,7 +53,7 @@ class XSegNet(object):
            self.model_filename_list += [ [self.model, f'{model_name}.npy'] ]

        if not training:
            with tf.device ('/CPU:0' if run_on_cpu else nn.tf_default_device_name):
                _, pred = self.model(self.input_t)

            def net_run(input_np):

flaskr/__init__.py (new, empty file)

flaskr/app.py (new file, 102 lines)

@ -0,0 +1,102 @@
from pathlib import Path

from flask import Flask, send_file, Response, render_template, render_template_string, request, g
from flask_socketio import SocketIO, emit
import logging


def create_flask_app(s2c, c2s, s2flask, kwargs):
    app = Flask(__name__, template_folder="templates", static_folder="static")
    log = logging.getLogger('werkzeug')
    log.disabled = True

    model_path = Path(kwargs.get('saved_models_path', ''))
    filename = 'preview.png'
    preview_file = str(model_path / filename)

    def gen():
        frame = open(preview_file, 'rb').read()
        while True:
            try:
                frame = open(preview_file, 'rb').read()
            except:
                pass
            yield b'--frame\r\nContent-Type: image/png\r\n\r\n'
            yield frame
            yield b'\r\n\r\n'

    def send(queue, op):
        queue.put({'op': op})

    def send_and_wait(queue, op):
        while not s2flask.empty():
            s2flask.get()
        queue.put({'op': op})
        while s2flask.empty():
            pass
        s2flask.get()

    @app.route('/save', methods=['POST'])
    def save():
        send(s2c, 'save')
        return '', 204

    @app.route('/exit', methods=['POST'])
    def exit():
        send(c2s, 'close')
        request.environ.get('werkzeug.server.shutdown')()
        return '', 204

    @app.route('/update', methods=['POST'])
    def update():
        send(c2s, 'update')
        return '', 204

    @app.route('/next_preview', methods=['POST'])
    def next_preview():
        send(c2s, 'next_preview')
        return '', 204

    @app.route('/change_history_range', methods=['POST'])
    def change_history_range():
        send(c2s, 'change_history_range')
        return '', 204

    @app.route('/zoom_prev', methods=['POST'])
    def zoom_prev():
        send(c2s, 'zoom_prev')
        return '', 204

    @app.route('/zoom_next', methods=['POST'])
    def zoom_next():
        send(c2s, 'zoom_next')
        return '', 204

    @app.route('/')
    def index():
        return render_template('index.html')

    # @app.route('/preview_image')
    # def preview_image():
    #     return Response(gen(), mimetype='multipart/x-mixed-replace;boundary=frame')

    @app.route('/preview_image')
    def preview_image():
        return send_file(preview_file, mimetype='image/png', cache_timeout=-1)

    socketio = SocketIO(app)

    @socketio.on('connect', namespace='/')
    def test_connect():
        emit('my response', {'data': 'Connected'})

    @socketio.on('disconnect', namespace='/test')
    def test_disconnect():
        print('Client disconnected')

    return socketio, app

flaskr/static/favicon.ico (new binary file, 284 KiB, not shown)


@ -0,0 +1,95 @@
<head>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js"
          integrity="sha256-CSXorXvZcTkaix6Yvo6HppcZGetbYMGWSFlBw8HfCJo="
          crossorigin="anonymous"></script>
  <script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.2.0/socket.io.js"
          integrity="sha256-yr4fRk/GU1ehYJPAs8P4JlTgu0Hdsp4ZKrx8bDEDC3I="
          crossorigin="anonymous"></script>
  <link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons">
  <link rel="stylesheet" href="https://code.getmdl.io/1.3.0/material.indigo-pink.min.css">
  <script defer src="https://code.getmdl.io/1.3.0/material.min.js"></script>

  <title>Training Preview</title>
  <link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">

  <script type="text/javascript">
    $(function() {
      const socket = io.connect();

      socket.on('preview', function(msg) {
        console.log(msg);
        $('img#preview').attr("src", "{{ url_for('preview_image') }}?q=" + new Date().getTime());
      });

      socket.on('loss', function(loss_string) {
        console.log(loss_string);
        $('div#loss').html(loss_string);
      });

      function save() {
        $.post("{{ url_for('save') }}");
      }

      function exit() {
        $.post("{{ url_for('exit') }}");
        socket.close();
      }

      function update() {
        $.post("{{ url_for('update') }}");
      }

      function next_preview() {
        $.post("{{ url_for('next_preview') }}");
      }

      function change_history_range() {
        $.post("{{ url_for('change_history_range') }}");
      }

      function zoom_prev() {
        $.post("{{ url_for('zoom_prev') }}");
      }

      function zoom_next() {
        $.post("{{ url_for('zoom_next') }}");
      }

      $(document).keypress(function (event) {
        switch (event.key) {
          case "s" : save(); break;
          case "Enter" : exit(); break;
          case "p" : update(); break;
          case " " : next_preview(); break;
          case "l" : change_history_range(); break;
          case "-" : zoom_prev(); break;
          case "=" : zoom_next(); break;
        }
        // console.log('kp:', event);
      });

      $('button#save').click(save);
      $('button#exit').click(exit);
      $('button#update').click(update);
      $('button#next_preview').click(next_preview);
      $('button#change_history_range').click(change_history_range);
      $('button#zoom_prev').click(zoom_prev);
      $('button#zoom_next').click(zoom_next);
      $('img#preview').click(update);
    });
  </script>
</head>
<body>
<div class="mdl-typography--headline">Training Preview</div>
<div id="loss"></div>
<div>
  <button class='mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect' id='save'>Save</button>
  <button class='mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect' id='exit'>Exit</button>
  <button class='mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect' id='update'>Update</button>
  <button class='mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect' id='next_preview'>Next preview</button>
  <button class='mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect' id='change_history_range'>Change History Range</button>
  <button class='mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect' id='zoom_prev'>Zoom -</button>
  <button class='mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect' id='zoom_next'>Zoom +</button>
</div>
<img id='preview' src="{{ url_for('preview_image') }}" style="max-width: 100%">
</body>
</html>

main.py (18 changed lines)

@ -128,7 +128,9 @@ if __name__ == "__main__":
            'execute_programs' : [ [int(x[0]), x[1] ] for x in arguments.execute_program ],
            'debug' : arguments.debug,
            'tensorboard_dir' : arguments.tensorboard_dir,
            'start_tensorboard' : arguments.start_tensorboard,
            'dump_ckpt' : arguments.dump_ckpt,
            'flask_preview' : arguments.flask_preview,
        }
        from mainscripts import Trainer
        Trainer.main(**kwargs)
@ -150,6 +152,10 @@ if __name__ == "__main__":
    p.add_argument('--start-tensorboard', action="store_true", dest="start_tensorboard", default=False, help="Automatically start the tensorboard server preconfigured to the tensorboard-logdir")
    p.add_argument('--dump-ckpt', action="store_true", dest="dump_ckpt", default=False, help="Dump the model to ckpt format.")
    p.add_argument('--flask-preview', action="store_true", dest="flask_preview", default=False,
                   help="Launches a flask server to view the previews in a web browser")
    p.add_argument('--execute-program', dest="execute_program", default=[], action='append', nargs='+')
    p.set_defaults (func=process_train)
@ -257,6 +263,16 @@ if __name__ == "__main__":
    p.set_defaults(func=process_faceset_enhancer)

    p = facesettool_parser.add_parser ("resize", help="Resize DFL faceset.")
    p.add_argument('--input-dir', required=True, action=fixPathAction, dest="input_dir", help="Input directory of aligned faces.")

    def process_faceset_resizer(arguments):
        osex.set_process_lowest_prio()
        from mainscripts import FacesetResizer
        FacesetResizer.process_folder ( Path(arguments.input_dir) )
    p.set_defaults(func=process_faceset_resizer)

    def process_dev_test(arguments):
        osex.set_process_lowest_prio()
        from mainscripts import dev_misc


@ -97,9 +97,6 @@ class ExtractSubprocessor(Subprocessor):
            h, w, c = image.shape

            if 'rects' in self.type or self.type == 'all':
                data = ExtractSubprocessor.Cli.rects_stage (data=data,
                                                            image=image,
@ -110,7 +107,6 @@ class ExtractSubprocessor(Subprocessor):
            if 'landmarks' in self.type or self.type == 'all':
                data = ExtractSubprocessor.Cli.landmarks_stage (data=data,
                                                                image=image,
                                                                landmarks_extractor=self.landmarks_extractor,
                                                                rects_extractor=self.rects_extractor,
                                                                )
@ -121,7 +117,6 @@ class ExtractSubprocessor(Subprocessor):
                                                            face_type=self.face_type,
                                                            image_size=self.image_size,
                                                            jpeg_quality=self.jpeg_quality,
                                                            output_debug_path=self.output_debug_path,
                                                            final_output_path=self.final_output_path,
                                                            )
@ -161,7 +156,6 @@ class ExtractSubprocessor(Subprocessor):
        @staticmethod
        def landmarks_stage(data,
                            image,
                            landmarks_extractor,
                            rects_extractor,
                            ):
@ -176,7 +170,7 @@ class ExtractSubprocessor(Subprocessor):
            elif data.rects_rotation == 270:
                rotated_image = image.swapaxes( 0,1 )[::-1,:,:]

            data.landmarks = landmarks_extractor.extract (rotated_image, data.rects, rects_extractor if (data.landmarks_accurate) else None, is_bgr=True)
            if data.rects_rotation != 0:
                for i, (rect, lmrks) in enumerate(zip(data.rects, data.landmarks)):
                    new_rect, new_lmrks = rect, lmrks
@ -207,7 +201,6 @@ class ExtractSubprocessor(Subprocessor):
                    face_type,
                    image_size,
                    jpeg_quality,
                    output_debug_path=None,
                    final_output_path=None,
                    ):
@ -219,72 +212,53 @@ class ExtractSubprocessor(Subprocessor):
            if output_debug_path is not None:
                debug_image = image.copy()

            face_idx = 0
            for rect, image_landmarks in zip( rects, landmarks ):
                if image_landmarks is None:
                    continue

                rect = np.array(rect)

                if face_type == FaceType.MARK_ONLY:
                    image_to_face_mat = None
                    face_image = image
                    face_image_landmarks = image_landmarks
                else:
                    image_to_face_mat = LandmarksProcessor.get_transform_mat (image_landmarks, image_size, face_type)

                    face_image = cv2.warpAffine(image, image_to_face_mat, (image_size, image_size), cv2.INTER_LANCZOS4)
                    face_image_landmarks = LandmarksProcessor.transform_points (image_landmarks, image_to_face_mat)

                    landmarks_bbox = LandmarksProcessor.transform_points ( [ (0,0), (0,image_size-1), (image_size-1, image_size-1), (image_size-1,0) ], image_to_face_mat, True)

                    rect_area = mathlib.polygon_area(np.array(rect[[0,2,2,0]]).astype(np.float32), np.array(rect[[1,1,3,3]]).astype(np.float32))
                    landmarks_area = mathlib.polygon_area(landmarks_bbox[:,0].astype(np.float32), landmarks_bbox[:,1].astype(np.float32) )

                    if not data.manual and face_type <= FaceType.FULL_NO_ALIGN and landmarks_area > 4*rect_area: #get rid of faces which umeyama-landmark-area > 4*detector-rect-area
                        continue

                if output_debug_path is not None:
                    LandmarksProcessor.draw_rect_landmarks (debug_image, rect, image_landmarks, face_type, image_size, transparent_mask=True)

                output_path = final_output_path
                if data.force_output_path is not None:
                    output_path = data.force_output_path

                output_filepath = output_path / f"{filepath.stem}_{face_idx}.jpg"
                cv2_imwrite(output_filepath, face_image, [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_quality ] )

                dflimg = DFLJPG.load(output_filepath)
                dflimg.set_face_type(FaceType.toString(face_type))
                dflimg.set_landmarks(face_image_landmarks.tolist())
                dflimg.set_source_filename(filepath.name)
                dflimg.set_source_rect(rect)
                dflimg.set_source_landmarks(image_landmarks.tolist())
                dflimg.set_image_to_face_mat(image_to_face_mat)
                dflimg.save()

                data.final_output_files.append (output_filepath)
                face_idx += 1

            data.faces_detected = face_idx

            if output_debug_path is not None:
                cv2_imwrite( output_debug_path / (filepath.stem+'.jpg'), debug_image, [int(cv2.IMWRITE_JPEG_QUALITY), 50] )


@ -0,0 +1,209 @@
import multiprocessing
import shutil
import traceback  # needed by Cli.process_data's error handler below
from pathlib import Path  # needed by process_folder below

import cv2
import numpy as np

from core import pathex
from core.cv2ex import *
from core.interact import interact as io
from core.joblib import Subprocessor
from DFLIMG import *
from facelib import FaceType, LandmarksProcessor
class FacesetResizerSubprocessor(Subprocessor):

    #override
    def __init__(self, image_paths, output_dirpath, image_size, face_type=None):
        self.image_paths = image_paths
        self.output_dirpath = output_dirpath
        self.image_size = image_size
        self.face_type = face_type
        self.result = []

        super().__init__('FacesetResizer', FacesetResizerSubprocessor.Cli, 600)

    #override
    def on_clients_initialized(self):
        io.progress_bar (None, len (self.image_paths))

    #override
    def on_clients_finalized(self):
        io.progress_bar_close()

    #override
    def process_info_generator(self):
        base_dict = {'output_dirpath':self.output_dirpath, 'image_size':self.image_size, 'face_type':self.face_type}

        for device_idx in range( min(8, multiprocessing.cpu_count()) ):
            client_dict = base_dict.copy()
            device_name = f'CPU #{device_idx}'
            client_dict['device_name'] = device_name
            yield device_name, {}, client_dict

    #override
    def get_data(self, host_dict):
        if len (self.image_paths) > 0:
            return self.image_paths.pop(0)

    #override
    def on_data_return (self, host_dict, data):
        self.image_paths.insert(0, data)

    #override
    def on_result (self, host_dict, data, result):
        io.progress_bar_inc(1)
        if result[0] == 1:
            self.result +=[ (result[1], result[2]) ]

    #override
    def get_result(self):
        return self.result

    class Cli(Subprocessor.Cli):

        #override
        def on_initialize(self, client_dict):
            self.output_dirpath = client_dict['output_dirpath']
            self.image_size = client_dict['image_size']
            self.face_type = client_dict['face_type']
            self.log_info (f"Running on { client_dict['device_name'] }")

        #override
        def process_data(self, filepath):
            try:
                dflimg = DFLIMG.load (filepath)
                if dflimg is None or not dflimg.has_data():
                    self.log_err (f"{filepath.name} is not a dfl image file")
                else:
                    img = cv2_imread(filepath)
                    h,w = img.shape[:2]
                    if h != w:
                        raise Exception(f'w != h in {filepath}')

                    image_size = self.image_size
                    face_type = self.face_type
                    output_filepath = self.output_dirpath / filepath.name

                    if face_type is not None:
                        lmrks = dflimg.get_landmarks()
                        mat = LandmarksProcessor.get_transform_mat(lmrks, image_size, face_type)

                        img = cv2.warpAffine(img, mat, (image_size, image_size), flags=cv2.INTER_LANCZOS4 )
                        img = np.clip(img, 0, 255).astype(np.uint8)

                        cv2_imwrite ( str(output_filepath), img, [int(cv2.IMWRITE_JPEG_QUALITY), 100] )

                        dfl_dict = dflimg.get_dict()
                        dflimg = DFLIMG.load (output_filepath)
                        dflimg.set_dict(dfl_dict)

                        xseg_mask = dflimg.get_xseg_mask()
                        if xseg_mask is not None:
                            xseg_res = 256

                            xseg_lmrks = lmrks.copy()
                            xseg_lmrks *= (xseg_res / w)
                            xseg_mat = LandmarksProcessor.get_transform_mat(xseg_lmrks, xseg_res, face_type)

                            xseg_mask = cv2.warpAffine(xseg_mask, xseg_mat, (xseg_res, xseg_res), flags=cv2.INTER_LANCZOS4 )
                            xseg_mask[xseg_mask < 0.5] = 0
                            xseg_mask[xseg_mask >= 0.5] = 1

                            dflimg.set_xseg_mask(xseg_mask)

                        seg_ie_polys = dflimg.get_seg_ie_polys()

                        for poly in seg_ie_polys.get_polys():
                            poly_pts = poly.get_pts()
                            poly_pts = LandmarksProcessor.transform_points(poly_pts, mat)
                            poly.set_points(poly_pts)

                        dflimg.set_seg_ie_polys(seg_ie_polys)

                        lmrks = LandmarksProcessor.transform_points(lmrks, mat)
                        dflimg.set_landmarks(lmrks)

                        image_to_face_mat = dflimg.get_image_to_face_mat()
                        if image_to_face_mat is not None:
                            image_to_face_mat = LandmarksProcessor.get_transform_mat ( dflimg.get_source_landmarks(), image_size, face_type )
                            dflimg.set_image_to_face_mat(image_to_face_mat)
                        dflimg.set_face_type( FaceType.toString(face_type) )
                        dflimg.save()
                    else:
                        dfl_dict = dflimg.get_dict()

                        scale = w / image_size

                        img = cv2.resize(img, (image_size, image_size), interpolation=cv2.INTER_LANCZOS4)

                        cv2_imwrite ( str(output_filepath), img, [int(cv2.IMWRITE_JPEG_QUALITY), 100] )

                        dflimg = DFLIMG.load (output_filepath)
                        dflimg.set_dict(dfl_dict)

                        lmrks = dflimg.get_landmarks()
                        lmrks /= scale
                        dflimg.set_landmarks(lmrks)

                        seg_ie_polys = dflimg.get_seg_ie_polys()
                        seg_ie_polys.mult_points( 1.0 / scale)
                        dflimg.set_seg_ie_polys(seg_ie_polys)

                        image_to_face_mat = dflimg.get_image_to_face_mat()

                        if image_to_face_mat is not None:
                            face_type = FaceType.fromString ( dflimg.get_face_type() )
                            image_to_face_mat = LandmarksProcessor.get_transform_mat ( dflimg.get_source_landmarks(), image_size, face_type )
                            dflimg.set_image_to_face_mat(image_to_face_mat)
                        dflimg.save()

                return (1, filepath, output_filepath)
            except:
                self.log_err (f"Exception occurred while processing file {filepath}. Error: {traceback.format_exc()}")

            return (0, filepath, None)
def process_folder ( dirpath):
    image_size = io.input_int(f"New image size", 512, valid_range=[256,2048])

    face_type = io.input_str ("Change face type", 'same', ['h','mf','f','wf','head','same']).lower()
    if face_type == 'same':
        face_type = None
    else:
        face_type = {'h'    : FaceType.HALF,
                     'mf'   : FaceType.MID_FULL,
                     'f'    : FaceType.FULL,
                     'wf'   : FaceType.WHOLE_FACE,
                     'head' : FaceType.HEAD}[face_type]

    output_dirpath = dirpath.parent / (dirpath.name + '_resized')
    output_dirpath.mkdir (exist_ok=True, parents=True)

    dirpath_parts = '/'.join( dirpath.parts[-2:])
    output_dirpath_parts = '/'.join( output_dirpath.parts[-2:] )
    io.log_info (f"Resizing faceset in {dirpath_parts}")
    io.log_info ( f"Processing to {output_dirpath_parts}")

    output_images_paths = pathex.get_image_paths(output_dirpath)
    if len(output_images_paths) > 0:
        for filename in output_images_paths:
            Path(filename).unlink()

    image_paths = [Path(x) for x in pathex.get_image_paths( dirpath )]
    result = FacesetResizerSubprocessor ( image_paths, output_dirpath, image_size, face_type).run()

    is_merge = io.input_bool (f"\r\nMerge {output_dirpath_parts} to {dirpath_parts} ?", True)
    if is_merge:
        io.log_info (f"Copying processed files to {dirpath_parts}")

        for (filepath, output_filepath) in result:
            try:
                shutil.copy (output_filepath, filepath)
            except:
                pass

        io.log_info (f"Removing {output_dirpath_parts}")
        shutil.rmtree(output_dirpath)


@ -1,8 +1,11 @@
import os
import sys
import traceback
import queue
import threading
import time
from enum import Enum

import numpy as np
import itertools
from pathlib import Path
@ -48,6 +51,7 @@ def log_tensorboard_model_previews(iter, model, train_summary_writer):
    log_tensorboard_previews(iter, model.get_static_previews(), 'static_preview', train_summary_writer)

def trainerThread (s2c, c2s, e,
                   socketio=None,
                   model_class_name = None,
                   saved_models_path = None,
                   training_data_src_path = None,
@ -63,6 +67,7 @@ def trainerThread (s2c, c2s, e,
                   debug=False,
                   tensorboard_dir=None,
                   start_tensorboard=False,
                   dump_ckpt=False,
                   **kwargs):
    while True:
        try:
@ -80,8 +85,11 @@ def trainerThread (s2c, c2s, e,
            if not saved_models_path.exists():
                saved_models_path.mkdir(exist_ok=True, parents=True)

            if dump_ckpt:
                cpu_only = True

            model = models.import_model(model_class_name)(
                        is_training=not dump_ckpt,
                        saved_models_path=saved_models_path,
                        training_data_src_path=training_data_src_path,
                        training_data_dst_path=training_data_dst_path,
@ -92,8 +100,12 @@ def trainerThread (s2c, c2s, e,
                        force_gpu_idxs=force_gpu_idxs,
                        cpu_only=cpu_only,
                        silent_start=silent_start,
                        debug=debug)

            if dump_ckpt:
                e.set()
                model.dump_ckpt()
                break

            is_reached_goal = model.is_reached_iter_goal()
@ -107,11 +119,13 @@ def trainerThread (s2c, c2s, e,
            })

            shared_state = {'after_save': False}
            loss_string = ""
            save_iter = model.get_iter()

            def model_save():
                if not debug and not is_reached_goal:
                    io.log_info("Saving....", end='\r')
                    model.save()
                    shared_state['after_save'] = True
@ -119,58 +133,41 @@ def trainerThread (s2c, c2s, e,
                if not debug and not is_reached_goal:
                    model.create_backup()
            def log_step(step, step_time, src_loss, dst_loss):
                c2s.put({
                    'op': 'tb',
                    'action': 'step',
                    'step': step,
                    'step_time': step_time,
                    'src_loss': src_loss,
                    'dst_loss': dst_loss
                })

            def log_previews(step, previews, static_previews):
                c2s.put({
                    'op': 'tb',
                    'action': 'preview',
                    'step': step,
                    'previews': previews,
                    'static_previews': static_previews
                })
            def send_preview():
                if not debug:
                    previews = model.get_previews()
                    c2s.put({'op': 'show', 'previews': previews, 'iter': model.get_iter(),
                             'loss_history': model.get_loss_history().copy()})
                else:
                    previews = [('debug, press update for new', model.debug_one_iter())]
                    c2s.put({'op': 'show', 'previews': previews})
                e.set()  # Set the GUI Thread as Ready
            if model.get_target_iter() != 0:
                if is_reached_goal:
                    io.log_info('Model already trained to target iteration. You can use preview.')
                else:
                    io.log_info('Starting. Target iteration: %d. Press "Enter" to stop training and save model.' % (
                        model.get_target_iter()))
            else:
                io.log_info('Starting. Press "Enter" to stop training and save model.')

            last_save_time = time.time()
            last_preview_time = time.time()

            execute_programs = [[x[0], x[1], time.time()] for x in execute_programs]
            for i in itertools.count(0, 1):
                if not debug:
                    cur_time = time.time()

                    for x in execute_programs:
                        prog_time, prog, last_time = x
                        exec_prog = False
                        if 0 < prog_time <= (cur_time - start_time):
                            x[0] = 0
                            exec_prog = True
                        elif prog_time < 0 and (cur_time - last_time) >= -prog_time:
                            x[2] = cur_time
                            exec_prog = True
@ -178,18 +175,20 @@ def trainerThread (s2c, c2s, e,
                        try:
                            exec(prog)
                        except Exception as e:
                            print("Unable to execute program: %s" % prog)

                if not is_reached_goal:
                    if model.get_iter() == 0:
                        io.log_info("")
                        io.log_info(
                            "Trying to do the first iteration. If an error occurs, reduce the model parameters.")
                        io.log_info("")

                        if sys.platform[0:3] == 'win':
                            io.log_info("!!!")
                            io.log_info(
                                "Windows 10 users IMPORTANT notice. You should set this setting in order to work correctly.")
                            io.log_info("https://i.imgur.com/B7cmDCB.jpg")
                            io.log_info("!!!")
@ -198,19 +197,19 @@ def trainerThread (s2c, c2s, e,
                        loss_history = model.get_loss_history()
                        time_str = time.strftime("[%H:%M:%S]")
                        if iter_time >= 10:
                            loss_string = "{0}[#{1:06d}][{2:.5s}s]".format(time_str, iter, '{:0.4f}'.format(iter_time))
                        else:
                            loss_string = "{0}[#{1:06d}][{2:04d}ms]".format(time_str, iter, int(iter_time * 1000))

                        if shared_state['after_save']:
                            shared_state['after_save'] = False

                            mean_loss = np.mean(loss_history[save_iter:iter], axis=0)

                            for loss_value in mean_loss:
                                loss_string += "[%.4f]" % (loss_value)

                            io.log_info(loss_string)

                            save_iter = iter
                        else:
@ -218,9 +217,12 @@ def trainerThread (s2c, c2s, e,
                                loss_string += "[%.4f]" % (loss_value)

                        if io.is_colab():
                            io.log_info('\r' + loss_string, end='')
                        else:
                            io.log_info(loss_string, end='\r')

                        if socketio is not None:
                            socketio.emit('loss', loss_string)

                        loss_entry = loss_history[-1]
                        log_step(iter, iter_time, loss_entry[0], loss_entry[1] if len(loss_entry) > 1 else None)
@ -229,10 +231,10 @@ def trainerThread (s2c, c2s, e,
                            model_save()

                    if model.get_target_iter() != 0 and model.is_reached_iter_goal():
                        io.log_info('Reached target iteration.')
                        model_save()
                        is_reached_goal = True
                        io.log_info('You can use preview now.')

                if not is_reached_goal and (time.time() - last_preview_time) >= tensorboard_preview_interval_min*60:
                    last_preview_time += tensorboard_preview_interval_min*60
@ -245,7 +247,7 @@ def trainerThread (s2c, c2s, e,
                        model_save()
                    send_preview()

                if i == 0:
                    if is_reached_goal:
                        model.pass_one_iter()
                    send_preview()
@ -254,8 +256,8 @@ def trainerThread (s2c, c2s, e,
                    time.sleep(0.005)

                while not s2c.empty():
                    item = s2c.get()
                    op = item['op']
                    if op == 'save':
                        model_save()
                    elif op == 'backup':
@ -272,12 +274,10 @@ def trainerThread (s2c, c2s, e,
                if i == -1:
                    break

            model.finalize()

        except Exception as e:
            print('Error: %s' % (str(e)))
            traceback.print_exc()

        break

    c2s.put ( {'op':'close'} )
@ -330,20 +330,209 @@ def handle_tensorboard_op(input):
        log_tensorboard_previews(step, previews, 'preview', train_summary_writer)
        if static_previews is not None:
            log_tensorboard_previews(step, static_previews, 'static_preview', train_summary_writer)
c2s.put({'op': 'close'})
class Zoom(Enum):
    ZOOM_25 = (1 / 4, '25%')
    ZOOM_33 = (1 / 3, '33%')
    ZOOM_50 = (1 / 2, '50%')
    ZOOM_67 = (2 / 3, '67%')
    ZOOM_75 = (3 / 4, '75%')
    ZOOM_80 = (4 / 5, '80%')
    ZOOM_90 = (9 / 10, '90%')
    ZOOM_100 = (1, '100%')
    ZOOM_110 = (11 / 10, '110%')
    ZOOM_125 = (5 / 4, '125%')
    ZOOM_150 = (3 / 2, '150%')
    ZOOM_175 = (7 / 4, '175%')
    ZOOM_200 = (2, '200%')
    ZOOM_250 = (5 / 2, '250%')
    ZOOM_300 = (3, '300%')
    ZOOM_400 = (4, '400%')
    ZOOM_500 = (5, '500%')

    def __init__(self, scale, label):
        self.scale = scale
        self.label = label

    def prev(self):
        cls = self.__class__
        members = list(cls)
        index = members.index(self) - 1
        if index < 0:
            return self
        return members[index]

    def next(self):
        cls = self.__class__
        members = list(cls)
        index = members.index(self) + 1
        if index >= len(members):
            return self
        return members[index]


def scale_previews(previews, zoom=Zoom.ZOOM_100):
    scaled = []
    for preview in previews:
        preview_name, preview_rgb = preview
        scale_factor = zoom.scale
        if scale_factor < 1:
            scaled.append((preview_name, cv2.resize(preview_rgb, (0, 0),
                                                    fx=scale_factor,
                                                    fy=scale_factor,
                                                    interpolation=cv2.INTER_AREA)))
        elif scale_factor > 1:
            scaled.append((preview_name, cv2.resize(preview_rgb, (0, 0),
                                                    fx=scale_factor,
                                                    fy=scale_factor,
                                                    interpolation=cv2.INTER_LANCZOS4)))
        else:
            scaled.append((preview_name, preview_rgb))
    return scaled


def create_preview_pane_image(previews, selected_preview, loss_history,
                              show_last_history_iters_count, iteration, batch_size, zoom=Zoom.ZOOM_100):
    scaled_previews = scale_previews(previews, zoom)
    selected_preview_name = scaled_previews[selected_preview][0]
    selected_preview_rgb = scaled_previews[selected_preview][1]
    h, w, c = selected_preview_rgb.shape

    # HEAD
    head_lines = [
        '[s]:save [enter]:exit [-/+]:zoom: %s' % zoom.label,
        '[p]:update [space]:next preview [l]:change history range',
        'Preview: "%s" [%d/%d]' % (selected_preview_name, selected_preview + 1, len(previews))
    ]
    head_line_height = int(15 * zoom.scale)
    head_height = len(head_lines) * head_line_height
    head = np.ones((head_height, w, c)) * 0.1

    for i in range(0, len(head_lines)):
        t = i * head_line_height
        b = (i + 1) * head_line_height
        head[t:b, 0:w] += imagelib.get_text_image((head_line_height, w, c), head_lines[i], color=[0.8] * c)

    final = head

    if loss_history is not None:
        if show_last_history_iters_count == 0:
            loss_history_to_show = loss_history
        else:
            loss_history_to_show = loss_history[-show_last_history_iters_count:]
        lh_height = int(100 * zoom.scale)
        lh_img = models.ModelBase.get_loss_history_preview(loss_history_to_show, iteration, w, c, lh_height)
        final = np.concatenate([final, lh_img], axis=0)

    final = np.concatenate([final, selected_preview_rgb], axis=0)
    final = np.clip(final, 0, 1)
    return (final * 255).astype(np.uint8)
def main(**kwargs):
    io.log_info("Running trainer.\r\n")

    no_preview = kwargs.get('no_preview', False)
    flask_preview = kwargs.get('flask_preview', False)

    s2c = queue.Queue()
    c2s = queue.Queue()

    e = threading.Event()

    previews = None
    loss_history = None
    selected_preview = 0
    update_preview = False
    is_waiting_preview = False
    show_last_history_iters_count = 0
    iteration = 0
    batch_size = 1
    zoom = Zoom.ZOOM_100
    if flask_preview:
        from flaskr.app import create_flask_app
        s2flask = queue.Queue()
        socketio, flask_app = create_flask_app(s2c, c2s, s2flask, kwargs)

        thread = threading.Thread(target=trainerThread, args=(s2c, c2s, e, socketio), kwargs=kwargs)
        thread.start()

        e.wait()  # Wait for initial load to occur.

        flask_t = threading.Thread(target=socketio.run, args=(flask_app,),
                                   kwargs={'debug': True, 'use_reloader': False})
        flask_t.start()

        while True:
            if not c2s.empty():
                item = c2s.get()
                op = item['op']
                if op == 'show':
                    is_waiting_preview = False
                    loss_history = item['loss_history'] if 'loss_history' in item.keys() else None
                    previews = item['previews'] if 'previews' in item.keys() else None
                    iteration = item['iter'] if 'iter' in item.keys() else 0
                    # batch_size = item['batch_size'] if 'iter' in item.keys() else 1
                    if previews is not None:
                        update_preview = True
                elif op == 'update':
                    if not is_waiting_preview:
                        is_waiting_preview = True
                        s2c.put({'op': 'preview'})
                elif op == 'next_preview':
                    selected_preview = (selected_preview + 1) % len(previews)
                    update_preview = True
                elif op == 'change_history_range':
                    if show_last_history_iters_count == 0:
                        show_last_history_iters_count = 5000
                    elif show_last_history_iters_count == 5000:
                        show_last_history_iters_count = 10000
                    elif show_last_history_iters_count == 10000:
                        show_last_history_iters_count = 50000
                    elif show_last_history_iters_count == 50000:
                        show_last_history_iters_count = 100000
                    elif show_last_history_iters_count == 100000:
                        show_last_history_iters_count = 0
                    update_preview = True
                elif op == 'close':
                    s2c.put({'op': 'close'})
                    break
                elif op == 'zoom_prev':
                    zoom = zoom.prev()
                    update_preview = True
                elif op == 'zoom_next':
                    zoom = zoom.next()
                    update_preview = True

            if update_preview:
                update_preview = False
                selected_preview = selected_preview % len(previews)
                preview_pane_image = create_preview_pane_image(previews,
                                                               selected_preview,
                                                               loss_history,
                                                               show_last_history_iters_count,
                                                               iteration,
                                                               batch_size,
                                                               zoom)
                # io.show_image(wnd_name, preview_pane_image)
                model_path = Path(kwargs.get('saved_models_path', ''))
                filename = 'preview.png'
                preview_file = str(model_path / filename)
                cv2.imwrite(preview_file, preview_pane_image)
                s2flask.put({'op': 'show'})
                socketio.emit('preview', {'iter': iteration, 'loss': loss_history[-1]})
            try:
                io.process_messages(0.01)
            except KeyboardInterrupt:
                s2c.put({'op': 'close'})
else:
thread = threading.Thread(target=trainerThread, args=(s2c, c2s, e), kwargs=kwargs)
thread.start()
e.wait() # Wait for inital load to occur.
if no_preview: if no_preview:
while True: while True:
@ -357,7 +546,7 @@ def main(**kwargs):
                try:
                    io.process_messages(0.1)
                except KeyboardInterrupt:
                    s2c.put({'op': 'close'})
        else:
            wnd_name = "Training preview"
            io.named_window(wnd_name)
@ -373,33 +562,33 @@ def main(**kwargs):
            iter = 0
            while True:
                if not c2s.empty():
                    item = c2s.get()
                    op = item['op']
                    if op == 'show':
                        is_waiting_preview = False
                        loss_history = item['loss_history'] if 'loss_history' in item.keys() else None
                        previews = item['previews'] if 'previews' in item.keys() else None
                        iter = item['iter'] if 'iter' in item.keys() else 0
                        if previews is not None:
                            max_w = 0
                            max_h = 0
                            for (preview_name, preview_rgb) in previews:
                                (h, w, c) = preview_rgb.shape
                                max_h = max(max_h, h)
                                max_w = max(max_w, w)

                            max_size = 800
                            if max_h > max_size:
                                max_w = int(max_w / (max_h / max_size))
                                max_h = max_size

                            # make all previews size equal
                            for preview in previews[:]:
                                (preview_name, preview_rgb) = preview
                                (h, w, c) = preview_rgb.shape
                                if h != max_h or w != max_w:
                                    previews.remove(preview)
                                    previews.append((preview_name, cv2.resize(preview_rgb, (max_w, max_h))))

                            selected_preview = selected_preview % len(previews)
                            update_preview = True
                    elif op == 'tb':
@ -412,22 +601,22 @@ def main(**kwargs):
                    selected_preview_name = previews[selected_preview][0]
                    selected_preview_rgb = previews[selected_preview][1]
                    (h, w, c) = selected_preview_rgb.shape

                    # HEAD
                    head_lines = [
                        '[s]:save [b]:backup [enter]:exit',
                        '[p]:update [space]:next preview [l]:change history range',
                        'Preview: "%s" [%d/%d]' % (selected_preview_name, selected_preview + 1, len(previews))
                    ]
                    head_line_height = 15
                    head_height = len(head_lines) * head_line_height
                    head = np.ones((head_height, w, c)) * 0.1

                    for i in range(0, len(head_lines)):
                        t = i * head_line_height
                        b = (i + 1) * head_line_height
                        head[t:b, 0:w] += imagelib.get_text_image((head_line_height, w, c), head_lines[i], color=[0.8] * c)

                    final = head
@ -438,27 +627,28 @@ def main(**kwargs):
                            loss_history_to_show = loss_history[-show_last_history_iters_count:]

                        lh_img = models.ModelBase.get_loss_history_preview(loss_history_to_show, iter, w, c)
                        final = np.concatenate([final, lh_img], axis=0)

                    final = np.concatenate([final, selected_preview_rgb], axis=0)
                    final = np.clip(final, 0, 1)

                    io.show_image(wnd_name, (final * 255).astype(np.uint8))
                    is_showing = True

                key_events = io.get_key_events(wnd_name)
                key, chr_key, ctrl_pressed, alt_pressed, shift_pressed = key_events[-1] if len(key_events) > 0 else (
                    0, 0, False, False, False)

                if key == ord('\n') or key == ord('\r'):
                    s2c.put({'op': 'close'})
                elif key == ord('s'):
                    s2c.put({'op': 'save'})
                elif key == ord('b'):
                    s2c.put({'op': 'backup'})
                elif key == ord('p'):
                    if not is_waiting_preview:
                        is_waiting_preview = True
                        s2c.put({'op': 'preview'})
                elif key == ord('l'):
                    if show_last_history_iters_count == 0:
                        show_last_history_iters_count = 5000
@ -478,6 +668,6 @@ def main(**kwargs):
                try:
                    io.process_messages(0.1)
                except KeyboardInterrupt:
                    s2c.put({'op': 'close'})

        io.destroy_all_windows()
View file
@@ -10,8 +10,8 @@ from core.cv2ex import *
from core.interact import interact as io
from core.leras import nn
from DFLIMG import *
from facelib import XSegNet, LandmarksProcessor, FaceType
import pickle
def apply_xseg(input_path, model_path):
if not input_path.exists():
@@ -20,17 +20,42 @@ def apply_xseg(input_path, model_path):
if not model_path.exists():
raise ValueError(f'{model_path} not found. Please ensure it exists.')
face_type = None
model_dat = model_path / 'XSeg_data.dat'
if model_dat.exists():
dat = pickle.loads( model_dat.read_bytes() )
dat_options = dat.get('options', None)
if dat_options is not None:
face_type = dat_options.get('face_type', None)
if face_type is None:
face_type = io.input_str ("XSeg model face type", 'same', ['h','mf','f','wf','head','same'], help_message="Specify face type of trained XSeg model. For example if XSeg model trained as WF, but faceset is HEAD, specify WF to apply xseg only on WF part of HEAD. Default is 'same'").lower()
if face_type == 'same':
face_type = None
if face_type is not None:
face_type = {'h' : FaceType.HALF,
'mf' : FaceType.MID_FULL,
'f' : FaceType.FULL,
'wf' : FaceType.WHOLE_FACE,
'head' : FaceType.HEAD}[face_type]
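# Note: when the XSeg model was trained on a different face type than the
# faceset, the code below re-projects each image into the model's crop via the
# source landmarks before extraction, then warps the predicted mask back.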
io.log_info(f'Applying trained XSeg model to {input_path.name}/ folder.')
device_config = nn.DeviceConfig.ask_choose_device(choose_only_one=True)
nn.initialize(device_config)
xseg = XSegNet(name='XSeg',
load_weights=True,
weights_file_root=model_path,
data_format=nn.data_format,
raise_on_no_model_files=True)
xseg_res = xseg.get_resolution()
images_paths = pathex.get_image_paths(input_path, return_Path_class=True)
@@ -42,15 +67,36 @@ def apply_xseg(input_path, model_path):
img = cv2_imread(filepath).astype(np.float32) / 255.0
h,w,c = img.shape
img_face_type = FaceType.fromString( dflimg.get_face_type() )
if face_type is not None and img_face_type != face_type:
lmrks = dflimg.get_source_landmarks()
fmat = LandmarksProcessor.get_transform_mat(lmrks, w, face_type)
imat = LandmarksProcessor.get_transform_mat(lmrks, w, img_face_type)
g_p = LandmarksProcessor.transform_points (np.float32([(0,0),(w,0),(0,w) ]), fmat, True)
g_p2 = LandmarksProcessor.transform_points (g_p, imat)
mat = cv2.getAffineTransform( g_p2, np.float32([(0,0),(w,0),(0,w) ]) )
img = cv2.warpAffine(img, mat, (w, w), cv2.INTER_LANCZOS4)
img = cv2.resize(img, (xseg_res, xseg_res), interpolation=cv2.INTER_LANCZOS4)
else:
if w != xseg_res:
img = cv2.resize( img, (xseg_res,xseg_res), interpolation=cv2.INTER_LANCZOS4 )
if len(img.shape) == 2:
img = img[...,None]
mask = xseg.extract(img)
if face_type is not None and img_face_type != face_type:
mask = cv2.resize(mask, (w, w), interpolation=cv2.INTER_LANCZOS4)
mask = cv2.warpAffine( mask, mat, (w,w), np.zeros( (h,w,c), dtype=np.float), cv2.WARP_INVERSE_MAP | cv2.INTER_LANCZOS4)
mask = cv2.resize(mask, (xseg_res, xseg_res), interpolation=cv2.INTER_LANCZOS4)
mask[mask < 0.5]=0
mask[mask >= 0.5]=1
dflimg.set_xseg_mask(mask)
dflimg.save()
@@ -67,7 +113,8 @@ def fetch_xseg(input_path):
images_paths = pathex.get_image_paths(input_path, return_Path_class=True)
files_copied = []
for filepath in io.progress_bar_generator(images_paths, "Processing"):
dflimg = DFLIMG.load(filepath)
if dflimg is None or not dflimg.has_data():
@@ -77,10 +124,16 @@ def fetch_xseg(input_path):
ie_polys = dflimg.get_seg_ie_polys()
if ie_polys.has_polys():
files_copied.append(filepath)
shutil.copy ( str(filepath), str(output_path / filepath.name) )
io.log_info(f'Files copied: {len(files_copied)}')
is_delete = io.input_bool (f"\r\nDelete original files?", True)
if is_delete:
for filepath in files_copied:
Path(filepath).unlink()
def remove_xseg(input_path):
if not input_path.exists():
View file
@@ -57,7 +57,9 @@ def MergeMaskedFace (predictor_func, predictor_input_shape,
prd_face_mask_a_0 = cv2.resize (prd_face_mask_a_0, (output_size, output_size), interpolation=cv2.INTER_CUBIC)
prd_face_dst_mask_a_0 = cv2.resize (prd_face_dst_mask_a_0, (output_size, output_size), interpolation=cv2.INTER_CUBIC)
if cfg.mask_mode == 0: #full
wrk_face_mask_a_0 = np.ones_like(dst_face_mask_a_0)
elif cfg.mask_mode == 1: #dst
wrk_face_mask_a_0 = cv2.resize (dst_face_mask_a_0, (output_size,output_size), interpolation=cv2.INTER_CUBIC)
elif cfg.mask_mode == 2: #learned-prd
wrk_face_mask_a_0 = prd_face_mask_a_0
@@ -142,7 +144,9 @@ def MergeMaskedFace (predictor_func, predictor_input_shape,
elif 'raw' in cfg.mode:
if cfg.mode == 'raw-rgb':
out_img_face = cv2.warpAffine( prd_face_bgr, face_output_mat, img_size, np.empty_like(img_bgr), cv2.WARP_INVERSE_MAP | cv2.INTER_CUBIC)
out_img_face_mask = cv2.warpAffine( np.ones_like(prd_face_bgr), face_output_mat, img_size, np.empty_like(img_bgr), cv2.WARP_INVERSE_MAP | cv2.INTER_CUBIC)
out_img = img_bgr*(1-out_img_face_mask) + out_img_face*out_img_face_mask
out_merging_mask_a = img_face_mask_a
elif cfg.mode == 'raw-predict':
out_img = prd_face_bgr
View file
@@ -81,7 +81,8 @@ mode_dict = {0:'original',
mode_str_dict = { mode_dict[key] : key for key in mode_dict.keys() }
mask_mode_dict = {0:'full',
1:'dst',
2:'learned-prd',
3:'learned-dst',
4:'learned-prd*learned-dst',
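# The new 0:'full' mode skips masking entirely: MergeMaskedFace (above)
# substitutes an all-ones mask, so the whole merge area is composited.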
View file
@@ -8,6 +8,7 @@ import pickle
import shutil
import tempfile
import time
import datetime
from pathlib import Path
import cv2
@@ -180,11 +181,14 @@ class ModelBase(object):
if self.is_first_run():
# save as default options only for first run model initialize
self.default_options_path.write_bytes( pickle.dumps (self.options) )
self.session_name = self.options.get('session_name', "")
self.autobackup_hour = self.options.get('autobackup_hour', 0)
self.maximum_n_backups = self.options.get('maximum_n_backups', 24)
self.write_preview_history = self.options.get('write_preview_history', False)
self.target_iter = self.options.get('target_iter',0)
self.random_flip = self.options.get('random_flip',True)
self.random_src_flip = self.options.get('random_src_flip', False)
self.random_dst_flip = self.options.get('random_dst_flip', True)
self.on_initialize()
self.options['batch_size'] = self.batch_size
@@ -276,13 +280,21 @@ class ModelBase(object):
def ask_override(self):
return self.is_training and self.iter != 0 and io.input_in_time ("Press enter in 2 seconds to override model settings.", 5 if io.is_colab() else 2 )
def ask_session_name(self, default_value=""):
default_session_name = self.options['session_name'] = self.load_or_def_option('session_name', default_value)
self.options['session_name'] = io.input_str("Session name", default_session_name, help_message="String to refer back to in summary.txt and in autobackup foldername")
def ask_autobackup_hour(self, default_value=0):
default_autobackup_hour = self.options['autobackup_hour'] = self.load_or_def_option('autobackup_hour', default_value)
self.options['autobackup_hour'] = io.input_int(f"Autobackup every N hour", default_autobackup_hour, add_info="0..24", help_message="Autobackup model files with preview every N hour. Latest backup is the last folder when sorted by name ascending located in model/<>_autobackups")
def ask_maximum_n_backups(self, default_value=24):
default_maximum_n_backups = self.options['maximum_n_backups'] = self.load_or_def_option('maximum_n_backups', default_value)
self.options['maximum_n_backups'] = io.input_int(f"Maximum N backups", default_maximum_n_backups, help_message="Maximum amount of backups that are located in model/<>_autobackups. Inputting 0 here would allow it to autobackup as many times as it occurs.")
def ask_write_preview_history(self, default_value=False):
default_write_preview_history = self.load_or_def_option('write_preview_history', default_value)
self.options['write_preview_history'] = io.input_bool(f"Write preview history", default_write_preview_history, help_message="Preview history will be written to <ModelName>_history folder.")
if self.options['write_preview_history']:
if io.is_support_windows():
@@ -298,6 +310,14 @@ class ModelBase(object):
default_random_flip = self.load_or_def_option('random_flip', True)
self.options['random_flip'] = io.input_bool("Flip faces randomly", default_random_flip, help_message="Predicted face will look more naturally without this option, but src faceset should cover all face directions as dst faceset.")
def ask_random_src_flip(self):
default_random_src_flip = self.load_or_def_option('random_src_flip', False)
self.options['random_src_flip'] = io.input_bool("Flip SRC faces randomly", default_random_src_flip, help_message="Random horizontal flip SRC faceset. Covers more angles, but the face may look less naturally.")
def ask_random_dst_flip(self):
default_random_dst_flip = self.load_or_def_option('random_dst_flip', True)
self.options['random_dst_flip'] = io.input_bool("Flip DST faces randomly", default_random_dst_flip, help_message="Random horizontal flip DST faceset. Makes generalization of src->dst better, if src random flip is not enabled.")
def ask_batch_size(self, suggest_batch_size=None, range=None):
default_batch_size = self.load_or_def_option('batch_size', suggest_batch_size or self.batch_size)
@@ -405,33 +425,32 @@ class ModelBase(object):
bckp_filename_list = [ self.get_strpath_storage_for_file(filename) for _, filename in self.get_model_filename_list() ]
bckp_filename_list += [ str(self.get_summary_path()), str(self.model_data_path) ]
# Create new backup
session_suffix = f'_{self.session_name}' if self.session_name else ''
idx_str = datetime.datetime.now().strftime('%Y%m%dT%H%M%S') + session_suffix
idx_backup_path = self.autobackups_path / idx_str
idx_backup_path.mkdir()
for filename in bckp_filename_list:
shutil.copy(str(filename), str(idx_backup_path / Path(filename).name))
previews = self.get_previews()
# Generate previews and save in new backup
plist = []
for i in range(len(previews)):
name, bgr = previews[i]
plist += [ (bgr, idx_backup_path / ( ('preview_%s.jpg') % (name)) ) ]
if len(plist) != 0:
self.get_preview_history_writer().post(plist, self.loss_history, self.iter)
# Check if we've exceeded the max number of backups
if self.maximum_n_backups != 0:
all_backups = sorted([x for x in self.autobackups_path.iterdir() if x.is_dir()])
while len(all_backups) > self.maximum_n_backups:
oldest_backup = all_backups.pop(0)
pathex.delete_all_files(oldest_backup)
oldest_backup.rmdir()
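The timestamped folder names sort lexicographically in chronological order because every field of '%Y%m%dT%H%M%S' is zero-padded, which is exactly what the pruning loop above relies on. A minimal standalone sketch of that invariant (illustrative values, not part of the commit):

# String order == time order for zero-padded timestamps.
names = ['20210620T153012', '20210619T090000', '20210620T010203']
assert sorted(names)[0] == '20210619T090000'  # oldest first

# Keep at most N newest entries, mirroring the pruning loop above.
maximum_n_backups = 2
backups = sorted(names)
while len(backups) > maximum_n_backups:
    oldest = backups.pop(0)  # on disk: delete the folder's files, then rmdir
print(backups)  # ['20210620T010203', '20210620T153012']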
def debug_one_iter(self):
images = []
@@ -574,10 +593,9 @@ class ModelBase(object):
return summary_text
@staticmethod
def get_loss_history_preview(loss_history, iter, w, c, lh_height=100):
loss_history = np.array (loss_history.copy())
lh_img = np.ones ( (lh_height,w,c) ) * 0.1
if len(loss_history) != 0:
892 models/Model_AMP/Model.py Normal file
View file
@@ -0,0 +1,892 @@
import multiprocessing
import operator
from functools import partial
import numpy as np
from core import mathlib
from core.interact import interact as io
from core.leras import nn
from facelib import FaceType
from models import ModelBase
from samplelib import *
from core.cv2ex import *
class AMPModel(ModelBase):
#override
def on_initialize_options(self):
device_config = nn.getCurrentDeviceConfig()
lowest_vram = 2
if len(device_config.devices) != 0:
lowest_vram = device_config.devices.get_worst_device().total_mem_gb
if lowest_vram >= 4:
suggest_batch_size = 8
else:
suggest_batch_size = 4
yn_str = {True:'y',False:'n'}
min_res = 64
max_res = 640
default_resolution = self.options['resolution'] = self.load_or_def_option('resolution', 224)
default_face_type = self.options['face_type'] = self.load_or_def_option('face_type', 'wf')
default_models_opt_on_gpu = self.options['models_opt_on_gpu'] = self.load_or_def_option('models_opt_on_gpu', True)
default_ae_dims = self.options['ae_dims'] = self.load_or_def_option('ae_dims', 256)
default_e_dims = self.options['e_dims'] = self.load_or_def_option('e_dims', 64)
default_d_dims = self.options['d_dims'] = self.options.get('d_dims', None)
default_d_mask_dims = self.options['d_mask_dims'] = self.options.get('d_mask_dims', None)
default_morph_factor = self.options['morph_factor'] = self.options.get('morph_factor', 0.33)
default_masked_training = self.options['masked_training'] = self.load_or_def_option('masked_training', True)
default_eyes_mouth_prio = self.options['eyes_mouth_prio'] = self.load_or_def_option('eyes_mouth_prio', True)
default_uniform_yaw = self.options['uniform_yaw'] = self.load_or_def_option('uniform_yaw', False)
lr_dropout = self.load_or_def_option('lr_dropout', 'n')
lr_dropout = {True:'y', False:'n'}.get(lr_dropout, lr_dropout) #backward comp
default_lr_dropout = self.options['lr_dropout'] = lr_dropout
default_loss_function = self.options['loss_function'] = self.load_or_def_option('loss_function', 'SSIM')
default_random_warp = self.options['random_warp'] = self.load_or_def_option('random_warp', True)
default_random_downsample = self.options['random_downsample'] = self.load_or_def_option('random_downsample', False)
default_random_noise = self.options['random_noise'] = self.load_or_def_option('random_noise', False)
default_random_blur = self.options['random_blur'] = self.load_or_def_option('random_blur', False)
default_random_jpeg = self.options['random_jpeg'] = self.load_or_def_option('random_jpeg', False)
default_background_power = self.options['background_power'] = self.load_or_def_option('background_power', 0.0)
default_ct_mode = self.options['ct_mode'] = self.load_or_def_option('ct_mode', 'none')
default_random_color = self.options['random_color'] = self.load_or_def_option('random_color', False)
default_clipgrad = self.options['clipgrad'] = self.load_or_def_option('clipgrad', False)
default_pretrain = self.options['pretrain'] = self.load_or_def_option('pretrain', False)
ask_override = self.ask_override()
if self.is_first_run() or ask_override:
self.ask_autobackup_hour()
self.ask_write_preview_history()
self.ask_target_iter()
self.ask_random_src_flip()
self.ask_random_dst_flip()
self.ask_batch_size(suggest_batch_size)
if self.is_first_run():
resolution = io.input_int("Resolution", default_resolution, add_info="64-640", help_message="More resolution requires more VRAM and time to train. Value will be adjusted to multiple of 32 .")
resolution = np.clip ( (resolution // 32) * 32, min_res, max_res)
self.options['resolution'] = resolution
self.options['face_type'] = io.input_str ("Face type", default_face_type, ['wf','head'], help_message="whole face / head").lower()
default_d_dims = self.options['d_dims'] = self.load_or_def_option('d_dims', 64)
default_d_mask_dims = default_d_dims // 3
default_d_mask_dims += default_d_mask_dims % 2
default_d_mask_dims = self.options['d_mask_dims'] = self.load_or_def_option('d_mask_dims', default_d_mask_dims)
if self.is_first_run():
self.options['ae_dims'] = np.clip ( io.input_int("AutoEncoder dimensions", default_ae_dims, add_info="32-1024", help_message="All face information will packed to AE dims. If amount of AE dims are not enough, then for example closed eyes will not be recognized. More dims are better, but require more VRAM. You can fine-tune model size to fit your GPU." ), 32, 1024 )
e_dims = np.clip ( io.input_int("Encoder dimensions", default_e_dims, add_info="16-256", help_message="More dims help to recognize more facial features and achieve sharper result, but require more VRAM. You can fine-tune model size to fit your GPU." ), 16, 256 )
self.options['e_dims'] = e_dims + e_dims % 2
d_dims = np.clip ( io.input_int("Decoder dimensions", default_d_dims, add_info="16-256", help_message="More dims help to recognize more facial features and achieve sharper result, but require more VRAM. You can fine-tune model size to fit your GPU." ), 16, 256 )
self.options['d_dims'] = d_dims + d_dims % 2
d_mask_dims = np.clip ( io.input_int("Decoder mask dimensions", default_d_mask_dims, add_info="16-256", help_message="Typical mask dimensions = decoder dimensions / 3. If you manually cut out obstacles from the dst mask, you can increase this parameter to achieve better quality." ), 16, 256 )
self.options['d_mask_dims'] = d_mask_dims + d_mask_dims % 2
if self.is_first_run() or ask_override:
morph_factor = np.clip ( io.input_number ("Morph factor.", default_morph_factor, add_info="0.1 .. 0.5", help_message="The smaller the value, the more src-like facial expressions will appear. The larger the value, the less space there is to train a large dst faceset in the neural network. Typical fine value is 0.33"), 0.1, 0.5 )
self.options['morph_factor'] = morph_factor
if self.options['face_type'] == 'wf' or self.options['face_type'] == 'head':
self.options['masked_training'] = io.input_bool ("Masked training", default_masked_training, help_message="This option is available only for 'whole_face' or 'head' type. Masked training clips training area to full_face mask or XSeg mask, thus network will train the faces properly.")
self.options['eyes_mouth_prio'] = io.input_bool ("Eyes and mouth priority", default_eyes_mouth_prio, help_message='Helps to fix eye problems during training like "alien eyes" and wrong eyes direction. Also makes the detail of the teeth higher.')
self.options['uniform_yaw'] = io.input_bool ("Uniform yaw distribution of samples", default_uniform_yaw, help_message='Helps to fix blurry side faces due to small amount of them in the faceset.')
default_gan_power = self.options['gan_power'] = self.load_or_def_option('gan_power', 0.0)
default_gan_patch_size = self.options['gan_patch_size'] = self.load_or_def_option('gan_patch_size', self.options['resolution'] // 8)
default_gan_dims = self.options['gan_dims'] = self.load_or_def_option('gan_dims', 16)
if self.is_first_run() or ask_override:
self.options['models_opt_on_gpu'] = io.input_bool ("Place models and optimizer on GPU", default_models_opt_on_gpu, help_message="When you train on one GPU, by default model and optimizer weights are placed on GPU to accelerate the process. You can place they on CPU to free up extra VRAM, thus set bigger dimensions.")
self.options['lr_dropout'] = io.input_str (f"Use learning rate dropout", default_lr_dropout, ['n','y','cpu'], help_message="When the face is trained enough, you can enable this option to get extra sharpness and reduce subpixel shake for less amount of iterations. Enabled it before `disable random warp` and before GAN. \nn - disabled.\ny - enabled\ncpu - enabled on CPU. This allows not to use extra VRAM, sacrificing 20% time of iteration.")
self.options['loss_function'] = io.input_str(f"Loss function", default_loss_function, ['SSIM', 'MS-SSIM', 'MS-SSIM+L1'],
help_message="Change loss function used for image quality assessment.")
self.options['random_warp'] = io.input_bool ("Enable random warp of samples", default_random_warp, help_message="Random warp is required to generalize facial expressions of both faces. When the face is trained enough, you can disable it to get extra sharpness and reduce subpixel shake for less amount of iterations.")
self.options['random_downsample'] = io.input_bool("Enable random downsample of samples", default_random_downsample, help_message="")
self.options['random_noise'] = io.input_bool("Enable random noise added to samples", default_random_noise, help_message="")
self.options['random_blur'] = io.input_bool("Enable random blur of samples", default_random_blur, help_message="")
self.options['random_jpeg'] = io.input_bool("Enable random jpeg compression of samples", default_random_jpeg, help_message="")
self.options['gan_power'] = np.clip ( io.input_number ("GAN power", default_gan_power, add_info="0.0 .. 1.0", help_message="Forces the neural network to learn small details of the face. Enable it only when the face is trained enough with lr_dropout(on) and random_warp(off), and don't disable. The higher the value, the higher the chances of artifacts. Typical fine value is 0.1"), 0.0, 1.0 )
if self.options['gan_power'] != 0.0:
gan_patch_size = np.clip ( io.input_int("GAN patch size", default_gan_patch_size, add_info="3-640", help_message="The higher patch size, the higher the quality, the more VRAM is required. You can get sharper edges even at the lowest setting. Typical fine value is resolution / 8." ), 3, 640 )
self.options['gan_patch_size'] = gan_patch_size
gan_dims = np.clip ( io.input_int("GAN dimensions", default_gan_dims, add_info="4-64", help_message="The dimensions of the GAN network. The higher dimensions, the more VRAM is required. You can get sharper edges even at the lowest setting. Typical fine value is 16." ), 4, 64 )
self.options['gan_dims'] = gan_dims
self.options['background_power'] = np.clip ( io.input_number("Background power", default_background_power, add_info="0.0..1.0", help_message="Learn the area outside of the mask. Helps smooth out area near the mask boundaries. Can be used at any time"), 0.0, 1.0 )
self.options['ct_mode'] = io.input_str (f"Color transfer for src faceset", default_ct_mode, ['none','rct','lct','mkl','idt','sot', 'fs-aug'], help_message="Change color distribution of src samples close to dst samples. Try all modes to find the best.")
self.options['random_color'] = io.input_bool ("Random color", default_random_color, help_message="Samples are randomly rotated around the L axis in LAB colorspace, helps generalize training")
self.options['clipgrad'] = io.input_bool ("Enable gradient clipping", default_clipgrad, help_message="Gradient clipping reduces chance of model collapse, sacrificing speed of training.")
self.options['pretrain'] = io.input_bool ("Enable pretraining mode", default_pretrain, help_message="Pretrain the model with large amount of various faces. After that, model can be used to train the fakes more quickly. Forces random_warp=N, random_flips=Y, gan_power=0.0, lr_dropout=N, uniform_yaw=Y")
self.gan_model_changed = (default_gan_patch_size != self.options['gan_patch_size']) or (default_gan_dims != self.options['gan_dims'])
self.pretrain_just_disabled = (default_pretrain == True and self.options['pretrain'] == False)
#override
def on_initialize(self):
device_config = nn.getCurrentDeviceConfig()
devices = device_config.devices
self.model_data_format = "NCHW"
nn.initialize(data_format=self.model_data_format)
tf = nn.tf
self.resolution = resolution = self.options['resolution']
lowest_dense_res = self.lowest_dense_res = resolution // 32
class Downscale(nn.ModelBase):
def __init__(self, in_ch, out_ch, kernel_size=5, *kwargs ):
self.in_ch = in_ch
self.out_ch = out_ch
self.kernel_size = kernel_size
super().__init__(*kwargs)
def on_build(self, *args, **kwargs ):
self.conv1 = nn.Conv2D( self.in_ch, self.out_ch, kernel_size=self.kernel_size, strides=2, padding='SAME')
def forward(self, x):
x = self.conv1(x)
x = tf.nn.leaky_relu(x, 0.1)
return x
def get_out_ch(self):
return self.out_ch
class Upscale(nn.ModelBase):
def on_build(self, in_ch, out_ch, kernel_size=3 ):
self.conv1 = nn.Conv2D( in_ch, out_ch*4, kernel_size=kernel_size, padding='SAME')
def forward(self, x):
x = self.conv1(x)
x = tf.nn.leaky_relu(x, 0.1)
x = nn.depth_to_space(x, 2)
return x
class ResidualBlock(nn.ModelBase):
def on_build(self, ch, kernel_size=3 ):
self.conv1 = nn.Conv2D( ch, ch, kernel_size=kernel_size, padding='SAME')
self.conv2 = nn.Conv2D( ch, ch, kernel_size=kernel_size, padding='SAME')
def forward(self, inp):
x = self.conv1(inp)
x = tf.nn.leaky_relu(x, 0.2)
x = self.conv2(x)
x = tf.nn.leaky_relu(inp+x, 0.2)
return x
class Encoder(nn.ModelBase):
def on_build(self, in_ch, e_ch, ae_ch):
self.down1 = Downscale(in_ch, e_ch, kernel_size=5)
self.res1 = ResidualBlock(e_ch)
self.down2 = Downscale(e_ch, e_ch*2, kernel_size=5)
self.down3 = Downscale(e_ch*2, e_ch*4, kernel_size=5)
self.down4 = Downscale(e_ch*4, e_ch*8, kernel_size=5)
self.down5 = Downscale(e_ch*8, e_ch*8, kernel_size=5)
self.res5 = ResidualBlock(e_ch*8)
self.dense1 = nn.Dense( lowest_dense_res*lowest_dense_res*e_ch*8, ae_ch )
def forward(self, inp):
x = inp
x = self.down1(x)
x = self.res1(x)
x = self.down2(x)
x = self.down3(x)
x = self.down4(x)
x = self.down5(x)
x = self.res5(x)
x = nn.flatten(x)
x = nn.pixel_norm(x, axes=-1)
x = self.dense1(x)
return x
class Inter(nn.ModelBase):
def __init__(self, ae_ch, ae_out_ch, **kwargs):
self.ae_ch, self.ae_out_ch = ae_ch, ae_out_ch
super().__init__(**kwargs)
def on_build(self):
ae_ch, ae_out_ch = self.ae_ch, self.ae_out_ch
self.dense2 = nn.Dense( ae_ch, lowest_dense_res * lowest_dense_res * ae_out_ch )
def forward(self, inp):
x = inp
x = self.dense2(x)
x = nn.reshape_4D (x, lowest_dense_res, lowest_dense_res, self.ae_out_ch)
return x
def get_out_ch(self):
return self.ae_out_ch
class Decoder(nn.ModelBase):
def on_build(self, in_ch, d_ch, d_mask_ch ):
self.upscale0 = Upscale(in_ch, d_ch*8, kernel_size=3)
self.upscale1 = Upscale(d_ch*8, d_ch*8, kernel_size=3)
self.upscale2 = Upscale(d_ch*8, d_ch*4, kernel_size=3)
self.upscale3 = Upscale(d_ch*4, d_ch*2, kernel_size=3)
self.res0 = ResidualBlock(d_ch*8, kernel_size=3)
self.res1 = ResidualBlock(d_ch*8, kernel_size=3)
self.res2 = ResidualBlock(d_ch*4, kernel_size=3)
self.res3 = ResidualBlock(d_ch*2, kernel_size=3)
self.upscalem0 = Upscale(in_ch, d_mask_ch*8, kernel_size=3)
self.upscalem1 = Upscale(d_mask_ch*8, d_mask_ch*8, kernel_size=3)
self.upscalem2 = Upscale(d_mask_ch*8, d_mask_ch*4, kernel_size=3)
self.upscalem3 = Upscale(d_mask_ch*4, d_mask_ch*2, kernel_size=3)
self.upscalem4 = Upscale(d_mask_ch*2, d_mask_ch*1, kernel_size=3)
self.out_convm = nn.Conv2D( d_mask_ch*1, 1, kernel_size=1, padding='SAME')
self.out_conv = nn.Conv2D( d_ch*2, 3, kernel_size=1, padding='SAME')
self.out_conv1 = nn.Conv2D( d_ch*2, 3, kernel_size=3, padding='SAME')
self.out_conv2 = nn.Conv2D( d_ch*2, 3, kernel_size=3, padding='SAME')
self.out_conv3 = nn.Conv2D( d_ch*2, 3, kernel_size=3, padding='SAME')
def forward(self, inp):
z = inp
x = self.upscale0(z)
x = self.res0(x)
x = self.upscale1(x)
x = self.res1(x)
x = self.upscale2(x)
x = self.res2(x)
x = self.upscale3(x)
x = self.res3(x)
x = tf.nn.sigmoid( nn.depth_to_space(tf.concat( (self.out_conv(x),
self.out_conv1(x),
self.out_conv2(x),
self.out_conv3(x)), nn.conv2d_ch_axis), 2) )
m = self.upscalem0(z)
m = self.upscalem1(m)
m = self.upscalem2(m)
m = self.upscalem3(m)
m = self.upscalem4(m)
m = tf.nn.sigmoid(self.out_convm(m))
return x, m
self.face_type = {'wf' : FaceType.WHOLE_FACE,
'head' : FaceType.HEAD}[ self.options['face_type'] ]
if 'eyes_prio' in self.options:
self.options.pop('eyes_prio')
eyes_mouth_prio = self.options['eyes_mouth_prio']
ae_dims = self.ae_dims = self.options['ae_dims']
e_dims = self.options['e_dims']
d_dims = self.options['d_dims']
d_mask_dims = self.options['d_mask_dims']
morph_factor = self.options['morph_factor']
pretrain = self.pretrain = self.options['pretrain']
if self.pretrain_just_disabled:
self.set_iter(0)
self.gan_power = gan_power = 0.0 if self.pretrain else self.options['gan_power']
random_warp = False if self.pretrain else self.options['random_warp']
random_src_flip = self.random_src_flip if not self.pretrain else True
random_dst_flip = self.random_dst_flip if not self.pretrain else True
if self.pretrain:
self.options_show_override['gan_power'] = 0.0
self.options_show_override['random_warp'] = False
self.options_show_override['lr_dropout'] = 'n'
self.options_show_override['uniform_yaw'] = True
masked_training = self.options['masked_training']
ct_mode = self.options['ct_mode']
if ct_mode == 'none':
ct_mode = None
models_opt_on_gpu = False if len(devices) == 0 else self.options['models_opt_on_gpu']
models_opt_device = nn.tf_default_device_name if models_opt_on_gpu and self.is_training else '/CPU:0'
optimizer_vars_on_cpu = models_opt_device=='/CPU:0'
input_ch=3
bgr_shape = self.bgr_shape = nn.get4Dshape(resolution,resolution,input_ch)
mask_shape = nn.get4Dshape(resolution,resolution,1)
self.model_filename_list = []
with tf.device ('/CPU:0'):
#Place holders on CPU
self.warped_src = tf.placeholder (nn.floatx, bgr_shape, name='warped_src')
self.warped_dst = tf.placeholder (nn.floatx, bgr_shape, name='warped_dst')
self.target_src = tf.placeholder (nn.floatx, bgr_shape, name='target_src')
self.target_dst = tf.placeholder (nn.floatx, bgr_shape, name='target_dst')
self.target_srcm = tf.placeholder (nn.floatx, mask_shape, name='target_srcm')
self.target_srcm_em = tf.placeholder (nn.floatx, mask_shape, name='target_srcm_em')
self.target_dstm = tf.placeholder (nn.floatx, mask_shape, name='target_dstm')
self.target_dstm_em = tf.placeholder (nn.floatx, mask_shape, name='target_dstm_em')
self.morph_value_t = tf.placeholder (nn.floatx, (1,), name='morph_value_t')
# Initializing model classes
with tf.device (models_opt_device):
self.encoder = Encoder(in_ch=input_ch, e_ch=e_dims, ae_ch=ae_dims, name='encoder')
self.inter_src = Inter(ae_ch=ae_dims, ae_out_ch=ae_dims, name='inter_src')
self.inter_dst = Inter(ae_ch=ae_dims, ae_out_ch=ae_dims, name='inter_dst')
self.decoder = Decoder(in_ch=ae_dims, d_ch=d_dims, d_mask_ch=d_mask_dims, name='decoder')
self.model_filename_list += [ [self.encoder, 'encoder.npy'],
[self.inter_src, 'inter_src.npy'],
[self.inter_dst , 'inter_dst.npy'],
[self.decoder , 'decoder.npy'] ]
if self.is_training:
if gan_power != 0:
self.GAN = nn.UNetPatchDiscriminator(patch_size=self.options['gan_patch_size'], in_ch=input_ch, base_ch=self.options['gan_dims'], name="GAN")
self.model_filename_list += [ [self.GAN, 'GAN.npy'] ]
# Initialize optimizers
lr=5e-5
lr_dropout = 0.3 if self.options['lr_dropout'] in ['y','cpu'] and not self.pretrain else 1.0
clipnorm = 1.0 if self.options['clipgrad'] else 0.0
self.all_weights = self.encoder.get_weights() + self.inter_src.get_weights() + self.inter_dst.get_weights() + self.decoder.get_weights()
if pretrain:
self.trainable_weights = self.encoder.get_weights() + self.inter_dst.get_weights() + self.decoder.get_weights()
else:
self.trainable_weights = self.encoder.get_weights() + self.inter_src.get_weights() + self.inter_dst.get_weights() + self.decoder.get_weights()
self.src_dst_opt = nn.AdaBelief(lr=lr, lr_dropout=lr_dropout, clipnorm=clipnorm, name='src_dst_opt')
self.src_dst_opt.initialize_variables (self.all_weights, vars_on_cpu=optimizer_vars_on_cpu, lr_dropout_on_cpu=self.options['lr_dropout']=='cpu')
self.model_filename_list += [ (self.src_dst_opt, 'src_dst_opt.npy') ]
if gan_power != 0:
self.GAN_opt = nn.AdaBelief(lr=lr, lr_dropout=lr_dropout, clipnorm=clipnorm, name='GAN_opt')
self.GAN_opt.initialize_variables ( self.GAN.get_weights(), vars_on_cpu=optimizer_vars_on_cpu, lr_dropout_on_cpu=self.options['lr_dropout']=='cpu')#+self.D_src_x2.get_weights()
self.model_filename_list += [ (self.GAN_opt, 'GAN_opt.npy') ]
if self.is_training:
# Adjust batch size for multiple GPU
gpu_count = max(1, len(devices) )
bs_per_gpu = max(1, self.get_batch_size() // gpu_count)
self.set_batch_size( gpu_count*bs_per_gpu)
# Compute losses per GPU
gpu_pred_src_src_list = []
gpu_pred_dst_dst_list = []
gpu_pred_src_dst_list = []
gpu_pred_src_srcm_list = []
gpu_pred_dst_dstm_list = []
gpu_pred_src_dstm_list = []
gpu_src_losses = []
gpu_dst_losses = []
gpu_G_loss_gvs = []
gpu_GAN_loss_gvs = []
gpu_D_code_loss_gvs = []
gpu_D_src_dst_loss_gvs = []
for gpu_id in range(gpu_count):
with tf.device( f'/{devices[gpu_id].tf_dev_type}:{gpu_id}' if len(devices) != 0 else f'/CPU:0' ):
with tf.device(f'/CPU:0'):
# slice on CPU, otherwise all batch data will be transfered to GPU first
batch_slice = slice( gpu_id*bs_per_gpu, (gpu_id+1)*bs_per_gpu )
gpu_warped_src = self.warped_src [batch_slice,:,:,:]
gpu_warped_dst = self.warped_dst [batch_slice,:,:,:]
gpu_target_src = self.target_src [batch_slice,:,:,:]
gpu_target_dst = self.target_dst [batch_slice,:,:,:]
gpu_target_srcm = self.target_srcm[batch_slice,:,:,:]
gpu_target_srcm_em = self.target_srcm_em[batch_slice,:,:,:]
gpu_target_dstm = self.target_dstm[batch_slice,:,:,:]
gpu_target_dstm_em = self.target_dstm_em[batch_slice,:,:,:]
# process model tensors
gpu_src_code = self.encoder (gpu_warped_src)
gpu_dst_code = self.encoder (gpu_warped_dst)
if pretrain:
gpu_src_inter_src_code = self.inter_src (gpu_src_code)
gpu_dst_inter_dst_code = self.inter_dst (gpu_dst_code)
gpu_src_code = gpu_src_inter_src_code * nn.random_binomial( [bs_per_gpu, gpu_src_inter_src_code.shape.as_list()[1], 1,1] , p=morph_factor)
gpu_dst_code = gpu_src_dst_code = gpu_dst_inter_dst_code * nn.random_binomial( [bs_per_gpu, gpu_dst_inter_dst_code.shape.as_list()[1], 1,1] , p=0.25)
else:
gpu_src_inter_src_code = self.inter_src (gpu_src_code)
gpu_src_inter_dst_code = self.inter_dst (gpu_src_code)
gpu_dst_inter_src_code = self.inter_src (gpu_dst_code)
gpu_dst_inter_dst_code = self.inter_dst (gpu_dst_code)
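# Training-time morph: each latent channel of the src code is randomly drawn
# from either the src inter or the dst inter (probability = morph factor),
# which presumably keeps the two inter codes interchangeable per channel.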
inter_rnd_binomial = nn.random_binomial( [bs_per_gpu, gpu_src_inter_src_code.shape.as_list()[1], 1,1] , p=morph_factor)
gpu_src_code = gpu_src_inter_src_code * inter_rnd_binomial + gpu_src_inter_dst_code * (1-inter_rnd_binomial)
gpu_dst_code = gpu_dst_inter_dst_code
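# Merge-time morph: the dst image's code is run through both inters; the first
# morph_value*ae_dims channels are taken from the inter_src output and the rest
# from inter_dst, so morph_value blends the result from dst-like (0.0) to src-like (1.0).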
ae_dims_slice = tf.cast(ae_dims*self.morph_value_t[0], tf.int32)
gpu_src_dst_code = tf.concat( (tf.slice(gpu_dst_inter_src_code, [0,0,0,0], [-1, ae_dims_slice , lowest_dense_res, lowest_dense_res]),
tf.slice(gpu_dst_inter_dst_code, [0,ae_dims_slice,0,0], [-1,ae_dims-ae_dims_slice, lowest_dense_res,lowest_dense_res]) ), 1 )
gpu_pred_src_src, gpu_pred_src_srcm = self.decoder(gpu_src_code)
gpu_pred_dst_dst, gpu_pred_dst_dstm = self.decoder(gpu_dst_code)
gpu_pred_src_dst, gpu_pred_src_dstm = self.decoder(gpu_src_dst_code)
gpu_pred_src_src_list.append(gpu_pred_src_src)
gpu_pred_dst_dst_list.append(gpu_pred_dst_dst)
gpu_pred_src_dst_list.append(gpu_pred_src_dst)
gpu_pred_src_srcm_list.append(gpu_pred_src_srcm)
gpu_pred_dst_dstm_list.append(gpu_pred_dst_dstm)
gpu_pred_src_dstm_list.append(gpu_pred_src_dstm)
gpu_target_srcm_blur = nn.gaussian_blur(gpu_target_srcm, max(1, resolution // 32) )
gpu_target_srcm_blur = tf.clip_by_value(gpu_target_srcm_blur, 0, 0.5) * 2
gpu_target_dstm_blur = nn.gaussian_blur(gpu_target_dstm, max(1, resolution // 32) )
gpu_target_dstm_blur = tf.clip_by_value(gpu_target_dstm_blur, 0, 0.5) * 2
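# Blurring then clip(0, 0.5)*2 saturates the mask interior to 1 while keeping
# a soft outer falloff, giving feathered masks for the masked loss terms below.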
gpu_target_dst_anti_masked = gpu_target_dst*(1.0-gpu_target_dstm_blur)
gpu_target_src_anti_masked = gpu_target_src*(1.0-gpu_target_srcm_blur)
gpu_target_src_masked_opt = gpu_target_src*gpu_target_srcm_blur if masked_training else gpu_target_src
gpu_target_dst_masked_opt = gpu_target_dst*gpu_target_dstm_blur if masked_training else gpu_target_dst
gpu_pred_src_src_masked_opt = gpu_pred_src_src*gpu_target_srcm_blur if masked_training else gpu_pred_src_src
gpu_pred_src_src_anti_masked = gpu_pred_src_src*(1.0-gpu_target_srcm_blur)
gpu_pred_dst_dst_masked_opt = gpu_pred_dst_dst*gpu_target_dstm_blur if masked_training else gpu_pred_dst_dst
gpu_pred_dst_dst_anti_masked = gpu_pred_dst_dst*(1.0-gpu_target_dstm_blur)
if self.options['loss_function'] == 'MS-SSIM':
gpu_dst_loss = 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution)(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0)
gpu_dst_loss += tf.reduce_mean ( 10*tf.square ( gpu_target_dst_masked_opt - gpu_pred_dst_dst_masked_opt ), axis=[1,2,3])
elif self.options['loss_function'] == 'MS-SSIM+L1':
gpu_dst_loss = 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution, use_l1=True)(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0)
else:
if resolution < 256:
gpu_dst_loss = tf.reduce_mean ( 10*nn.dssim(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0, filter_size=int(resolution/11.6) ), axis=[1])
else:
gpu_dst_loss = tf.reduce_mean ( 5*nn.dssim(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0, filter_size=int(resolution/11.6) ), axis=[1])
gpu_dst_loss += tf.reduce_mean ( 5*nn.dssim(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0, filter_size=int(resolution/23.2) ), axis=[1])
gpu_dst_loss += tf.reduce_mean ( 10*tf.square( gpu_target_dst_masked_opt- gpu_pred_dst_dst_masked_opt ), axis=[1,2,3])
if eyes_mouth_prio:
gpu_dst_loss += tf.reduce_mean ( 300*tf.abs ( gpu_target_dst*gpu_target_dstm_em - gpu_pred_dst_dst*gpu_target_dstm_em ), axis=[1,2,3])
gpu_dst_loss += tf.reduce_mean ( 10*tf.square( gpu_target_dstm - gpu_pred_dst_dstm ),axis=[1,2,3] )
gpu_dst_loss += 0.1*tf.reduce_mean(tf.square(gpu_pred_dst_dst_anti_masked-gpu_target_dst_anti_masked),axis=[1,2,3] )
if self.options['background_power'] > 0:
bg_factor = self.options['background_power']
if self.options['loss_function'] == 'MS-SSIM':
gpu_dst_loss += bg_factor * 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution)(gpu_target_dst, gpu_pred_dst_dst, max_val=1.0)
gpu_dst_loss += bg_factor * tf.reduce_mean ( 10*tf.square ( gpu_target_dst - gpu_pred_dst_dst ), axis=[1,2,3])
elif self.options['loss_function'] == 'MS-SSIM+L1':
gpu_dst_loss += bg_factor * 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution, use_l1=True)(gpu_target_dst, gpu_pred_dst_dst, max_val=1.0)
else:
if resolution < 256:
gpu_dst_loss += bg_factor * tf.reduce_mean ( 10*nn.dssim(gpu_target_dst, gpu_pred_dst_dst, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
else:
gpu_dst_loss += bg_factor * tf.reduce_mean ( 5*nn.dssim(gpu_target_dst, gpu_pred_dst_dst, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
gpu_dst_loss += bg_factor * tf.reduce_mean ( 5*nn.dssim(gpu_target_dst, gpu_pred_dst_dst, max_val=1.0, filter_size=int(resolution/23.2)), axis=[1])
gpu_dst_loss += bg_factor * tf.reduce_mean ( 10*tf.square ( gpu_target_dst - gpu_pred_dst_dst ), axis=[1,2,3])
gpu_dst_losses += [gpu_dst_loss]
if not pretrain:
if self.options['loss_function'] == 'MS-SSIM':
gpu_src_loss = 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution)(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0)
gpu_src_loss += tf.reduce_mean ( 10*tf.square ( gpu_target_src_masked_opt - gpu_pred_src_src_masked_opt ), axis=[1,2,3])
elif self.options['loss_function'] == 'MS-SSIM+L1':
gpu_src_loss = 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution, use_l1=True)(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0)
else:
if resolution < 256:
gpu_src_loss = tf.reduce_mean ( 10*nn.dssim(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
else:
gpu_src_loss = tf.reduce_mean ( 5*nn.dssim(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
gpu_src_loss += tf.reduce_mean ( 5*nn.dssim(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0, filter_size=int(resolution/23.2)), axis=[1])
gpu_src_loss += tf.reduce_mean ( 10*tf.square ( gpu_target_src_masked_opt - gpu_pred_src_src_masked_opt ), axis=[1,2,3])
if eyes_mouth_prio:
gpu_src_loss += tf.reduce_mean ( 300*tf.abs ( gpu_target_src*gpu_target_srcm_em - gpu_pred_src_src*gpu_target_srcm_em ), axis=[1,2,3])
gpu_src_loss += tf.reduce_mean ( 10*tf.square( gpu_target_srcm - gpu_pred_src_srcm ),axis=[1,2,3] )
if self.options['background_power'] > 0:
bg_factor = self.options['background_power']
if self.options['loss_function'] == 'MS-SSIM':
gpu_src_loss += bg_factor * 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution)(gpu_target_src, gpu_pred_src_src, max_val=1.0)
gpu_src_loss += bg_factor * tf.reduce_mean ( 10*tf.square ( gpu_target_src - gpu_pred_src_src ), axis=[1,2,3])
elif self.options['loss_function'] == 'MS-SSIM+L1':
gpu_src_loss += bg_factor * 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution, use_l1=True)(gpu_target_src, gpu_pred_src_src, max_val=1.0)
else:
if resolution < 256:
gpu_src_loss += bg_factor * tf.reduce_mean ( 10*nn.dssim(gpu_target_src, gpu_pred_src_src, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
else:
gpu_src_loss += bg_factor * tf.reduce_mean ( 5*nn.dssim(gpu_target_src, gpu_pred_src_src, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
gpu_src_loss += bg_factor * tf.reduce_mean ( 5*nn.dssim(gpu_target_src, gpu_pred_src_src, max_val=1.0, filter_size=int(resolution/23.2)), axis=[1])
gpu_src_loss += bg_factor * tf.reduce_mean ( 10*tf.square ( gpu_target_src - gpu_pred_src_src ), axis=[1,2,3])
else:
gpu_src_loss = gpu_dst_loss
gpu_src_losses += [gpu_src_loss]
if pretrain:
gpu_G_loss = gpu_dst_loss
else:
gpu_G_loss = gpu_src_loss + gpu_dst_loss
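# Standard sigmoid cross-entropy GAN losses: real patches are pushed toward
# all-ones label maps, generated patches toward all-zeros.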
def DLossOnes(logits):
return tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.ones_like(logits), logits=logits), axis=[1,2,3])
def DLossZeros(logits):
return tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.zeros_like(logits), logits=logits), axis=[1,2,3])
if gan_power != 0:
gpu_pred_src_src_d, gpu_pred_src_src_d2 = self.GAN(gpu_pred_src_src_masked_opt)
gpu_pred_dst_dst_d, gpu_pred_dst_dst_d2 = self.GAN(gpu_pred_dst_dst_masked_opt)
gpu_target_src_d, gpu_target_src_d2 = self.GAN(gpu_target_src_masked_opt)
gpu_target_dst_d, gpu_target_dst_d2 = self.GAN(gpu_target_dst_masked_opt)
gpu_D_src_dst_loss = (DLossOnes (gpu_target_src_d) + DLossOnes (gpu_target_src_d2) + \
DLossZeros(gpu_pred_src_src_d) + DLossZeros(gpu_pred_src_src_d2) + \
DLossOnes (gpu_target_dst_d) + DLossOnes (gpu_target_dst_d2) + \
DLossZeros(gpu_pred_dst_dst_d) + DLossZeros(gpu_pred_dst_dst_d2)
) * ( 1.0 / 8)
gpu_D_src_dst_loss_gvs += [ nn.gradients (gpu_D_src_dst_loss, self.GAN.get_weights() ) ]
gpu_G_loss += (DLossOnes(gpu_pred_src_src_d) + DLossOnes(gpu_pred_src_src_d2) + \
DLossOnes(gpu_pred_dst_dst_d) + DLossOnes(gpu_pred_dst_dst_d2)
) * gan_power
if masked_training:
# Minimal src-src-bg rec with total_variation_mse to suppress random bright dots from gan
gpu_G_loss += 0.000001*nn.total_variation_mse(gpu_pred_src_src)
gpu_G_loss += 0.02*tf.reduce_mean(tf.square(gpu_pred_src_src_anti_masked-gpu_target_src_anti_masked),axis=[1,2,3] )
gpu_G_loss_gvs += [ nn.gradients ( gpu_G_loss, self.trainable_weights ) ]
# Average losses and gradients, and create optimizer update ops
with tf.device(f'/CPU:0'):
pred_src_src = nn.concat(gpu_pred_src_src_list, 0)
pred_dst_dst = nn.concat(gpu_pred_dst_dst_list, 0)
pred_src_dst = nn.concat(gpu_pred_src_dst_list, 0)
pred_src_srcm = nn.concat(gpu_pred_src_srcm_list, 0)
pred_dst_dstm = nn.concat(gpu_pred_dst_dstm_list, 0)
pred_src_dstm = nn.concat(gpu_pred_src_dstm_list, 0)
with tf.device (models_opt_device):
src_loss = tf.concat(gpu_src_losses, 0)
dst_loss = tf.concat(gpu_dst_losses, 0)
src_dst_loss_gv_op = self.src_dst_opt.get_update_op (nn.average_gv_list (gpu_G_loss_gvs))
if gan_power != 0:
src_D_src_dst_loss_gv_op = self.GAN_opt.get_update_op (nn.average_gv_list(gpu_D_src_dst_loss_gvs) )
#GAN_loss_gv_op = self.src_dst_opt.get_update_op (nn.average_gv_list(gpu_GAN_loss_gvs) )
# Initializing training and view functions
def src_dst_train(warped_src, target_src, target_srcm, target_srcm_em, \
warped_dst, target_dst, target_dstm, target_dstm_em, ):
s, d, _ = nn.tf_sess.run ( [ src_loss, dst_loss, src_dst_loss_gv_op],
feed_dict={self.warped_src :warped_src,
self.target_src :target_src,
self.target_srcm:target_srcm,
self.target_srcm_em:target_srcm_em,
self.warped_dst :warped_dst,
self.target_dst :target_dst,
self.target_dstm:target_dstm,
self.target_dstm_em:target_dstm_em,
})
return s, d
self.src_dst_train = src_dst_train
if gan_power != 0:
def D_src_dst_train(warped_src, target_src, target_srcm, target_srcm_em, \
warped_dst, target_dst, target_dstm, target_dstm_em, ):
nn.tf_sess.run ([src_D_src_dst_loss_gv_op], feed_dict={self.warped_src :warped_src,
self.target_src :target_src,
self.target_srcm:target_srcm,
self.target_srcm_em:target_srcm_em,
self.warped_dst :warped_dst,
self.target_dst :target_dst,
self.target_dstm:target_dstm,
self.target_dstm_em:target_dstm_em})
self.D_src_dst_train = D_src_dst_train
def AE_view(warped_src, warped_dst, morph_value):
return nn.tf_sess.run ( [pred_src_src, pred_dst_dst, pred_dst_dstm, pred_src_dst, pred_src_dstm],
feed_dict={self.warped_src:warped_src, self.warped_dst:warped_dst, self.morph_value_t:[morph_value] })
self.AE_view = AE_view
else:
#Initializing merge function
with tf.device( nn.tf_default_device_name if len(devices) != 0 else f'/CPU:0'):
gpu_dst_code = self.encoder (self.warped_dst)
gpu_dst_inter_src_code = self.inter_src ( gpu_dst_code)
gpu_dst_inter_dst_code = self.inter_dst ( gpu_dst_code)
ae_dims_slice = tf.cast(ae_dims*self.morph_value_t[0], tf.int32)
gpu_src_dst_code = tf.concat( ( tf.slice(gpu_dst_inter_src_code, [0,0,0,0], [-1, ae_dims_slice , lowest_dense_res, lowest_dense_res]),
tf.slice(gpu_dst_inter_dst_code, [0,ae_dims_slice,0,0], [-1,ae_dims-ae_dims_slice, lowest_dense_res,lowest_dense_res]) ), 1 )
gpu_pred_src_dst, gpu_pred_src_dstm = self.decoder(gpu_src_dst_code)
_, gpu_pred_dst_dstm = self.decoder(gpu_dst_inter_dst_code)
def AE_merge(warped_dst, morph_value):
return nn.tf_sess.run ( [gpu_pred_src_dst, gpu_pred_dst_dstm, gpu_pred_src_dstm], feed_dict={self.warped_dst:warped_dst, self.morph_value_t:[morph_value] })
self.AE_merge = AE_merge
# Loading/initializing all models/optimizers weights
for model, filename in io.progress_bar_generator(self.model_filename_list, "Initializing models"):
if self.pretrain_just_disabled:
do_init = False
if model == self.inter_src or model == self.inter_dst:
do_init = True
else:
do_init = self.is_first_run()
if self.is_training and gan_power != 0 and model == self.GAN:
if self.gan_model_changed:
do_init = True
if not do_init:
do_init = not model.load_weights( self.get_strpath_storage_for_file(filename) )
if do_init:
model.init_weights()
###############
# initializing sample generators
if self.is_training:
training_data_src_path = self.training_data_src_path if not self.pretrain else self.get_pretraining_data_path()
training_data_dst_path = self.training_data_dst_path if not self.pretrain else self.get_pretraining_data_path()
random_ct_samples_path=training_data_dst_path if ct_mode is not None and not self.pretrain else None
cpu_count = min(multiprocessing.cpu_count(), 8)
src_generators_count = cpu_count // 2
dst_generators_count = cpu_count // 2
if ct_mode is not None:
src_generators_count = int(src_generators_count * 1.5)
fs_aug = None
if ct_mode == 'fs-aug':
fs_aug = 'fs-aug'
channel_type = SampleProcessor.ChannelType.LAB_RAND_TRANSFORM if self.options['random_color'] else SampleProcessor.ChannelType.BGR
self.set_training_data_generators ([
SampleGeneratorFace(training_data_src_path, random_ct_samples_path=random_ct_samples_path, debug=self.is_debug(), batch_size=self.get_batch_size(),
sample_process_options=SampleProcessor.Options(random_flip=random_src_flip),
output_sample_types = [ {'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':random_warp,
'random_downsample': self.options['random_downsample'],
'random_noise': self.options['random_noise'],
'random_blur': self.options['random_blur'],
'random_jpeg': self.options['random_jpeg'],
'transform':True, 'channel_type' : channel_type, 'ct_mode': ct_mode,
'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
{'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':False , 'transform':True, 'channel_type' : channel_type, 'ct_mode': ct_mode, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
{'sample_type': SampleProcessor.SampleType.FACE_MASK, 'warp':False , 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
{'sample_type': SampleProcessor.SampleType.FACE_MASK, 'warp':False , 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE_EYES, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
],
uniform_yaw_distribution=self.options['uniform_yaw'] or self.pretrain,
generators_count=src_generators_count ),
SampleGeneratorFace(training_data_dst_path, debug=self.is_debug(), batch_size=self.get_batch_size(),
sample_process_options=SampleProcessor.Options(random_flip=random_dst_flip),
output_sample_types = [ {'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':random_warp,
'random_downsample': self.options['random_downsample'],
'random_noise': self.options['random_noise'],
'random_blur': self.options['random_blur'],
'random_jpeg': self.options['random_jpeg'],
'transform':True, 'channel_type' : channel_type, 'ct_mode': fs_aug,
'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
{'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':False , 'transform':True, 'channel_type' : channel_type, 'ct_mode': fs_aug, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
{'sample_type': SampleProcessor.SampleType.FACE_MASK, 'warp':False , 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
{'sample_type': SampleProcessor.SampleType.FACE_MASK, 'warp':False , 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE_EYES, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
],
uniform_yaw_distribution=self.options['uniform_yaw'] or self.pretrain,
generators_count=dst_generators_count )
])
self.last_src_samples_loss = []
self.last_dst_samples_loss = []
if self.pretrain_just_disabled:
self.update_sample_for_preview(force_new=True)
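# Freezes the current graph into a .pb file with named input/output tensors
# (in_face, morph_value -> out_celeb_face etc.), e.g. for external merging tools.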
def dump_ckpt(self):
tf = nn.tf
with tf.device (nn.tf_default_device_name):
warped_dst = tf.placeholder (nn.floatx, (None, self.resolution, self.resolution, 3), name='in_face')
warped_dst = tf.transpose(warped_dst, (0,3,1,2))
morph_value = tf.placeholder (nn.floatx, (1,), name='morph_value')
gpu_dst_code = self.encoder (warped_dst)
gpu_dst_inter_src_code = self.inter_src ( gpu_dst_code)
gpu_dst_inter_dst_code = self.inter_dst ( gpu_dst_code)
ae_dims_slice = tf.cast(self.ae_dims*morph_value[0], tf.int32)
gpu_src_dst_code = tf.concat( (tf.slice(gpu_dst_inter_src_code, [0,0,0,0], [-1, ae_dims_slice , self.lowest_dense_res, self.lowest_dense_res]),
tf.slice(gpu_dst_inter_dst_code, [0,ae_dims_slice,0,0], [-1,self.ae_dims-ae_dims_slice, self.lowest_dense_res,self.lowest_dense_res]) ), 1 )
gpu_pred_src_dst, gpu_pred_src_dstm = self.decoder(gpu_src_dst_code)
_, gpu_pred_dst_dstm = self.decoder(gpu_dst_inter_dst_code)
gpu_pred_src_dst = tf.transpose(gpu_pred_src_dst, (0,2,3,1))
gpu_pred_dst_dstm = tf.transpose(gpu_pred_dst_dstm, (0,2,3,1))
gpu_pred_src_dstm = tf.transpose(gpu_pred_src_dstm, (0,2,3,1))
tf.identity(gpu_pred_dst_dstm, name='out_face_mask')
tf.identity(gpu_pred_src_dst, name='out_celeb_face')
tf.identity(gpu_pred_src_dstm, name='out_celeb_face_mask')
output_graph_def = tf.graph_util.convert_variables_to_constants(
nn.tf_sess,
tf.get_default_graph().as_graph_def(),
['out_face_mask','out_celeb_face','out_celeb_face_mask']
)
pb_filepath = self.get_strpath_storage_for_file('.pb')
with tf.gfile.GFile(pb_filepath, "wb") as f:
f.write(output_graph_def.SerializeToString())
#override
def get_model_filename_list(self):
return self.model_filename_list
#override
def onSave(self):
for model, filename in io.progress_bar_generator(self.get_model_filename_list(), "Saving", leave=False):
model.save_weights ( self.get_strpath_storage_for_file(filename) )
#override
def should_save_preview_history(self):
return (not io.is_colab() and self.iter % ( 10*(max(1,self.resolution // 64)) ) == 0) or \
(io.is_colab() and self.iter % 100 == 0)
    #override
    def onTrainOneIter(self):
        bs = self.get_batch_size()

        ( (warped_src, target_src, target_srcm, target_srcm_em), \
          (warped_dst, target_dst, target_dstm, target_dstm_em) ) = self.generate_next_samples()

        src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)

        for i in range(bs):
            self.last_src_samples_loss.append ( (target_src[i], target_srcm[i], target_srcm_em[i], src_loss[i] ) )
            self.last_dst_samples_loss.append ( (target_dst[i], target_dstm[i], target_dstm_em[i], dst_loss[i] ) )

        if len(self.last_src_samples_loss) >= bs*16:
            src_samples_loss = sorted(self.last_src_samples_loss, key=operator.itemgetter(3), reverse=True)
            dst_samples_loss = sorted(self.last_dst_samples_loss, key=operator.itemgetter(3), reverse=True)

            target_src     = np.stack( [ x[0] for x in src_samples_loss[:bs] ] )
            target_srcm    = np.stack( [ x[1] for x in src_samples_loss[:bs] ] )
            target_srcm_em = np.stack( [ x[2] for x in src_samples_loss[:bs] ] )

            target_dst     = np.stack( [ x[0] for x in dst_samples_loss[:bs] ] )
            target_dstm    = np.stack( [ x[1] for x in dst_samples_loss[:bs] ] )
            target_dstm_em = np.stack( [ x[2] for x in dst_samples_loss[:bs] ] )

            src_loss, dst_loss = self.src_dst_train (target_src, target_src, target_srcm, target_srcm_em, target_dst, target_dst, target_dstm, target_dstm_em)
            self.last_src_samples_loss = []
            self.last_dst_samples_loss = []

        if self.gan_power != 0:
            self.D_src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)

        return ( ('src_loss', np.mean(src_loss) ), ('dst_loss', np.mean(dst_loss) ), )
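Note: the method above implements a simple hard-example mining loop: per-sample losses are buffered for 16 batches, then one extra training step is run on the `bs` worst samples (the second `src_dst_train` call feeds the unwarped targets as input). A standalone sketch of just the selection step, with illustrative names:

```python
# Standalone sketch of the hard-example selection above: buffer per-sample
# losses over several batches, then rebuild one batch from the worst ones.
# Function and variable names are illustrative, not the model's API.
import numpy as np

def pick_hardest(samples_with_loss, batch_size):
    # samples_with_loss: list of (image, mask, em_mask, loss) tuples
    ordered = sorted(samples_with_loss, key=lambda t: t[3], reverse=True)
    worst = ordered[:batch_size]
    images   = np.stack([t[0] for t in worst])
    masks    = np.stack([t[1] for t in worst])
    em_masks = np.stack([t[2] for t in worst])
    return images, masks, em_masks
```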
    #override
    def onGetPreview(self, samples):
        ( (warped_src, target_src, target_srcm, target_srcm_em),
          (warped_dst, target_dst, target_dstm, target_dstm_em) ) = samples

        S, D, SS, DD, DDM_000, _, _ = [ np.clip( nn.to_data_format(x,"NHWC", self.model_data_format), 0.0, 1.0) for x in ([target_src,target_dst] + self.AE_view (target_src, target_dst, 0.0) ) ]

        _, _, DDM_025, SD_025, SDM_025 = [ np.clip( nn.to_data_format(x,"NHWC", self.model_data_format), 0.0, 1.0) for x in self.AE_view (target_src, target_dst, 0.25) ]
        _, _, DDM_050, SD_050, SDM_050 = [ np.clip( nn.to_data_format(x,"NHWC", self.model_data_format), 0.0, 1.0) for x in self.AE_view (target_src, target_dst, 0.50) ]
        _, _, DDM_065, SD_065, SDM_065 = [ np.clip( nn.to_data_format(x,"NHWC", self.model_data_format), 0.0, 1.0) for x in self.AE_view (target_src, target_dst, 0.65) ]
        _, _, DDM_075, SD_075, SDM_075 = [ np.clip( nn.to_data_format(x,"NHWC", self.model_data_format), 0.0, 1.0) for x in self.AE_view (target_src, target_dst, 0.75) ]
        _, _, DDM_100, SD_100, SDM_100 = [ np.clip( nn.to_data_format(x,"NHWC", self.model_data_format), 0.0, 1.0) for x in self.AE_view (target_src, target_dst, 1.00) ]

        (DDM_000,
         DDM_025, SDM_025,
         DDM_050, SDM_050,
         DDM_065, SDM_065,
         DDM_075, SDM_075,
         DDM_100, SDM_100) = [ np.repeat (x, (3,), -1) for x in (DDM_000,
                                                                 DDM_025, SDM_025,
                                                                 DDM_050, SDM_050,
                                                                 DDM_065, SDM_065,
                                                                 DDM_075, SDM_075,
                                                                 DDM_100, SDM_100) ]

        target_srcm, target_dstm = [ nn.to_data_format(x,"NHWC", self.model_data_format) for x in ([target_srcm, target_dstm] )]

        n_samples = min(4, self.get_batch_size(), 800 // self.resolution )

        result = []

        i = np.random.randint(n_samples)

        st = [ np.concatenate ((S[i], D[i], DD[i]*DDM_000[i]), axis=1) ]
        st += [ np.concatenate ((SS[i], DD[i], SD_075[i] ), axis=1) ]
        result += [ ('AMP morph 0.75', np.concatenate (st, axis=0 )), ]

        st = [ np.concatenate ((DD[i], SD_025[i], SD_050[i]), axis=1) ]
        st += [ np.concatenate ((SD_065[i], SD_075[i], SD_100[i]), axis=1) ]
        result += [ ('AMP morph list', np.concatenate (st, axis=0 )), ]

        st = [ np.concatenate ((DD[i], SD_025[i]*DDM_025[i]*SDM_025[i], SD_050[i]*DDM_050[i]*SDM_050[i]), axis=1) ]
        st += [ np.concatenate ((SD_065[i]*DDM_065[i]*SDM_065[i], SD_075[i]*DDM_075[i]*SDM_075[i], SD_100[i]*DDM_100[i]*SDM_100[i]), axis=1) ]
        result += [ ('AMP morph list masked', np.concatenate (st, axis=0 )), ]

        return result
    def predictor_func (self, face, morph_value):
        face = nn.to_data_format(face[None,...], self.model_data_format, "NHWC")

        bgr, mask_dst_dstm, mask_src_dstm = [ nn.to_data_format(x, "NHWC", self.model_data_format).astype(np.float32) for x in self.AE_merge (face, morph_value) ]

        return bgr[0], mask_src_dstm[0][...,0], mask_dst_dstm[0][...,0]

    #override
    def get_MergerConfig(self):
        morph_factor = np.clip ( io.input_number ("Morph factor", 0.75, add_info="0.0 .. 1.0"), 0.0, 1.0 )

        def predictor_morph(face):
            return self.predictor_func(face, morph_factor)

        import merger
        return predictor_morph, (self.options['resolution'], self.options['resolution'], 3), merger.MergerConfigMasked(face_type=self.face_type, default_mode='overlay')
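Note: the merger only ever calls a single-argument predictor, so the morph factor chosen here is baked in via a closure. The same pattern in isolation (names are illustrative):

```python
# Illustrative sketch of the closure pattern used by get_MergerConfig above:
def make_morph_predictor(predictor_func, morph_factor):
    def predictor_morph(face):
        return predictor_func(face, morph_factor)
    return predictor_morph

# predict = make_morph_predictor(model.predictor_func, 0.75)
# bgr, mask_src_dstm, mask_dst_dstm = predict(face)   # face: HWC float32 in [0,1]
```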
Model = AMPModel
View file
@@ -0,0 +1 @@
+from .Model import Model
View file
@@ -31,7 +31,7 @@ class QModel(ModelBase):
         masked_training = True

         models_opt_on_gpu = len(devices) >= 1 and all([dev.total_mem_gb >= 4 for dev in devices])
-        models_opt_device = '/GPU:0' if models_opt_on_gpu and self.is_training else '/CPU:0'
+        models_opt_device = nn.tf_default_device_name if models_opt_on_gpu and self.is_training else '/CPU:0'
         optimizer_vars_on_cpu = models_opt_device=='/CPU:0'

         input_ch = 3
@@ -96,7 +96,7 @@ class QModel(ModelBase):
             gpu_src_dst_loss_gvs = []

             for gpu_id in range(gpu_count):
-                with tf.device( f'/GPU:{gpu_id}' if len(devices) != 0 else f'/CPU:0' ):
+                with tf.device( f'/{devices[gpu_id].tf_dev_type}:{gpu_id}' if len(devices) != 0 else f'/CPU:0' ):
                     batch_slice = slice( gpu_id*bs_per_gpu, (gpu_id+1)*bs_per_gpu )
                     with tf.device(f'/CPU:0'):
                         # slice on CPU, otherwise all batch data will be transfered to GPU first
@@ -190,7 +190,7 @@ class QModel(ModelBase):
             self.AE_view = AE_view
         else:
             # Initializing merge function
-            with tf.device( f'/GPU:0' if len(devices) != 0 else f'/CPU:0'):
+            with tf.device( nn.tf_default_device_name if len(devices) != 0 else f'/CPU:0'):
                 gpu_dst_code = self.inter(self.encoder(self.warped_dst))
                 gpu_pred_src_dst, gpu_pred_src_dstm = self.decoder_src(gpu_dst_code)
                 _, gpu_pred_dst_dstm = self.decoder_dst(gpu_dst_code)
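Note: the recurring change in these hunks replaces hard-coded `'/GPU:0'` / `f'/GPU:{gpu_id}'` strings with `nn.tf_default_device_name` and the per-device `tf_dev_type`, presumably so backends whose TF device type is not `GPU` (e.g. DirectML's `DML`) still resolve to a valid device. A toy illustration (this `Device` class is hypothetical):

```python
# Toy illustration: build the TF device string from the device descriptor
# instead of assuming 'GPU'. The Device class here is hypothetical.
class Device:
    def __init__(self, index, tf_dev_type):
        self.index = index
        self.tf_dev_type = tf_dev_type        # e.g. 'GPU', 'DML'

devices = [Device(0, 'DML'), Device(1, 'DML')]
dev_strings = [f'/{d.tf_dev_type}:{d.index}' for d in devices]
print(dev_strings)                            # ['/DML:0', '/DML:1']
```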
View file
@@ -53,20 +53,32 @@ class SAEHDModel(ModelBase):
         lr_dropout = {True:'y', False:'n'}.get(lr_dropout, lr_dropout) #backward comp
         default_lr_dropout         = self.options['lr_dropout'] = lr_dropout

+        default_loss_function      = self.options['loss_function'] = self.load_or_def_option('loss_function', 'SSIM')
+
         default_random_warp        = self.options['random_warp'] = self.load_or_def_option('random_warp', True)
+        default_random_downsample  = self.options['random_downsample'] = self.load_or_def_option('random_downsample', False)
+        default_random_noise       = self.options['random_noise'] = self.load_or_def_option('random_noise', False)
+        default_random_blur        = self.options['random_blur'] = self.load_or_def_option('random_blur', False)
+        default_random_jpeg        = self.options['random_jpeg'] = self.load_or_def_option('random_jpeg', False)
+        default_background_power   = self.options['background_power'] = self.load_or_def_option('background_power', 0.0)
         default_true_face_power    = self.options['true_face_power'] = self.load_or_def_option('true_face_power', 0.0)
         default_face_style_power   = self.options['face_style_power'] = self.load_or_def_option('face_style_power', 0.0)
         default_bg_style_power     = self.options['bg_style_power'] = self.load_or_def_option('bg_style_power', 0.0)
         default_ct_mode            = self.options['ct_mode'] = self.load_or_def_option('ct_mode', 'none')
+        default_random_color       = self.options['random_color'] = self.load_or_def_option('random_color', False)
         default_clipgrad           = self.options['clipgrad'] = self.load_or_def_option('clipgrad', False)
         default_pretrain           = self.options['pretrain'] = self.load_or_def_option('pretrain', False)

         ask_override = self.ask_override()
         if self.is_first_run() or ask_override:
+            self.ask_session_name()
             self.ask_autobackup_hour()
+            self.ask_maximum_n_backups()
             self.ask_write_preview_history()
             self.ask_target_iter()
-            self.ask_random_flip()
+            self.ask_random_src_flip()
+            self.ask_random_dst_flip()
             self.ask_batch_size(suggest_batch_size)

         if self.is_first_run():
@@ -136,9 +148,12 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
             self.options['uniform_yaw'] = io.input_bool ("Uniform yaw distribution of samples", default_uniform_yaw, help_message='Helps to fix blurry side faces due to small amount of them in the faceset.')

+        default_gan_version        = self.options['gan_version'] = self.load_or_def_option('gan_version', 2)
         default_gan_power          = self.options['gan_power'] = self.load_or_def_option('gan_power', 0.0)
         default_gan_patch_size     = self.options['gan_patch_size'] = self.load_or_def_option('gan_patch_size', self.options['resolution'] // 8)
         default_gan_dims           = self.options['gan_dims'] = self.load_or_def_option('gan_dims', 16)
+        default_gan_smoothing      = self.options['gan_smoothing'] = self.load_or_def_option('gan_smoothing', 0.1)
+        default_gan_noise          = self.options['gan_noise'] = self.load_or_def_option('gan_noise', 0.0)

         if self.is_first_run() or ask_override:
             self.options['models_opt_on_gpu'] = io.input_bool ("Place models and optimizer on GPU", default_models_opt_on_gpu, help_message="When you train on one GPU, by default model and optimizer weights are placed on GPU to accelerate the process. You can place they on CPU to free up extra VRAM, thus set bigger dimensions.")
@@ -147,29 +162,49 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
             self.options['lr_dropout'] = io.input_str (f"Use learning rate dropout", default_lr_dropout, ['n','y','cpu'], help_message="When the face is trained enough, you can enable this option to get extra sharpness and reduce subpixel shake for less amount of iterations. Enabled it before `disable random warp` and before GAN. \nn - disabled.\ny - enabled\ncpu - enabled on CPU. This allows not to use extra VRAM, sacrificing 20% time of iteration.")

+            self.options['loss_function'] = io.input_str(f"Loss function", default_loss_function, ['SSIM', 'MS-SSIM', 'MS-SSIM+L1'], help_message="Change loss function used for image quality assessment.")
+
             self.options['random_warp'] = io.input_bool ("Enable random warp of samples", default_random_warp, help_message="Random warp is required to generalize facial expressions of both faces. When the face is trained enough, you can disable it to get extra sharpness and reduce subpixel shake for less amount of iterations.")
-            self.options['gan_power'] = np.clip ( io.input_number ("GAN power", default_gan_power, add_info="0.0 .. 1.0", help_message="Forces the neural network to learn small details of the face. Enable it only when the face is trained enough with lr_dropout(on) and random_warp(off), and don't disable. The higher the value, the higher the chances of artifacts. Typical fine value is 0.1"), 0.0, 1.0 )
+            self.options['random_downsample'] = io.input_bool("Enable random downsample of samples", default_random_downsample, help_message="")
+            self.options['random_noise'] = io.input_bool("Enable random noise added to samples", default_random_noise, help_message="")
+            self.options['random_blur'] = io.input_bool("Enable random blur of samples", default_random_blur, help_message="")
+            self.options['random_jpeg'] = io.input_bool("Enable random jpeg compression of samples", default_random_jpeg, help_message="")
+
+            self.options['gan_version'] = np.clip (io.input_int("GAN version", default_gan_version, add_info="2 or 3", help_message="Choose GAN version (v2: 7/16/2020, v3: 1/3/2021):"), 2, 3)
+
+            if self.options['gan_version'] == 2:
+                self.options['gan_power'] = np.clip ( io.input_number ("GAN power", default_gan_power, add_info="0.0 .. 10.0", help_message="Train the network in Generative Adversarial manner. Forces the neural network to learn small details of the face. Enable it only when the face is trained enough and don't disable. Typical value is 0.1"), 0.0, 10.0 )
+            else:
+                self.options['gan_power'] = np.clip ( io.input_number ("GAN power", default_gan_power, add_info="0.0 .. 1.0", help_message="Forces the neural network to learn small details of the face. Enable it only when the face is trained enough with lr_dropout(on) and random_warp(off), and don't disable. The higher the value, the higher the chances of artifacts. Typical fine value is 0.1"), 0.0, 1.0 )

             if self.options['gan_power'] != 0.0:
-                gan_patch_size = np.clip ( io.input_int("GAN patch size", default_gan_patch_size, add_info="3-640", help_message="The higher patch size, the higher the quality, the more VRAM is required. You can get sharper edges even at the lowest setting. Typical fine value is resolution / 8." ), 3, 640 )
-                self.options['gan_patch_size'] = gan_patch_size
-
-                gan_dims = np.clip ( io.input_int("GAN dimensions", default_gan_dims, add_info="4-64", help_message="The dimensions of the GAN network. The higher dimensions, the more VRAM is required. You can get sharper edges even at the lowest setting. Typical fine value is 16." ), 4, 64 )
-                self.options['gan_dims'] = gan_dims
+                if self.options['gan_version'] == 3:
+                    gan_patch_size = np.clip ( io.input_int("GAN patch size", default_gan_patch_size, add_info="3-640", help_message="The higher patch size, the higher the quality, the more VRAM is required. You can get sharper edges even at the lowest setting. Typical fine value is resolution / 8." ), 3, 640 )
+                    self.options['gan_patch_size'] = gan_patch_size
+
+                    gan_dims = np.clip ( io.input_int("GAN dimensions", default_gan_dims, add_info="4-64", help_message="The dimensions of the GAN network. The higher dimensions, the more VRAM is required. You can get sharper edges even at the lowest setting. Typical fine value is 16." ), 4, 64 )
+                    self.options['gan_dims'] = gan_dims
+
+                self.options['gan_smoothing'] = np.clip ( io.input_number("GAN label smoothing", default_gan_smoothing, add_info="0 - 0.5", help_message="Uses soft labels with values slightly off from 0/1 for GAN, has a regularizing effect"), 0, 0.5)
+                self.options['gan_noise'] = np.clip ( io.input_number("GAN noisy labels", default_gan_noise, add_info="0 - 0.5", help_message="Marks some images with the wrong label, helps prevent collapse"), 0, 0.5)

             if 'df' in self.options['archi']:
                 self.options['true_face_power'] = np.clip ( io.input_number ("'True face' power.", default_true_face_power, add_info="0.0000 .. 1.0", help_message="Experimental option. Discriminates result face to be more like src face. Higher value - stronger discrimination. Typical value is 0.01 . Comparison - https://i.imgur.com/czScS9q.png"), 0.0, 1.0 )
             else:
                 self.options['true_face_power'] = 0.0

+            self.options['background_power'] = np.clip ( io.input_number("Background power", default_background_power, add_info="0.0..1.0", help_message="Learn the area outside of the mask. Helps smooth out area near the mask boundaries. Can be used at any time"), 0.0, 1.0 )
+
             self.options['face_style_power'] = np.clip ( io.input_number("Face style power", default_face_style_power, add_info="0.0..100.0", help_message="Learn the color of the predicted face to be the same as dst inside mask. If you want to use this option with 'whole_face' you have to use XSeg trained mask. Warning: Enable it only after 10k iters, when predicted face is clear enough to start learn style. Start from 0.001 value and check history changes. Enabling this option increases the chance of model collapse."), 0.0, 100.0 )
             self.options['bg_style_power'] = np.clip ( io.input_number("Background style power", default_bg_style_power, add_info="0.0..100.0", help_message="Learn the area outside mask of the predicted face to be the same as dst. If you want to use this option with 'whole_face' you have to use XSeg trained mask. For whole_face you have to use XSeg trained mask. This can make face more like dst. Enabling this option increases the chance of model collapse. Typical value is 2.0"), 0.0, 100.0 )
             self.options['ct_mode'] = io.input_str (f"Color transfer for src faceset", default_ct_mode, ['none','rct','lct','mkl','idt','sot', 'fs-aug'], help_message="Change color distribution of src samples close to dst samples. Try all modes to find the best. FS aug adds random color to dst and src")
+            self.options['random_color'] = io.input_bool ("Random color", default_random_color, help_message="Samples are randomly rotated around the L axis in LAB colorspace, helps generalize training")
             self.options['clipgrad'] = io.input_bool ("Enable gradient clipping", default_clipgrad, help_message="Gradient clipping reduces chance of model collapse, sacrificing speed of training.")
-            self.options['pretrain'] = io.input_bool ("Enable pretraining mode", default_pretrain, help_message="Pretrain the model with large amount of various faces. After that, model can be used to train the fakes more quickly.")
+            self.options['pretrain'] = io.input_bool ("Enable pretraining mode", default_pretrain, help_message="Pretrain the model with large amount of various faces. After that, model can be used to train the fakes more quickly. Forces random_warp=N, random_flips=Y, gan_power=0.0, lr_dropout=N, styles=0.0, uniform_yaw=Y")

         if self.options['pretrain'] and self.get_pretraining_data_path() is None:
             raise Exception("pretraining_data_path is not defined")
@@ -204,6 +239,8 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
         elif len(archi_split) == 1:
             archi_type, archi_opts = archi_split[0], None

+        self.archi_type = archi_type
+
         ae_dims = self.options['ae_dims']
         e_dims = self.options['e_dims']
         d_dims = self.options['d_dims']
@@ -216,6 +253,8 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
         self.gan_power = gan_power = 0.0 if self.pretrain else self.options['gan_power']
         random_warp = False if self.pretrain else self.options['random_warp']
+        random_src_flip = self.random_src_flip if not self.pretrain else True
+        random_dst_flip = self.random_dst_flip if not self.pretrain else True

         if self.pretrain:
             self.options_show_override['gan_power'] = 0.0
@@ -230,27 +269,28 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
         if ct_mode == 'none':
             ct_mode = None

         models_opt_on_gpu = False if len(devices) == 0 else self.options['models_opt_on_gpu']
-        models_opt_device = '/GPU:0' if models_opt_on_gpu and self.is_training else '/CPU:0'
+        models_opt_device = nn.tf_default_device_name if models_opt_on_gpu and self.is_training else '/CPU:0'
         optimizer_vars_on_cpu = models_opt_device=='/CPU:0'

         input_ch=3
-        bgr_shape = nn.get4Dshape(resolution,resolution,input_ch)
+        bgr_shape = self.bgr_shape = nn.get4Dshape(resolution,resolution,input_ch)
         mask_shape = nn.get4Dshape(resolution,resolution,1)
         self.model_filename_list = []

         with tf.device ('/CPU:0'):
             #Place holders on CPU
-            self.warped_src = tf.placeholder (nn.floatx, bgr_shape)
-            self.warped_dst = tf.placeholder (nn.floatx, bgr_shape)
-            self.target_src = tf.placeholder (nn.floatx, bgr_shape)
-            self.target_dst = tf.placeholder (nn.floatx, bgr_shape)
-            self.target_srcm = tf.placeholder (nn.floatx, mask_shape)
-            self.target_srcm_em = tf.placeholder (nn.floatx, mask_shape)
-            self.target_dstm = tf.placeholder (nn.floatx, mask_shape)
-            self.target_dstm_em = tf.placeholder (nn.floatx, mask_shape)
+            self.warped_src = tf.placeholder (nn.floatx, bgr_shape, name='warped_src')
+            self.warped_dst = tf.placeholder (nn.floatx, bgr_shape, name='warped_dst')
+            self.target_src = tf.placeholder (nn.floatx, bgr_shape, name='target_src')
+            self.target_dst = tf.placeholder (nn.floatx, bgr_shape, name='target_dst')
+            self.target_srcm = tf.placeholder (nn.floatx, mask_shape, name='target_srcm')
+            self.target_srcm_em = tf.placeholder (nn.floatx, mask_shape, name='target_srcm_em')
+            self.target_dstm = tf.placeholder (nn.floatx, mask_shape, name='target_dstm')
+            self.target_dstm_em = tf.placeholder (nn.floatx, mask_shape, name='target_dstm_em')

         # Initializing model classes
         model_archi = nn.DeepFakeArchi(resolution, opts=archi_opts)
@@ -294,8 +334,12 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
         if self.is_training:
             if gan_power != 0:
-                self.D_src = nn.UNetPatchDiscriminator(patch_size=self.options['gan_patch_size'], in_ch=input_ch, base_ch=self.options['gan_dims'], name="D_src")
-                self.model_filename_list += [ [self.D_src, 'GAN.npy'] ]
+                if self.options['gan_version'] == 2:
+                    self.D_src = nn.UNetPatchDiscriminatorV2(patch_size=resolution//16, in_ch=input_ch, name="D_src")
+                    self.model_filename_list += [ [self.D_src, 'D_src_v2.npy'] ]
+                else:
+                    self.D_src = nn.UNetPatchDiscriminator(patch_size=self.options['gan_patch_size'], in_ch=input_ch, base_ch=self.options['gan_dims'], name="D_src")
+                    self.model_filename_list += [ [self.D_src, 'GAN.npy'] ]

             # Initialize optimizers
             lr=5e-5
@@ -320,9 +364,14 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                 self.model_filename_list += [ (self.D_code_opt, 'D_code_opt.npy') ]

             if gan_power != 0:
-                self.D_src_dst_opt = OptimizerClass(lr=lr, lr_dropout=lr_dropout, clipnorm=clipnorm, name='GAN_opt')
-                self.D_src_dst_opt.initialize_variables ( self.D_src.get_weights(), vars_on_cpu=optimizer_vars_on_cpu, lr_dropout_on_cpu=self.options['lr_dropout']=='cpu')#+self.D_src_x2.get_weights()
-                self.model_filename_list += [ (self.D_src_dst_opt, 'GAN_opt.npy') ]
+                if self.options['gan_version'] == 2:
+                    self.D_src_dst_opt = OptimizerClass(lr=lr, lr_dropout=lr_dropout, clipnorm=clipnorm, name='D_src_dst_opt')
+                    self.D_src_dst_opt.initialize_variables ( self.D_src.get_weights(), vars_on_cpu=optimizer_vars_on_cpu, lr_dropout_on_cpu=self.options['lr_dropout']=='cpu')#+self.D_src_x2.get_weights()
+                    self.model_filename_list += [ (self.D_src_dst_opt, 'D_src_v2_opt.npy') ]
+                else:
+                    self.D_src_dst_opt = OptimizerClass(lr=lr, lr_dropout=lr_dropout, clipnorm=clipnorm, name='GAN_opt')
+                    self.D_src_dst_opt.initialize_variables ( self.D_src.get_weights(), vars_on_cpu=optimizer_vars_on_cpu, lr_dropout_on_cpu=self.options['lr_dropout']=='cpu')#+self.D_src_x2.get_weights()
+                    self.model_filename_list += [ (self.D_src_dst_opt, 'GAN_opt.npy') ]

         if self.is_training:
             # Adjust batch size for multiple GPU
@@ -330,7 +379,6 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
             bs_per_gpu = max(1, self.get_batch_size() // gpu_count)
             self.set_batch_size( gpu_count*bs_per_gpu)
-
             # Compute losses per GPU
             gpu_pred_src_src_list = []
             gpu_pred_dst_dst_list = []
@@ -344,9 +392,9 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
             gpu_G_loss_gvs = []
             gpu_D_code_loss_gvs = []
             gpu_D_src_dst_loss_gvs = []
-            for gpu_id in range(gpu_count):
-                with tf.device( f'/GPU:{gpu_id}' if len(devices) != 0 else f'/CPU:0' ):
+            for gpu_id in range(gpu_count):
+                with tf.device( f'/{devices[gpu_id].tf_dev_type}:{gpu_id}' if len(devices) != 0 else f'/CPU:0' ):
                     with tf.device(f'/CPU:0'):
                         # slice on CPU, otherwise all batch data will be transfered to GPU first
                         batch_slice = slice( gpu_id*bs_per_gpu, (gpu_id+1)*bs_per_gpu )
@@ -355,9 +403,9 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                         gpu_target_src = self.target_src [batch_slice,:,:,:]
                         gpu_target_dst = self.target_dst [batch_slice,:,:,:]
                         gpu_target_srcm_all = self.target_srcm[batch_slice,:,:,:]
                         gpu_target_srcm_em = self.target_srcm_em[batch_slice,:,:,:]
                         gpu_target_dstm_all = self.target_dstm[batch_slice,:,:,:]
                         gpu_target_dstm_em = self.target_dstm_em[batch_slice,:,:,:]

                     # process model tensors
                     if 'df' in archi_type:
@@ -421,12 +469,18 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                     gpu_psd_target_dst_style_masked = gpu_pred_src_dst*gpu_target_dstm_style_blur
                     gpu_psd_target_dst_style_anti_masked = gpu_pred_src_dst*(1.0 - gpu_target_dstm_style_blur)

-                    if resolution < 256:
-                        gpu_src_loss = tf.reduce_mean ( 10*nn.dssim(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
+                    if self.options['loss_function'] == 'MS-SSIM':
+                        gpu_src_loss = 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution)(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0)
+                        gpu_src_loss += tf.reduce_mean ( 10*tf.square ( gpu_target_src_masked_opt - gpu_pred_src_src_masked_opt ), axis=[1,2,3])
+                    elif self.options['loss_function'] == 'MS-SSIM+L1':
+                        gpu_src_loss = 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution, use_l1=True)(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0)
                     else:
-                        gpu_src_loss = tf.reduce_mean ( 5*nn.dssim(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
-                        gpu_src_loss += tf.reduce_mean ( 5*nn.dssim(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0, filter_size=int(resolution/23.2)), axis=[1])
-                    gpu_src_loss += tf.reduce_mean ( 10*tf.square ( gpu_target_src_masked_opt - gpu_pred_src_src_masked_opt ), axis=[1,2,3])
+                        if resolution < 256:
+                            gpu_src_loss = tf.reduce_mean ( 10*nn.dssim(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
+                        else:
+                            gpu_src_loss = tf.reduce_mean ( 5*nn.dssim(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
+                            gpu_src_loss += tf.reduce_mean ( 5*nn.dssim(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0, filter_size=int(resolution/23.2)), axis=[1])
+                        gpu_src_loss += tf.reduce_mean ( 10*tf.square ( gpu_target_src_masked_opt - gpu_pred_src_src_masked_opt ), axis=[1,2,3])

                     if eyes_prio or mouth_prio:
                         if eyes_prio and mouth_prio:
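Note: `nn.MsSsim` is this fork's multi-scale SSIM layer; `use_l1=True` mixes in an L1 term as in "Loss Functions for Image Restoration with Neural Networks" (Zhao et al.). A rough approximation with TensorFlow's built-in `tf.image.ssim_multiscale` — this sketches the idea, not the fork's exact implementation; NHWC layout and `alpha=0.84` (the paper's value) are assumptions:

```python
# Rough sketch of an MS-SSIM(+L1) image loss via tf.image.ssim_multiscale;
# an approximation of the idea, not this fork's nn.MsSsim layer.
import tensorflow as tf

def ms_ssim_l1_loss(y_true, y_pred, max_val=1.0, alpha=0.84):
    # default 5 power factors require inputs of roughly >= 176x176
    ms_ssim_term = 1.0 - tf.image.ssim_multiscale(y_true, y_pred, max_val=max_val)  # shape (N,)
    l1_term = tf.reduce_mean(tf.abs(y_true - y_pred), axis=[1, 2, 3])
    return alpha * ms_ssim_term + (1.0 - alpha) * l1_term
```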
@@ -440,6 +494,22 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                     gpu_src_loss += tf.reduce_mean ( 10*tf.square( gpu_target_srcm - gpu_pred_src_srcm ),axis=[1,2,3] )

+                    if self.options['background_power'] > 0:
+                        bg_factor = self.options['background_power']
+
+                        if self.options['loss_function'] == 'MS-SSIM':
+                            gpu_src_loss += bg_factor * 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution)(gpu_target_src, gpu_pred_src_src, max_val=1.0)
+                            gpu_src_loss += bg_factor * tf.reduce_mean ( 10*tf.square ( gpu_target_src - gpu_pred_src_src ), axis=[1,2,3])
+                        elif self.options['loss_function'] == 'MS-SSIM+L1':
+                            gpu_src_loss += bg_factor * 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution, use_l1=True)(gpu_target_src, gpu_pred_src_src, max_val=1.0)
+                        else:
+                            if resolution < 256:
+                                gpu_src_loss += bg_factor * tf.reduce_mean ( 10*nn.dssim(gpu_target_src, gpu_pred_src_src, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
+                            else:
+                                gpu_src_loss += bg_factor * tf.reduce_mean ( 5*nn.dssim(gpu_target_src, gpu_pred_src_src, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
+                                gpu_src_loss += bg_factor * tf.reduce_mean ( 5*nn.dssim(gpu_target_src, gpu_pred_src_src, max_val=1.0, filter_size=int(resolution/23.2)), axis=[1])
+                            gpu_src_loss += bg_factor * tf.reduce_mean ( 10*tf.square ( gpu_target_src - gpu_pred_src_src ), axis=[1,2,3])
+
                     face_style_power = self.options['face_style_power'] / 100.0
                     if face_style_power != 0 and not self.pretrain:
                         gpu_src_loss += nn.style_loss(gpu_psd_target_dst_style_masked, gpu_target_dst_style_masked, gaussian_blur_radius=resolution//16, loss_weight=10000*face_style_power)
@@ -449,12 +519,18 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                         gpu_src_loss += tf.reduce_mean( (10*bg_style_power)*nn.dssim( gpu_psd_target_dst_style_anti_masked, gpu_target_dst_style_anti_masked, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
                         gpu_src_loss += tf.reduce_mean( (10*bg_style_power)*tf.square(gpu_psd_target_dst_style_anti_masked - gpu_target_dst_style_anti_masked), axis=[1,2,3] )

-                    if resolution < 256:
-                        gpu_dst_loss = tf.reduce_mean ( 10*nn.dssim(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0, filter_size=int(resolution/11.6) ), axis=[1])
+                    if self.options['loss_function'] == 'MS-SSIM':
+                        gpu_dst_loss = 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution)(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0)
+                        gpu_dst_loss += tf.reduce_mean ( 10*tf.square( gpu_target_dst_masked_opt - gpu_pred_dst_dst_masked_opt ), axis=[1,2,3])
+                    elif self.options['loss_function'] == 'MS-SSIM+L1':
+                        gpu_dst_loss = 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution, use_l1=True)(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0)
                     else:
-                        gpu_dst_loss = tf.reduce_mean ( 5*nn.dssim(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0, filter_size=int(resolution/11.6) ), axis=[1])
-                        gpu_dst_loss += tf.reduce_mean ( 5*nn.dssim(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0, filter_size=int(resolution/23.2) ), axis=[1])
-                    gpu_dst_loss += tf.reduce_mean ( 10*tf.square( gpu_target_dst_masked_opt- gpu_pred_dst_dst_masked_opt ), axis=[1,2,3])
+                        if resolution < 256:
+                            gpu_dst_loss = tf.reduce_mean ( 10*nn.dssim(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0, filter_size=int(resolution/11.6) ), axis=[1])
+                        else:
+                            gpu_dst_loss = tf.reduce_mean ( 5*nn.dssim(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0, filter_size=int(resolution/11.6) ), axis=[1])
+                            gpu_dst_loss += tf.reduce_mean ( 5*nn.dssim(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0, filter_size=int(resolution/23.2) ), axis=[1])
+                        gpu_dst_loss += tf.reduce_mean ( 10*tf.square( gpu_target_dst_masked_opt- gpu_pred_dst_dst_masked_opt ), axis=[1,2,3])

                     if eyes_prio or mouth_prio:
@@ -467,6 +543,22 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                             gpu_dst_loss += tf.reduce_mean ( 300*tf.abs ( gpu_target_dst*gpu_target_part_mask - gpu_pred_dst_dst*gpu_target_part_mask ), axis=[1,2,3])

+                    if self.options['background_power'] > 0:
+                        bg_factor = self.options['background_power']
+
+                        if self.options['loss_function'] == 'MS-SSIM':
+                            gpu_dst_loss += bg_factor * 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution)(gpu_target_dst, gpu_pred_dst_dst, max_val=1.0)
+                            gpu_dst_loss += bg_factor * tf.reduce_mean ( 10*tf.square ( gpu_target_dst - gpu_pred_dst_dst ), axis=[1,2,3])
+                        elif self.options['loss_function'] == 'MS-SSIM+L1':
+                            gpu_dst_loss += bg_factor * 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution, use_l1=True)(gpu_target_dst, gpu_pred_dst_dst, max_val=1.0)
+                        else:
+                            if resolution < 256:
+                                gpu_dst_loss += bg_factor * tf.reduce_mean ( 10*nn.dssim(gpu_target_dst, gpu_pred_dst_dst, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
+                            else:
+                                gpu_dst_loss += bg_factor * tf.reduce_mean ( 5*nn.dssim(gpu_target_dst, gpu_pred_dst_dst, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
+                                gpu_dst_loss += bg_factor * tf.reduce_mean ( 5*nn.dssim(gpu_target_dst, gpu_pred_dst_dst, max_val=1.0, filter_size=int(resolution/23.2)), axis=[1])
+                            gpu_dst_loss += bg_factor * tf.reduce_mean ( 10*tf.square ( gpu_target_dst - gpu_pred_dst_dst ), axis=[1,2,3])
+
                     gpu_dst_loss += tf.reduce_mean ( 10*tf.square( gpu_target_dstm - gpu_pred_dst_dstm ),axis=[1,2,3] )

                     gpu_src_losses += [gpu_src_loss]
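Note: "background power" re-applies the same image loss over the full, unmasked tensors and adds it scaled by `bg_factor`, so the region outside the mask is also learned. The structure, reduced to a sketch with a plain L2 loss (all names here are illustrative):

```python
# Structure of the "background power" term, reduced to a plain L2 sketch:
# the primary loss sees only the masked face; a bg_factor-weighted copy of
# the same loss over the full image also teaches the background.
import numpy as np

def total_loss(loss_fn, target, pred, mask, bg_factor):
    face_term = loss_fn(target * mask, pred * mask)   # masked, primary term
    bg_term   = loss_fn(target, pred)                 # whole image, incl. background
    return face_term + bg_factor * bg_term

l2 = lambda a, b: np.mean((a - b) ** 2)
t = np.random.rand(2, 128, 128, 3)
p = np.random.rand(2, 128, 128, 3)
m = (np.random.rand(2, 128, 128, 1) > 0.5).astype(np.float32)
print(total_loss(l2, t, p, m, bg_factor=0.3))
```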
@@ -495,22 +587,37 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                         gpu_pred_src_src_d, \
                         gpu_pred_src_src_d2 = self.D_src(gpu_pred_src_src_masked_opt)

-                        gpu_pred_src_src_d_ones  = tf.ones_like (gpu_pred_src_src_d)
-                        gpu_pred_src_src_d_zeros = tf.zeros_like(gpu_pred_src_src_d)
-
-                        gpu_pred_src_src_d2_ones  = tf.ones_like (gpu_pred_src_src_d2)
-                        gpu_pred_src_src_d2_zeros = tf.zeros_like(gpu_pred_src_src_d2)
-
-                        gpu_target_src_d, \
-                        gpu_target_src_d2 = self.D_src(gpu_target_src_masked_opt)
-
-                        gpu_target_src_d_ones  = tf.ones_like(gpu_target_src_d)
-                        gpu_target_src_d2_ones = tf.ones_like(gpu_target_src_d2)
-
-                        gpu_D_src_dst_loss = (DLoss(gpu_target_src_d_ones   , gpu_target_src_d) + \
-                                              DLoss(gpu_pred_src_src_d_zeros, gpu_pred_src_src_d) ) * 0.5 + \
-                                             (DLoss(gpu_target_src_d2_ones   , gpu_target_src_d2) + \
-                                              DLoss(gpu_pred_src_src_d2_zeros, gpu_pred_src_src_d2) ) * 0.5
+                        def get_smooth_noisy_labels(label, tensor, smoothing=0.1, noise=0.05):
+                            num_labels = self.batch_size
+                            for d in tensor.get_shape().as_list()[1:]:
+                                num_labels *= d
+
+                            probs = tf.math.log([[noise, 1-noise]]) if label == 1 else tf.math.log([[1-noise, noise]])
+                            x = tf.random.categorical(probs, num_labels)
+                            x = tf.cast(x, tf.float32)
+                            x = tf.math.scalar_mul(1-smoothing, x)
+                            # x = x + (smoothing/num_labels)
+                            x = tf.reshape(x, (self.batch_size,) + tuple(tensor.get_shape().as_list()[1:]))
+                            return x
+
+                        smoothing = self.options['gan_smoothing']
+                        noise = self.options['gan_noise']
+
+                        gpu_pred_src_src_d_ones  = tf.ones_like(gpu_pred_src_src_d)
+                        gpu_pred_src_src_d2_ones = tf.ones_like(gpu_pred_src_src_d2)
+
+                        gpu_pred_src_src_d_smooth_zeros  = get_smooth_noisy_labels(0, gpu_pred_src_src_d, smoothing=smoothing, noise=noise)
+                        gpu_pred_src_src_d2_smooth_zeros = get_smooth_noisy_labels(0, gpu_pred_src_src_d2, smoothing=smoothing, noise=noise)
+
+                        gpu_target_src_d, gpu_target_src_d2 = self.D_src(gpu_target_src_masked_opt)
+
+                        gpu_target_src_d_smooth_ones  = get_smooth_noisy_labels(1, gpu_target_src_d, smoothing=smoothing, noise=noise)
+                        gpu_target_src_d2_smooth_ones = get_smooth_noisy_labels(1, gpu_target_src_d2, smoothing=smoothing, noise=noise)
+
+                        gpu_D_src_dst_loss = DLoss(gpu_target_src_d_smooth_ones, gpu_target_src_d) \
+                                             + DLoss(gpu_pred_src_src_d_smooth_zeros, gpu_pred_src_src_d) \
+                                             + DLoss(gpu_target_src_d2_smooth_ones, gpu_target_src_d2) \
+                                             + DLoss(gpu_pred_src_src_d2_smooth_zeros, gpu_pred_src_src_d2)

                         gpu_D_src_dst_loss_gvs += [ nn.gradients (gpu_D_src_dst_loss, self.D_src.get_weights() ) ]#+self.D_src_x2.get_weights()
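Note: `get_smooth_noisy_labels` flips each discriminator target with probability `noise` (via `tf.random.categorical`) and softens the remaining 1s to `1 - smoothing`, regularizing the discriminator and helping prevent collapse. The same idea in isolated numpy form (a sketch, not the graph code above):

```python
# Isolated numpy sketch of the smooth/noisy GAN labels above: each target
# is flipped with probability `noise`, then scaled by (1 - smoothing), so
# "real" targets become ~0.9 with a few wrong labels mixed in.
import numpy as np

def smooth_noisy_labels(label, shape, smoothing=0.1, noise=0.05):
    flip = np.random.rand(*shape) < noise           # wrong label with prob. noise
    base = np.full(shape, float(label), np.float32)
    labels = np.where(flip, 1.0 - base, base)
    return labels * (1.0 - smoothing)               # soften: 1.0 -> 0.9

real_targets = smooth_noisy_labels(1, (4, 8, 8, 1))
fake_targets = smooth_noisy_labels(0, (4, 8, 8, 1))
```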
@@ -583,13 +690,13 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
             def AE_view(warped_src, warped_dst):
-                return nn.tf_sess.run ( [pred_src_src, pred_dst_dst, pred_dst_dstm, pred_src_dst, pred_src_dstm],
+                return nn.tf_sess.run ( [pred_src_src, pred_src_srcm, pred_dst_dst, pred_dst_dstm, pred_src_dst, pred_src_dstm],
                                         feed_dict={self.warped_src:warped_src,
                                                    self.warped_dst:warped_dst})

             self.AE_view = AE_view
         else:
             # Initializing merge function
-            with tf.device( f'/GPU:0' if len(devices) != 0 else f'/CPU:0'):
+            with tf.device( nn.tf_default_device_name if len(devices) != 0 else f'/CPU:0'):
                 if 'df' in archi_type:
                     gpu_dst_code = self.inter(self.encoder(self.warped_dst))
                     gpu_pred_src_dst, gpu_pred_src_dstm = self.decoder_src(gpu_dst_code)
@@ -633,6 +740,9 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
             if do_init:
                 model.init_weights()

+        ###############
+
         # initializing sample generators
         if self.is_training:
             training_data_src_path = self.training_data_src_path if not self.pretrain else self.get_pretraining_data_path()
@@ -650,11 +760,19 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
             if ct_mode == 'fs-aug':
                 fs_aug = 'fs-aug'

+            channel_type = SampleProcessor.ChannelType.LAB_RAND_TRANSFORM if self.options['random_color'] else SampleProcessor.ChannelType.BGR
+
             self.set_training_data_generators ([
                     SampleGeneratorFace(training_data_src_path, random_ct_samples_path=random_ct_samples_path, debug=self.is_debug(), batch_size=self.get_batch_size(),
-                        sample_process_options=SampleProcessor.Options(random_flip=self.random_flip),
-                        output_sample_types = [ {'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':random_warp, 'transform':True, 'channel_type' : SampleProcessor.ChannelType.BGR, 'ct_mode': ct_mode, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
-                                                {'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':False , 'transform':True, 'channel_type' : SampleProcessor.ChannelType.BGR, 'ct_mode': ct_mode, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
+                        sample_process_options=SampleProcessor.Options(random_flip=random_src_flip),
+                        output_sample_types = [ {'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':random_warp,
+                                                 'random_downsample': self.options['random_downsample'],
+                                                 'random_noise': self.options['random_noise'],
+                                                 'random_blur': self.options['random_blur'],
+                                                 'random_jpeg': self.options['random_jpeg'],
+                                                 'transform':True, 'channel_type' : channel_type, 'ct_mode': ct_mode,
+                                                 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
+                                                {'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':False , 'transform':True, 'channel_type' : channel_type, 'ct_mode': ct_mode, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
                                                 {'sample_type': SampleProcessor.SampleType.FACE_MASK, 'warp':False , 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
                                                 {'sample_type': SampleProcessor.SampleType.FACE_MASK, 'warp':False , 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE_EYES, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
                                               ],
@@ -662,9 +780,15 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                         generators_count=src_generators_count ),

                    SampleGeneratorFace(training_data_dst_path, debug=self.is_debug(), batch_size=self.get_batch_size(),
-                        sample_process_options=SampleProcessor.Options(random_flip=self.random_flip),
-                        output_sample_types = [ {'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':random_warp, 'transform':True, 'channel_type' : SampleProcessor.ChannelType.BGR, 'ct_mode': fs_aug, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
-                                                {'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':False , 'transform':True, 'channel_type' : SampleProcessor.ChannelType.BGR, 'ct_mode': fs_aug, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
+                        sample_process_options=SampleProcessor.Options(random_flip=random_dst_flip),
+                        output_sample_types = [ {'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':random_warp,
+                                                 'random_downsample': self.options['random_downsample'],
+                                                 'random_noise': self.options['random_noise'],
+                                                 'random_blur': self.options['random_blur'],
+                                                 'random_jpeg': self.options['random_jpeg'],
+                                                 'transform':True, 'channel_type' : channel_type, 'ct_mode': fs_aug,
+                                                 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
+                                                {'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':False , 'transform':True, 'channel_type' : channel_type, 'ct_mode': fs_aug, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
                                                 {'sample_type': SampleProcessor.SampleType.FACE_MASK, 'warp':False , 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
                                                 {'sample_type': SampleProcessor.SampleType.FACE_MASK, 'warp':False , 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE_EYES, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
@@ -678,6 +802,43 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
         if self.pretrain_just_disabled:
             self.update_sample_for_preview(force_new=True)

+    def dump_ckpt(self):
+        tf = nn.tf
+
+        with tf.device ('/CPU:0'):
+            warped_dst = tf.placeholder (nn.floatx, (None, self.resolution, self.resolution, 3), name='in_face')
+            warped_dst = tf.transpose(warped_dst, (0,3,1,2))
+
+            if 'df' in self.archi_type:
+                gpu_dst_code = self.inter(self.encoder(warped_dst))
+                gpu_pred_src_dst, gpu_pred_src_dstm = self.decoder_src(gpu_dst_code)
+                _, gpu_pred_dst_dstm = self.decoder_dst(gpu_dst_code)
+
+            elif 'liae' in self.archi_type:
+                gpu_dst_code = self.encoder (warped_dst)
+                gpu_dst_inter_B_code = self.inter_B (gpu_dst_code)
+                gpu_dst_inter_AB_code = self.inter_AB (gpu_dst_code)
+                gpu_dst_code = tf.concat([gpu_dst_inter_B_code,gpu_dst_inter_AB_code], nn.conv2d_ch_axis)
+                gpu_src_dst_code = tf.concat([gpu_dst_inter_AB_code,gpu_dst_inter_AB_code], nn.conv2d_ch_axis)
+
+                gpu_pred_src_dst, gpu_pred_src_dstm = self.decoder(gpu_src_dst_code)
+                _, gpu_pred_dst_dstm = self.decoder(gpu_dst_code)
+
+            gpu_pred_src_dst = tf.transpose(gpu_pred_src_dst, (0,2,3,1))
+            gpu_pred_dst_dstm = tf.transpose(gpu_pred_dst_dstm, (0,2,3,1))
+            gpu_pred_src_dstm = tf.transpose(gpu_pred_src_dstm, (0,2,3,1))
+
+        saver = tf.train.Saver()
+        tf.identity(gpu_pred_dst_dstm, name='out_face_mask')
+        tf.identity(gpu_pred_src_dst, name='out_celeb_face')
+        tf.identity(gpu_pred_src_dstm, name='out_celeb_face_mask')
+
+        saver.save(nn.tf_sess, self.get_strpath_storage_for_file('.ckpt') )
+
     #override
     def get_model_filename_list(self):
         return self.model_filename_list
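Note: unlike the AMP `dump_ckpt` earlier, which writes a frozen `.pb`, this SAEHD variant saves a TF checkpoint with `tf.train.Saver`. Restoring it later would look roughly like the sketch below; the path is illustrative, and `Saver.save` also writes the `.meta` graph used here:

```python
# Sketch: restore the checkpoint written by dump_ckpt above.
# 'SAEHD.ckpt' is an illustrative path; tf.train.Saver stores weights plus
# a .meta graph that import_meta_graph uses to rebuild the graph first.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

sess = tf.Session()
saver = tf.train.import_meta_graph('SAEHD.ckpt.meta')
saver.restore(sess, 'SAEHD.ckpt')

graph = tf.get_default_graph()
in_face  = graph.get_tensor_by_name('in_face:0')
out_face = graph.get_tensor_by_name('out_celeb_face:0')
```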
@@ -737,8 +898,9 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
         ( (warped_src, target_src, target_srcm, target_srcm_em),
           (warped_dst, target_dst, target_dstm, target_dstm_em) ) = samples

-        S, D, SS, DD, DDM, SD, SDM = [ np.clip( nn.to_data_format(x,"NHWC", self.model_data_format), 0.0, 1.0) for x in ([target_src,target_dst] + self.AE_view (target_src, target_dst) ) ]
-        DDM, SDM, = [ np.repeat (x, (3,), -1) for x in [DDM, SDM] ]
+        S, D, SS, SSM, DD, DDM, SD, SDM = [ np.clip( nn.to_data_format(x,"NHWC", self.model_data_format), 0.0, 1.0) for x in ([target_src,target_dst] + self.AE_view (target_src, target_dst) ) ]
+        SW, DW = [ np.clip( nn.to_data_format(x,"NHWC", self.model_data_format), 0.0, 1.0) for x in ([warped_src,warped_dst]) ]
+        SSM, DDM, SDM, = [ np.repeat (x, (3,), -1) for x in [SSM, DDM, SDM] ]

         target_srcm, target_dstm = [ nn.to_data_format(x,"NHWC", self.model_data_format) for x in ([target_srcm, target_dstm] )]
@@ -753,12 +915,17 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                 st.append ( np.concatenate ( ar, axis=1) )
             result += [ ('SAEHD', np.concatenate (st, axis=0 )), ]

+            wt = []
+            for i in range(n_samples):
+                ar = SW[i], SS[i], DW[i], DD[i], SD[i]
+                wt.append ( np.concatenate ( ar, axis=1) )
+            result += [ ('SAEHD warped', np.concatenate (wt, axis=0 )), ]
+
             st_m = []
             for i in range(n_samples):
                 SD_mask = DDM[i]*SDM[i] if self.face_type < FaceType.HEAD else SDM[i]
-                ar = S[i]*target_srcm[i], SS[i], D[i]*target_dstm[i], DD[i]*DDM[i], SD[i]*SD_mask
+                ar = S[i]*target_srcm[i], SS[i]*SSM[i], D[i]*target_dstm[i], DD[i]*DDM[i], SD[i]*SD_mask
                 st_m.append ( np.concatenate ( ar, axis=1) )

             result += [ ('SAEHD masked', np.concatenate (st_m, axis=0 )), ]
@@ -783,10 +950,27 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                 st.append ( np.concatenate ( ar, axis=1) )
             result += [ ('SAEHD pred', np.concatenate (st, axis=0 )), ]

+            wt = []
+            for i in range(n_samples):
+                ar = SW[i], SS[i]
+                wt.append ( np.concatenate ( ar, axis=1) )
+            result += [ ('SAEHD warped src-src', np.concatenate (wt, axis=0 )), ]
+
+            wt = []
+            for i in range(n_samples):
+                ar = DW[i], DD[i]
+                wt.append ( np.concatenate ( ar, axis=1) )
+            result += [ ('SAEHD warped dst-dst', np.concatenate (wt, axis=0 )), ]
+
+            wt = []
+            for i in range(n_samples):
+                ar = DW[i], SD[i]
+                wt.append ( np.concatenate ( ar, axis=1) )
+            result += [ ('SAEHD warped pred', np.concatenate (wt, axis=0 )), ]
+
             st_m = []
             for i in range(n_samples):
-                ar = S[i]*target_srcm[i], SS[i]
+                ar = S[i]*target_srcm[i], SS[i]*SSM[i]
                 st_m.append ( np.concatenate ( ar, axis=1) )

             result += [ ('SAEHD masked src-src', np.concatenate (st_m, axis=0 )), ]
View file
@@ -52,7 +52,7 @@ class XSegModel(ModelBase):
                                          'head' : FaceType.HEAD}[ self.options['face_type'] ]

         place_model_on_cpu = len(devices) == 0
-        models_opt_device = '/CPU:0' if place_model_on_cpu else '/GPU:0'
+        models_opt_device = '/CPU:0' if place_model_on_cpu else nn.tf_default_device_name

         bgr_shape = nn.get4Dshape(resolution,resolution,3)
         mask_shape = nn.get4Dshape(resolution,resolution,1)
@@ -83,7 +83,7 @@ class XSegModel(ModelBase):
             for gpu_id in range(gpu_count):
-                with tf.device( f'/GPU:{gpu_id}' if len(devices) != 0 else f'/CPU:0' ):
+                with tf.device(f'/{devices[gpu_id].tf_dev_type}:{gpu_id}' if len(devices) != 0 else f'/CPU:0' ):
                     with tf.device(f'/CPU:0'):
                         # slice on CPU, otherwise all batch data will be transfered to GPU first
                         batch_slice = slice( gpu_id*bs_per_gpu, (gpu_id+1)*bs_per_gpu )
@@ -95,6 +95,7 @@ class XSegModel(ModelBase):
                         gpu_pred_list.append(gpu_pred_t)

                         gpu_loss = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(labels=gpu_target_t, logits=gpu_pred_logits_t), axis=[1,2,3])
                         gpu_losses += [gpu_loss]
+
                         gpu_loss_gvs += [ nn.gradients ( gpu_loss, self.model.get_weights() ) ]
View file
@@ -1,10 +1,12 @@
 tqdm
 numpy==1.19.3
-h5py==2.9.0
+h5py==2.10.0
 opencv-python==4.1.0.25
 ffmpeg-python==0.1.17
 scikit-image==0.14.2
 scipy==1.4.1
 colorama
-tensorflow-gpu==2.3.1
+tensorflow-gpu==2.4.0
 pyqt5
+Flask==1.1.1
+flask-socketio==4.2.1
View file
@@ -85,16 +85,17 @@ class PackedFaceset():
         of.seek(0,2)
         of.close()

-        for filename in io.progress_bar_generator(image_paths, "Deleting files"):
-            Path(filename).unlink()
-
-        if as_person_faceset:
-            for dir_name in io.progress_bar_generator(dir_names, "Deleting dirs"):
-                dir_path = samples_path / dir_name
-                try:
-                    shutil.rmtree(dir_path)
-                except:
-                    io.log_info (f"unable to remove: {dir_path} ")
+        if io.input_bool(f"Delete original files?", True):
+            for filename in io.progress_bar_generator(image_paths, "Deleting files"):
+                Path(filename).unlink()
+
+            if as_person_faceset:
+                for dir_name in io.progress_bar_generator(dir_names, "Deleting dirs"):
+                    dir_path = samples_path / dir_name
+                    try:
+                        shutil.rmtree(dir_path)
+                    except:
+                        io.log_info (f"unable to remove: {dir_path} ")

     @staticmethod
     def unpack(samples_path):
View file
@@ -6,7 +6,7 @@ from enum import IntEnum
 import cv2
 import numpy as np
-from pathlib import Path
+
 from core import imagelib, mplib, pathex
 from core.imagelib import sd
 from core.cv2ex import *
@ -40,11 +40,11 @@ class SampleGeneratorFaceXSeg(SampleGeneratorBase):
else: else:
self.generators_count = max(1, generators_count) self.generators_count = max(1, generators_count)
args = (samples, seg_sample_idxs, resolution, face_type, data_format)
if self.debug: if self.debug:
self.generators = [ThisThreadGenerator ( self.batch_func, (samples, seg_sample_idxs, resolution, face_type, data_format) )] self.generators = [ThisThreadGenerator ( self.batch_func, args )]
else: else:
self.generators = [SubprocessGenerator ( self.batch_func, (samples, seg_sample_idxs, resolution, face_type, data_format), start_now=False ) \ self.generators = [SubprocessGenerator ( self.batch_func, args, start_now=False ) for i in range(self.generators_count) ]
for i in range(self.generators_count) ]
SubprocessGenerator.start_in_parallel( self.generators ) SubprocessGenerator.start_in_parallel( self.generators )
@@ -77,8 +77,10 @@ class SampleGeneratorFaceXSeg(SampleGeneratorBase):
         ty_range=[-0.05, 0.05]

         random_bilinear_resize_chance, random_bilinear_resize_max_size_per = 25,75
+        sharpen_chance, sharpen_kernel_max_size = 25, 5
         motion_blur_chance, motion_blur_mb_max_size = 25, 5
         gaussian_blur_chance, gaussian_blur_kernel_max_size = 25, 5
+        random_jpeg_compress_chance = 25

         def gen_img_mask(sample):
             img = sample.load_bgr()
@@ -121,7 +123,6 @@ class SampleGeneratorFaceXSeg(SampleGeneratorBase):
                 img, mask = gen_img_mask(sample)

                 if np.random.randint(2) == 0:
                     if len(bg_shuffle_idxs) == 0:
                         bg_shuffle_idxs = seg_sample_idxs.copy()
                         np.random.shuffle(bg_shuffle_idxs)
@@ -130,14 +131,20 @@ class SampleGeneratorFaceXSeg(SampleGeneratorBase):
                     bg_img, bg_mask = gen_img_mask(bg_sample)

                     bg_wp = imagelib.gen_warp_params(resolution, True, rotation_range=[-180,180], scale_range=[-0.10, 0.10], tx_range=[-0.10, 0.10], ty_range=[-0.10, 0.10] )
-                    bg_img = imagelib.warp_by_params (bg_wp, bg_img, can_warp=False, can_transform=True, can_flip=True, border_replicate=False)
+                    bg_img = imagelib.warp_by_params (bg_wp, bg_img, can_warp=False, can_transform=True, can_flip=True, border_replicate=True)
                     bg_mask = imagelib.warp_by_params (bg_wp, bg_mask, can_warp=False, can_transform=True, can_flip=True, border_replicate=False)
+                    bg_img = bg_img*(1-bg_mask)
+                    if np.random.randint(2) == 0:
+                        bg_img = imagelib.apply_random_hsv_shift(bg_img)
+                    else:
+                        bg_img = imagelib.apply_random_rgb_levels(bg_img)

-                    c_mask = (1-bg_mask) * (1-mask)
-                    img = img*(1-c_mask) + bg_img * c_mask
+                    c_mask = 1.0 - (1-bg_mask) * (1-mask)
+                    rnd = 0.15 + np.random.uniform()*0.85
+                    img = img*(c_mask) + img*(1-c_mask)*rnd + bg_img*(1-c_mask)*(1-rnd)

                 warp_params = imagelib.gen_warp_params(resolution, random_flip, rotation_range=rotation_range, scale_range=scale_range, tx_range=tx_range, ty_range=ty_range )
-                img = imagelib.warp_by_params (warp_params, img, can_warp=True, can_transform=True, can_flip=True, border_replicate=False)
+                img = imagelib.warp_by_params (warp_params, img, can_warp=True, can_transform=True, can_flip=True, border_replicate=True)
                 mask = imagelib.warp_by_params (warp_params, mask, can_warp=True, can_transform=True, can_flip=True, border_replicate=False)

                 img = np.clip(img.astype(np.float32), 0, 1)
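Reviewer note: the `c_mask` change above is worth unpacking. It now denotes the union of the face mask and the background sample's mask (rather than the complement of that union), and the region outside the union becomes a random blend of the original pixels and the background instead of being replaced outright. A minimal standalone sketch of my reading of the new blend:

```python
import numpy as np

def composite_random_bg(img, bg_img, mask, bg_mask, rng=np.random):
    # union of the two masks is kept untouched (c_mask in the diff)
    union = 1.0 - (1.0 - bg_mask) * (1.0 - mask)
    # outside the union, keep a random 15-100% share of the original pixels
    rnd = 0.15 + rng.uniform() * 0.85
    outside = img * rnd + bg_img * (1.0 - rnd)
    return img * union + outside * (1.0 - union)
```

Algebraically this matches the new line `img*(c_mask) + img*(1-c_mask)*rnd + bg_img*(1-c_mask)*(1-rnd)` exactly.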
@@ -145,14 +152,36 @@ class SampleGeneratorFaceXSeg(SampleGeneratorBase):
                 mask[mask >= 0.5] = 1.0
                 mask = np.clip(mask, 0, 1)

+                if np.random.randint(2) == 0:
+                    # random face flare
+                    krn = np.random.randint( resolution//4, resolution )
+                    krn = krn - krn % 2 + 1
+                    img = img + cv2.GaussianBlur(img*mask, (krn,krn), 0)
+
+                if np.random.randint(2) == 0:
+                    # random bg flare
+                    krn = np.random.randint( resolution//4, resolution )
+                    krn = krn - krn % 2 + 1
+                    img = img + cv2.GaussianBlur(img*(1-mask), (krn,krn), 0)
+
                 if np.random.randint(2) == 0:
                     img = imagelib.apply_random_hsv_shift(img, mask=sd.random_circle_faded ([resolution,resolution]))
                 else:
                     img = imagelib.apply_random_rgb_levels(img, mask=sd.random_circle_faded ([resolution,resolution]))

-                img = imagelib.apply_random_motion_blur( img, motion_blur_chance, motion_blur_mb_max_size, mask=sd.random_circle_faded ([resolution,resolution]))
-                img = imagelib.apply_random_gaussian_blur( img, gaussian_blur_chance, gaussian_blur_kernel_max_size, mask=sd.random_circle_faded ([resolution,resolution]))
-                img = imagelib.apply_random_bilinear_resize( img, random_bilinear_resize_chance, random_bilinear_resize_max_size_per, mask=sd.random_circle_faded ([resolution,resolution]))
+                if np.random.randint(2) == 0:
+                    img = imagelib.apply_random_sharpen( img, sharpen_chance, sharpen_kernel_max_size, mask=sd.random_circle_faded ([resolution,resolution]))
+                else:
+                    img = imagelib.apply_random_motion_blur( img, motion_blur_chance, motion_blur_mb_max_size, mask=sd.random_circle_faded ([resolution,resolution]))
+                    img = imagelib.apply_random_gaussian_blur( img, gaussian_blur_chance, gaussian_blur_kernel_max_size, mask=sd.random_circle_faded ([resolution,resolution]))
+
+                if np.random.randint(2) == 0:
+                    img = imagelib.apply_random_nearest_resize( img, random_bilinear_resize_chance, random_bilinear_resize_max_size_per, mask=sd.random_circle_faded ([resolution,resolution]))
+                else:
+                    img = imagelib.apply_random_bilinear_resize( img, random_bilinear_resize_chance, random_bilinear_resize_max_size_per, mask=sd.random_circle_faded ([resolution,resolution]))
+
+                img = np.clip(img, 0, 1)
+                img = imagelib.apply_random_jpeg_compress( img, random_jpeg_compress_chance, mask=sd.random_circle_faded ([resolution,resolution]))

                 if data_format == "NCHW":
                     img = np.transpose(img, (2,0,1) )
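Reviewer note on the recurring `mask=sd.random_circle_faded(...)` argument above: each degradation is confined to a soft, randomly placed circular region, so the network sees artifacts that fade in and out spatially rather than covering the whole crop. A hypothetical helper illustrating the blend these `apply_random_*` helpers are assumed to perform internally (not the library's actual code):

```python
def blend_through_faded_mask(img, degraded, circle_mask):
    # circle_mask: float map in [0, 1], 1.0 inside the random circle,
    # falling off smoothly to 0.0 outside (shape (res, res, 1))
    return degraded * circle_mask + img * (1.0 - circle_mask)
```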
@@ -222,3 +251,47 @@ class SegmentedSampleFilterSubprocessor(Subprocessor):
             return idx, self.samples[idx].has_xseg_mask()
         else:
             return idx, self.samples[idx].seg_ie_polys.get_pts_count() != 0

+"""
+bg_path = None
+for path in paths:
+    bg_path = Path(path) / 'backgrounds'
+    if bg_path.exists():
+        break
+
+if bg_path is None:
+    io.log_info(f'Random backgrounds will not be used. Place no face jpg images to aligned\backgrounds folder. ')
+    bg_pathes = None
+else:
+    bg_pathes = pathex.get_image_paths(bg_path, image_extensions=['.jpg'], return_Path_class=True)
+    io.log_info(f'Using {len(bg_pathes)} random backgrounds from {bg_path}')
+
+if bg_pathes is not None:
+    bg_path = bg_pathes[ np.random.randint(len(bg_pathes)) ]
+
+    bg_img = cv2_imread(bg_path)
+    if bg_img is not None:
+        bg_img = bg_img.astype(np.float32) / 255.0
+        bg_img = imagelib.normalize_channels(bg_img, 3)
+
+        bg_img = imagelib.random_crop(bg_img, resolution, resolution)
+        bg_img = cv2.resize(bg_img, (resolution, resolution), interpolation=cv2.INTER_LINEAR)
+
+        if np.random.randint(2) == 0:
+            bg_img = imagelib.apply_random_hsv_shift(bg_img)
+        else:
+            bg_img = imagelib.apply_random_rgb_levels(bg_img)
+
+        bg_wp = imagelib.gen_warp_params(resolution, True, rotation_range=[-180,180], scale_range=[0,0], tx_range=[0,0], ty_range=[0,0])
+        bg_img = imagelib.warp_by_params (bg_wp, bg_img, can_warp=False, can_transform=True, can_flip=True, border_replicate=True)
+
+        bg = img*(1-mask)
+        fg = img*mask
+
+        c_mask = sd.random_circle_faded ([resolution,resolution])
+        bg = ( bg_img*c_mask + bg*(1-c_mask) )*(1-mask)
+        img = fg+bg
+
+else:
+"""

View file

@ -23,7 +23,7 @@ class SampleLoader:
try: try:
samples = samplelib.PackedFaceset.load(samples_path) samples = samplelib.PackedFaceset.load(samples_path)
except: except:
io.log_err(f"Error occured while loading samplelib.PackedFaceset.load {str(samples_dat_path)}, {traceback.format_exc()}") io.log_err(f"Error occured while loading samplelib.PackedFaceset.load {str(samples_path)}, {traceback.format_exc()}")
if samples is None: if samples is None:
raise ValueError("packed faceset not found.") raise ValueError("packed faceset not found.")

View file

@@ -7,7 +7,8 @@ import numpy as np

 from core import imagelib
 from core.cv2ex import *
-from core.imagelib import sd
+from core.imagelib import sd, LinearMotionBlur
+from core.imagelib.color_transfer import random_lab_rotation
 from facelib import FaceType, LandmarksProcessor
@@ -26,6 +27,8 @@ class SampleProcessor(object):
         BGR = 1 #BGR
         G = 2 #Grayscale
         GGG = 3 #3xGrayscale
+        LAB_RAND_TRANSFORM = 4 # LAB random transform

     class FaceMaskType(IntEnum):
         NONE = 0
@@ -109,6 +112,10 @@ class SampleProcessor(object):
                 nearest_resize_to = opts.get('nearest_resize_to', None)
                 warp = opts.get('warp', False)
                 transform = opts.get('transform', False)
+                random_downsample = opts.get('random_downsample', False)
+                random_noise = opts.get('random_noise', False)
+                random_blur = opts.get('random_blur', False)
+                random_jpeg = opts.get('random_jpeg', False)
                 motion_blur = opts.get('motion_blur', None)
                 gaussian_blur = opts.get('gaussian_blur', None)
                 random_bilinear_resize = opts.get('random_bilinear_resize', None)
@ -211,6 +218,59 @@ class SampleProcessor(object):
img = imagelib.color_transfer (ct_mode, img, cv2.resize( ct_sample_bgr, (resolution,resolution), interpolation=cv2.INTER_LINEAR ) ) img = imagelib.color_transfer (ct_mode, img, cv2.resize( ct_sample_bgr, (resolution,resolution), interpolation=cv2.INTER_LINEAR ) )
randomization_order = ['blur', 'noise', 'jpeg', 'down']
np.random.shuffle(randomization_order)
for random_distortion in randomization_order:
# Apply random blur
if random_distortion == 'blur' and random_blur:
blur_type = np.random.choice(['motion', 'gaussian'])
if blur_type == 'motion':
blur_k = np.random.randint(10, 20)
blur_angle = 360 * np.random.random()
img = LinearMotionBlur(img, blur_k, blur_angle)
elif blur_type == 'gaussian':
blur_sigma = 5 * np.random.random() + 3
if blur_sigma < 5.0:
kernel_size = 2.9 * blur_sigma # 97% of weight
else:
kernel_size = 2.6 * blur_sigma # 95% of weight
kernel_size = int(kernel_size)
kernel_size = kernel_size + 1 if kernel_size % 2 == 0 else kernel_size
img = cv2.GaussianBlur(img, (kernel_size, kernel_size), blur_sigma)
# Apply random noise
if random_distortion == 'noise' and random_noise:
noise_type = np.random.choice(['gaussian', 'laplace', 'poisson'])
noise_scale = (20 * np.random.random() + 20)
if noise_type == 'gaussian':
noise = np.random.normal(scale=noise_scale, size=img.shape)
img += noise / 255.0
elif noise_type == 'laplace':
noise = np.random.laplace(scale=noise_scale, size=img.shape)
img += noise / 255.0
elif noise_type == 'poisson':
noise_lam = (15 * np.random.random() + 15)
noise = np.random.poisson(lam=noise_lam, size=img.shape)
img += noise / 255.0
# Apply random jpeg compression
if random_distortion == 'jpeg' and random_jpeg:
img = np.clip(img*255, 0, 255).astype(np.uint8)
jpeg_compression_level = np.random.randint(50, 85)
encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_compression_level]
_, enc_img = cv2.imencode('.jpg', img, encode_param)
img = cv2.imdecode(enc_img, cv2.IMREAD_UNCHANGED).astype(np.float32) / 255.0
# Apply random downsampling
if random_distortion == 'down' and random_downsample:
down_res = np.random.randint(int(0.125*resolution), int(0.25*resolution))
img = cv2.resize(img, (down_res, down_res), interpolation=cv2.INTER_CUBIC)
img = cv2.resize(img, (resolution, resolution), interpolation=cv2.INTER_CUBIC)
img = imagelib.warp_by_params (params_per_resolution[resolution], img, warp, transform, can_flip=True, border_replicate=border_replicate) img = imagelib.warp_by_params (params_per_resolution[resolution], img, warp, transform, can_flip=True, border_replicate=border_replicate)
img = np.clip(img.astype(np.float32), 0, 1) img = np.clip(img.astype(np.float32), 0, 1)
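Reviewer note on the `2.9 * blur_sigma` / `2.6 * blur_sigma` kernel sizing above: it trades kernel size against how much of the Gaussian's weight the truncated kernel retains, but the retained fraction depends on how the truncation is measured. A quick verification sketch under a simple 1-D model of my own (assuming `kernel_size` is the full width of a centered kernel, so it spans half that width to either side):

```python
import math

def retained_mass(width_in_sigmas: float) -> float:
    """Fraction of a 1-D Gaussian's weight inside a centered kernel whose
    full width is width_in_sigmas * sigma (half-width = width / 2)."""
    return math.erf((width_in_sigmas / 2.0) / math.sqrt(2.0))

for w in (2.9, 2.6):
    print(f"full width {w}*sigma retains ~{retained_mass(w):.1%} of the weight")
```

Under this model a full width of 2.9 sigma retains roughly 85% of the weight, so the 97%/95% figures in the inline comments appear to follow a different convention; treat them as rough guidance rather than exact coverage.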
@@ -231,6 +291,8 @@ class SampleProcessor(object):
                     # Transform from BGR to desired channel_type
                     if channel_type == SPCT.BGR:
                         out_sample = img
+                    elif channel_type == SPCT.LAB_RAND_TRANSFORM:
+                        out_sample = random_lab_rotation(img, sample_rnd_seed)
                     elif channel_type == SPCT.G:
                         out_sample = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)[...,None]
                     elif channel_type == SPCT.GGG:
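Reviewer note: the new degradations are switched on through the per-sample options dict read near the top of the file (`opts.get(...)`). A hypothetical options dict showing how a caller might enable them; the keys are the ones read in the hunks above, the values and surrounding pipeline are illustrative only:

```python
# Illustrative only: keys match the opts.get(...) reads above,
# the rest of the training pipeline is assumed.
sample_opts = {
    'warp': True,
    'transform': True,
    'random_blur': True,        # random motion or gaussian blur
    'random_noise': True,       # gaussian / laplace / poisson noise
    'random_jpeg': True,        # jpeg round-trip at quality 50-84
    'random_downsample': True,  # shrink to 12.5-25% of resolution and back
}
```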