Merge remote-tracking branch 'eng-ita-fork/master'

seranus 2021-11-21 18:07:52 +01:00
commit 7a08022148
42 changed files with 1767 additions and 422 deletions

.github/FUNDING.yml (vendored, new file, +12 lines)

@ -0,0 +1,12 @@
# These are supported funding model platforms
github: jmhummel
patreon: faceshiftlabs
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

.gitignore (vendored, +3 lines)

@ -6,3 +6,6 @@
!requirements* !requirements*
!Dockerfile* !Dockerfile*
!*.sh !*.sh
convert.py
randomColor.py
train.py

CHANGELOG.md (new file, +154 lines)

@ -0,0 +1,154 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.8.0] - 2021-06-20
### Added
- Morph factor option
- Migrated options from SAEHD to AMP:
- Loss function
- Random downsample
- Random noise
- Random blur
- Random jpeg
- Background Power
- CT mode: fs-aug
- Random color
## [1.7.3] - 2021-06-16
### Fixed
- AMP mask type
## [1.7.2] - 2021-06-15
### Added
- New sample degradation options (only affects input, similar to random warp):
- Random noise (gaussian/laplace/poisson)
- Random blur (gaussian/motion)
- Random jpeg compression
- Random downsampling
- New "warped" preview(s): Shows the input samples with any/all distortions.
## [1.7.1] - 2021-06-15
### Added
- New autobackup options:
- Session name
- ISO Timestamps (instead of numbered)
- Max number of backups to keep (use "0" for unlimited)
## [1.7.0] - 2021-06-15
### Updated
- Merged in latest changes from upstream, including new AMP model
## [1.6.2] - 2021-05-08
### Fixed
- Fixed bug with GAN smoothing/noisy labels with certain versions of Tensorflow
## [1.6.1] - 2021-05-04
### Fixed
- Fixed bug when `fs-aug` used on model with same resolution as dataset
## [1.6.0] - 2021-05-04
### Added
- New loss function "MS-SSIM+L1", based on ["Loss Functions for Image Restoration with Neural Networks"](https://research.nvidia.com/publication/loss-functions-image-restoration-neural-networks)
## [1.5.1] - 2021-04-23
### Fixed
- Fixes bug with MS-SSIM when using a version of tensorflow < 1.14
## [1.5.0] - 2021-03-29
### Changed
- Web UI previews now show the preview pane as PNG (lossless) instead of JPG (lossy), so the output matches what is
seen on desktop, without any JPG compression artifacts. The side effect is that preview images load more slowly over
the web, as they are now larger; a future update may add an option to view them as JPG instead.
## [1.4.2] - 2021-03-26
### Fixed
- Fixes bug in background power with MS-SSIM, that misattributed loss from dst to src
## [1.4.1] - 2021-03-25
### Fixed
- When both Background Power and MS-SSIM were enabled, the src and dst losses were being overwritten with the
"background power" losses. Fixed so the "background power" losses are properly added to the total losses.
- *Note: since all the other losses were being skipped when MS-SSIM and background power were both enabled, this had
the side-effect of lowering the memory requirements (and raising the max batch size). With this fix, you may
experience an OOM error on models run with both of these features enabled. I may revisit this in a future feature,
allowing you to manually disable certain loss calculations for similar performance benefits.*
## [1.4.0] - 2021-03-24
### Added
- [MS-SSIM loss training option](doc/features/ms-ssim)
- GAN version option (v2 - late 2020 or v3 - current GAN)
- [GAN label smoothing and label noise options](doc/features/gan-options)
### Fixed
- Background Power now uses the entire image, not just the area outside of the mask for comparison.
This should help with rough areas directly next to the mask
## [1.3.0] - 2021-03-20
### Added
- [Background Power training option](doc/features/background-power/README.md)
## [1.2.1] - 2021-03-20
### Fixed
- Fixes bug with `fs-aug` color mode.
## [1.2.0] - 2021-03-17
### Added
- [Random color training option](doc/features/random-color/README.md)
## [1.1.5] - 2021-03-16
### Fixed
- Fixed unclosed websocket in Web UI client when exiting
## [1.1.4] - 2021-03-16
### Fixed
- Fixed bug when exiting from Web UI
## [1.1.3] - 2021-03-16
### Changed
- Updated changelog with unreleased features, links to working branches
## [1.1.2] - 2021-03-12
### Fixed
- [Fixed missing predicted src mask in 'SAEHD masked' preview](doc/fixes/predicted_src_mask/README.md)
## [1.1.1] - 2021-03-12
### Added
- CHANGELOG file for tracking updates, new features, and bug fixes
- Documentation for Web UI
- Link to CHANGELOG at top of README
## [1.1.0] - 2021-03-11
### Added
- [Web UI for training preview](doc/features/webui/README.md)
## [1.0.0] - 2021-03-09
### Initialized
- Reset stale master branch to [seranus/DeepFaceLab](https://github.com/seranus/DeepFaceLab),
21 commits ahead of [iperov/DeepFaceLab](https://github.com/iperov/DeepFaceLab) ([compare](https://github.com/iperov/DeepFaceLab/compare/4818183...seranus:3f5ae05))
[1.8.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.7.3...v1.8.0
[1.7.3]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.7.2...v1.7.3
[1.7.2]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.7.1...v1.7.2
[1.7.1]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.7.0...v1.7.1
[1.7.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.6.2...v1.7.0
[1.6.2]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.6.1...v1.6.2
[1.6.1]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.6.0...v1.6.1
[1.6.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.5.1...v1.6.0
[1.5.1]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.5.0...v1.5.1
[1.5.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.4.2...v1.5.0
[1.4.2]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.4.1...v1.4.2
[1.4.1]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.4.0...v1.4.1
[1.4.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.3.0...v1.4.0
[1.3.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.2.1...v1.3.0
[1.2.1]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.2.0...v1.2.1
[1.2.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.1.5...v1.2.0
[1.1.5]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.1.4...v1.1.5
[1.1.4]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.1.3...v1.1.4
[1.1.3]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.1.2...v1.1.3
[1.1.2]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.1.1...v1.1.2
[1.1.1]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.1.0...v1.1.1
[1.1.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.0.0...v1.1.0
[1.0.0]: https://github.com/faceshiftlabs/DeepFaceLab/releases/tag/v1.0.0

README.md (130 lines changed)

@ -1,4 +1,4 @@
<table align="center" border="0"> <table align="center" border="0">
<tr><td colspan=2 align="center"> <tr><td colspan=2 align="center">
@ -53,132 +53,12 @@ DeepFaceLab is used by such popular youtube channels as
</td></tr> </td></tr>
<tr><td colspan=2 align="center"> <tr><td colspan=2 align="center">
# What can I do using DeepFaceLab?
</td></tr>
<tr><td colspan=2 align="center"> <tr><td colspan=2 align="center">
## Replace the face # CHANGELOG
### [View most recent changes](CHANGELOG.md)
<img src="doc/replace_the_face.jpg" align="center">
</td></tr>
<tr><td colspan=2 align="center"> <tr><td colspan=2 align="center">
## De-age the face
</td></tr>
<tr><td align="center" width="50%">
<img src="doc/deage_0_1.jpg" align="center">
</td>
<td align="center" width="50%">
<img src="doc/deage_0_2.jpg" align="center">
</td></tr>
<tr><td colspan=2 align="center">
![](doc/youtube_icon.png) https://www.youtube.com/watch?v=Ddx5B-84ebo
</td></tr>
<tr><td colspan=2 align="center">
## Replace the head
</td></tr>
<tr><td align="center" width="50%">
<img src="doc/head_replace_0_1.jpg" align="center">
</td>
<td align="center" width="50%">
<img src="doc/head_replace_0_2.jpg" align="center">
</td></tr>
<tr><td colspan=2 align="center">
![](doc/youtube_icon.png) https://www.youtube.com/watch?v=xr5FHd0AdlQ
</td></tr>
<tr><td align="center" width="50%">
<img src="doc/head_replace_1_1.jpg" align="center">
</td>
<td align="center" width="50%">
<img src="doc/head_replace_1_2.jpg" align="center">
</td></tr>
<tr><td colspan=2 align="center">
![](doc/youtube_icon.png) https://www.youtube.com/watch?v=RTjgkhMugVw
</td></tr>
<tr><td align="center" width="50%">
<img src="doc/head_replace_2_1.jpg" align="center">
</td>
<td align="center" width="50%">
<img src="doc/head_replace_2_2.jpg" align="center">
</td></tr>
<tr><td colspan=2 align="center">
![](doc/youtube_icon.png) https://www.youtube.com/watch?v=R9f7WD0gKPo
</td></tr>
<tr><td colspan=2 align="center">
## Manipulate politicians lips
(voice replacement is not included!)
(also requires a skill in video editors such as *Adobe After Effects* or *Davinci Resolve*)
<img src="doc/political_speech2.jpg" align="center">
![](doc/youtube_icon.png) https://www.youtube.com/watch?v=IvY-Abd2FfM
<img src="doc/political_speech3.jpg" align="center">
![](doc/youtube_icon.png) https://www.youtube.com/watch?v=ERQlaJ_czHU
</td></tr>
<tr><td colspan=2 align="center">
# Deepfake native resolution progress
</td></tr>
<tr><td colspan=2 align="center">
<img src="doc/deepfake_progress.png" align="center">
</td></tr>
<tr><td colspan=2 align="center">
<img src="doc/make_everything_ok.png" align="center">
Unfortunately, there is no "make everything ok" button in DeepFaceLab. You should spend time studying the workflow and growing your skills. A skill in programs such as *AfterEffects* or *Davinci Resolve* is also desirable.
</td></tr>
<tr><td colspan=2 align="center"> <tr><td colspan=2 align="center">
## Mini tutorial ## Mini tutorial
@ -205,8 +85,8 @@ Unfortunately, there is no "make everything ok" button in DeepFaceLab. You shoul
</td><td align="center">Contains new and prev releases.</td></tr> </td><td align="center">Contains new and prev releases.</td></tr>
<tr><td align="right"> <tr><td align="right">
<a href="https://github.com/chervonij/DFL-Colab">Google Colab (github)</a> <a href="https://github.com/Cioscos/DeepFaceLab-Colab">Google Colab (github)</a>
</td><td align="center">by @chervonij . You can train fakes for free using Google Colab.</td></tr> </td><td align="center">Personal fork from @chervonij repository. You can train fakes for free using Google Colab.</td></tr>
<tr><td align="right"> <tr><td align="right">
<a href="https://github.com/nagadit/DeepFaceLab_Linux">Linux (github)</a> <a href="https://github.com/nagadit/DeepFaceLab_Linux">Linux (github)</a>

View file

@ -85,6 +85,16 @@ class QStringDB():
'zh' : '保存并转到下一张图片\n按住SHIFT : 加快\n按住CTRL : 跳过未标记的\n', 'zh' : '保存并转到下一张图片\n按住SHIFT : 加快\n按住CTRL : 跳过未标记的\n',
}[lang] }[lang]
QStringDB.spinner_label = { 'en' : 'Step size',
'ru' : 'Размер шага',
'zh' : '台阶大小'
}[lang]
QStringDB.spinner_label_tip = { 'en' : 'Minimum 5\nMaximum 500',
'ru' : 'Минимум 5\nМаксимум 500',
'zh' : '最少5个\n最多500'
}[lang]
QStringDB.btn_delete_image_tip = { 'en' : 'Move to _trash and Next image\n', QStringDB.btn_delete_image_tip = { 'en' : 'Move to _trash and Next image\n',
'ru' : 'Переместить в _trash и следующее изображение\n', 'ru' : 'Переместить в _trash и следующее изображение\n',
'zh' : '移至_trash转到下一张图片 ', 'zh' : '移至_trash转到下一张图片 ',

View file

@ -1173,6 +1173,8 @@ class MainWindow(QXMainWindow):
self.cached_images = {} self.cached_images = {}
self.cached_has_ie_polys = {} self.cached_has_ie_polys = {}
self.spin_box = QSpinBox()
self.initialize_ui() self.initialize_ui()
# Loader # Loader
@ -1297,7 +1299,7 @@ class MainWindow(QXMainWindow):
def process_prev_image(self): def process_prev_image(self):
key_mods = QApplication.keyboardModifiers() key_mods = QApplication.keyboardModifiers()
step = 5 if key_mods == Qt.ShiftModifier else 1 step = self.spin_box.value() if key_mods == Qt.ShiftModifier else 1
only_has_polys = key_mods == Qt.ControlModifier only_has_polys = key_mods == Qt.ControlModifier
if self.canvas.op.is_initialized(): if self.canvas.op.is_initialized():
@ -1323,7 +1325,7 @@ class MainWindow(QXMainWindow):
def process_next_image(self, first_initialization=False): def process_next_image(self, first_initialization=False):
key_mods = QApplication.keyboardModifiers() key_mods = QApplication.keyboardModifiers()
step = 0 if first_initialization else 5 if key_mods == Qt.ShiftModifier else 1 step = 0 if first_initialization else self.spin_box.value() if key_mods == Qt.ShiftModifier else 1
only_has_polys = False if first_initialization else key_mods == Qt.ControlModifier only_has_polys = False if first_initialization else key_mods == Qt.ControlModifier
if self.canvas.op.is_initialized(): if self.canvas.op.is_initialized():
@ -1374,6 +1376,13 @@ class MainWindow(QXMainWindow):
pad_image = QWidget() pad_image = QWidget()
pad_image.setFixedSize(QUIConfig.preview_bar_icon_q_size) pad_image.setFixedSize(QUIConfig.preview_bar_icon_q_size)
self.spin_box.setFocusPolicy(Qt.ClickFocus)
self.spin_box.setRange(5, 500)
self.spin_box.setSingleStep(1)
self.spin_box.installEventFilter(self)
self.spin_box.valueChanged.connect(self.on_spinbox_value_changed)
self.spin_box.setToolTip(QStringDB.spinner_label_tip)
preview_image_bar_frame_l = QHBoxLayout() preview_image_bar_frame_l = QHBoxLayout()
preview_image_bar_frame_l.setContentsMargins(0,0,0,0) preview_image_bar_frame_l.setContentsMargins(0,0,0,0)
preview_image_bar_frame_l.addWidget ( pad_image, alignment=Qt.AlignCenter) preview_image_bar_frame_l.addWidget ( pad_image, alignment=Qt.AlignCenter)
@ -1404,14 +1413,25 @@ class MainWindow(QXMainWindow):
preview_image_bar.setLayout(preview_image_bar_l) preview_image_bar.setLayout(preview_image_bar_l)
label_font = QFont('Courier New') label_font = QFont('Courier New')
self.filename_label = QLabel() self.filename_label = QLabel()
self.filename_label.setFont(label_font) self.filename_label.setFont(label_font)
self.has_ie_polys_count_label = QLabel() self.has_ie_polys_count_label = QLabel()
status_frame_1_2 = QHBoxLayout()
status_frame_1_2.setContentsMargins(0,0,0,0)
step_string_label = QLabel()
step_string_label.setFont(label_font)
step_string_label.setText(QStringDB.spinner_label)
status_frame_1_2.addWidget (step_string_label, alignment=Qt.AlignRight)
status_frame_1_2.addWidget (self.spin_box, alignment=Qt.AlignLeft)
status_frame_l = QHBoxLayout() status_frame_l = QHBoxLayout()
status_frame_l.setContentsMargins(0,0,0,0) status_frame_l.setContentsMargins(0,0,0,0)
status_frame_l.addWidget ( QLabel(), alignment=Qt.AlignCenter) status_frame_l.addLayout (status_frame_1_2)
status_frame_l.addWidget (self.filename_label, alignment=Qt.AlignCenter) status_frame_l.addWidget (self.filename_label, alignment=Qt.AlignCenter)
status_frame_l.addWidget (self.has_ie_polys_count_label, alignment=Qt.AlignCenter) status_frame_l.addWidget (self.has_ie_polys_count_label, alignment=Qt.AlignCenter)
status_frame = QFrame() status_frame = QFrame()
@ -1438,6 +1458,21 @@ class MainWindow(QXMainWindow):
else: else:
self.move( QPoint(0,0)) self.move( QPoint(0,0))
def eventFilter(self, obj, event):
if event.type() == QEvent.KeyPress and obj is self.spin_box:
if event.key() == Qt.Key_Return or event.key() == Qt.Key_Enter and self.spin_box.hasFocus():
self.spin_box.clearFocus()
if event.type() == QEvent.MouseButtonPress and obj is self.spin_box:
if event.button() == Qt.LeftButton and self.spin_box.hasFocus():
self.spin_box.clearFocus()
return super().eventFilter(obj, event)
def on_spinbox_value_changed(self, value):
if value == self.spin_box.maximum() or value == self.spin_box.minimum():
self.spin_box.clearFocus()
def get_has_ie_polys_count(self): def get_has_ie_polys_count(self):
return self.has_ie_polys_count return self.has_ie_polys_count

View file

@ -12,7 +12,7 @@ from .warp import gen_warp_params, warp_by_params
from .reduce_colors import reduce_colors from .reduce_colors import reduce_colors
from .color_transfer import color_transfer, color_transfer_mix, color_transfer_sot, color_transfer_mkl, color_transfer_idt, color_hist_match, reinhard_color_transfer, linear_color_transfer from .color_transfer import color_transfer, color_transfer_mix, color_transfer_sot, color_transfer_mkl, color_transfer_idt, color_hist_match, reinhard_color_transfer, linear_color_transfer, color_augmentation
from .common import random_crop, normalize_channels, cut_odd_image, overlay_alpha_image from .common import random_crop, normalize_channels, cut_odd_image, overlay_alpha_image

View file

@ -1,6 +1,9 @@
import cv2 import cv2
import numexpr as ne import numexpr as ne
import numpy as np import numpy as np
from numpy import linalg as npla
import random
from scipy.stats import special_ortho_group
import scipy as sp import scipy as sp
from numpy import linalg as npla from numpy import linalg as npla
@ -9,14 +12,12 @@ def color_transfer_sot(src,trg, steps=10, batch_size=5, reg_sigmaXY=16.0, reg_si
""" """
Color Transform via Sliced Optimal Transfer Color Transform via Sliced Optimal Transfer
ported by @iperov from https://github.com/dcoeurjo/OTColorTransfer ported by @iperov from https://github.com/dcoeurjo/OTColorTransfer
src - any float range any channel image src - any float range any channel image
dst - any float range any channel image, same shape as src dst - any float range any channel image, same shape as src
steps - number of solver steps steps - number of solver steps
batch_size - solver batch size batch_size - solver batch size
reg_sigmaXY - apply regularization and sigmaXY of filter, otherwise set to 0.0 reg_sigmaXY - apply regularization and sigmaXY of filter, otherwise set to 0.0
reg_sigmaV - sigmaV of filter reg_sigmaV - sigmaV of filter
return value - clip it manually return value - clip it manually
""" """
if not np.issubdtype(src.dtype, np.floating): if not np.issubdtype(src.dtype, np.floating):
@ -334,3 +335,72 @@ def color_transfer(ct_mode, img_src, img_trg):
else: else:
raise ValueError(f"unknown ct_mode {ct_mode}") raise ValueError(f"unknown ct_mode {ct_mode}")
return out return out
# imported from faceswap
def color_augmentation(img, seed=None):
""" Color adjust RGB image """
img = img.astype(np.float32)
face = img
face = np.clip(face*255.0, 0, 255).astype(np.uint8)
face = random_clahe(face, seed)
face = random_lab(face, seed)
img[:, :, :3] = face
return (face / 255.0).astype(np.float32)
def random_lab_rotation(image, seed=None):
"""
Randomly rotates image color around the L axis in LAB colorspace,
keeping perceptual lightness constant.
"""
image = cv2.cvtColor(image.astype(np.float32), cv2.COLOR_BGR2LAB)
M = np.eye(3)
M[1:, 1:] = special_ortho_group.rvs(2, 1, seed)
image = image.dot(M)
l, a, b = cv2.split(image)
l = np.clip(l, 0, 100)
a = np.clip(a, -127, 127)
b = np.clip(b, -127, 127)
image = cv2.merge([l, a, b])
image = cv2.cvtColor(image.astype(np.float32), cv2.COLOR_LAB2BGR)
np.clip(image, 0, 1, out=image)
return image
def random_lab(image, seed=None):
""" Perform random color/lightness adjustment in L*a*b* colorspace """
random.seed(seed)
amount_l = 30 / 100
amount_ab = 8 / 100
randoms = [(random.random() * amount_l * 2) - amount_l, # L adjust
(random.random() * amount_ab * 2) - amount_ab, # A adjust
(random.random() * amount_ab * 2) - amount_ab] # B adjust
image = cv2.cvtColor( # pylint:disable=no-member
image, cv2.COLOR_BGR2LAB).astype("float32") / 255.0 # pylint:disable=no-member
for idx, adjustment in enumerate(randoms):
if adjustment >= 0:
image[:, :, idx] = ((1 - image[:, :, idx]) * adjustment) + image[:, :, idx]
else:
image[:, :, idx] = image[:, :, idx] * (1 + adjustment)
image = cv2.cvtColor((image * 255.0).astype("uint8"), # pylint:disable=no-member
cv2.COLOR_LAB2BGR) # pylint:disable=no-member
return image
def random_clahe(image, seed=None):
""" Randomly perform Contrast Limited Adaptive Histogram Equalization """
random.seed(seed)
contrast_random = random.random()
if contrast_random > 50 / 100:
return image
# base_contrast = image.shape[0] // 128
base_contrast = 1 # testing because it breaks on small sizes
grid_base = random.random() * 4
contrast_adjustment = int(grid_base * (base_contrast / 2))
grid_size = base_contrast + contrast_adjustment
clahe = cv2.createCLAHE(clipLimit=2.0, # pylint: disable=no-member
tileGridSize=(grid_size, grid_size))
for chan in range(3):
image[:, :, chan] = clahe.apply(image[:, :, chan])
return image

View file

@ -0,0 +1,50 @@
from core.leras import nn
tf = nn.tf
class MsSsim(nn.LayerBase):
default_power_factors = (0.0448, 0.2856, 0.3001, 0.2363, 0.1333)
default_l1_alpha = 0.84
def __init__(self, batch_size, in_ch, resolution, kernel_size=11, use_l1=False, **kwargs):
# restrict mssim factors to those greater/equal to kernel size
power_factors = [p for i, p in enumerate(self.default_power_factors) if resolution//(2**i) >= kernel_size]
# normalize power factors if reduced because of size
if sum(power_factors) < 1.0:
power_factors = [x/sum(power_factors) for x in power_factors]
self.power_factors = power_factors
self.num_scale = len(power_factors)
self.kernel_size = kernel_size
self.use_l1 = use_l1
if use_l1:
self.gaussian_weights = nn.get_gaussian_weights(batch_size, in_ch, resolution, num_scale=self.num_scale)
super().__init__(**kwargs)
def __call__(self, y_true, y_pred, max_val):
# Transpose images from NCHW to NHWC
y_true_t = tf.transpose(tf.cast(y_true, tf.float32), [0, 2, 3, 1])
y_pred_t = tf.transpose(tf.cast(y_pred, tf.float32), [0, 2, 3, 1])
# ssim_multiscale returns values in range [0, 1] (where 1 is completely identical)
# subtract from 1 to get loss
if tf.__version__ >= "1.14":
ms_ssim_loss = 1.0 - tf.image.ssim_multiscale(y_true_t, y_pred_t, max_val, power_factors=self.power_factors, filter_size=self.kernel_size)
else:
ms_ssim_loss = 1.0 - tf.image.ssim_multiscale(y_true_t, y_pred_t, max_val, power_factors=self.power_factors)
# If use L1 is enabled, use mix of ms-ssim and L1 (weighted by gaussian filters)
# H. Zhao, O. Gallo, I. Frosio and J. Kautz, "Loss Functions for Image Restoration With Neural Networks,"
# in IEEE Transactions on Computational Imaging, vol. 3, no. 1, pp. 47-57, March 2017,
# doi: 10.1109/TCI.2016.2644865.
# https://research.nvidia.com/publication/loss-functions-image-restoration-neural-networks
if self.use_l1:
diff = tf.tile(tf.expand_dims(tf.abs(y_true - y_pred), axis=0), multiples=[self.num_scale, 1, 1, 1, 1])
l1_loss = tf.reduce_mean(tf.reduce_sum(self.gaussian_weights[-1, :, :, :, :] * diff, axis=[0, 3, 4]), axis=[1])
return self.default_l1_alpha * ms_ssim_loss + (1 - self.default_l1_alpha) * l1_loss
return ms_ssim_loss
nn.MsSsim = MsSsim

View file

@ -16,3 +16,5 @@ from .ScaleAdd import *
from .DenseNorm import * from .DenseNorm import *
from .AdaIN import * from .AdaIN import *
from .TanhPolar import * from .TanhPolar import *
from .MsSsim import *
from .TanhPolar import *

View file

@ -133,7 +133,6 @@ class UNetPatchDiscriminator(nn.ModelBase):
def on_build(self, patch_size, in_ch, base_ch = 16, use_fp16 = False): def on_build(self, patch_size, in_ch, base_ch = 16, use_fp16 = False):
self.use_fp16 = use_fp16 self.use_fp16 = use_fp16
conv_dtype = tf.float16 if use_fp16 else tf.float32 conv_dtype = tf.float16 if use_fp16 else tf.float32
class ResidualBlock(nn.ModelBase): class ResidualBlock(nn.ModelBase):
def on_build(self, ch, kernel_size=3 ): def on_build(self, ch, kernel_size=3 ):
self.conv1 = nn.Conv2D( ch, ch, kernel_size=kernel_size, padding='SAME', dtype=conv_dtype) self.conv1 = nn.Conv2D( ch, ch, kernel_size=kernel_size, padding='SAME', dtype=conv_dtype)
@ -158,8 +157,14 @@ class UNetPatchDiscriminator(nn.ModelBase):
for i, (kernel_size, strides) in enumerate(layers): for i, (kernel_size, strides) in enumerate(layers):
self.convs.append ( nn.Conv2D( level_chs[i-1], level_chs[i], kernel_size=kernel_size, strides=strides, padding='SAME', dtype=conv_dtype) ) self.convs.append ( nn.Conv2D( level_chs[i-1], level_chs[i], kernel_size=kernel_size, strides=strides, padding='SAME', dtype=conv_dtype) )
self.res1.append ( ResidualBlock(level_chs[i]) )
self.res2.append ( ResidualBlock(level_chs[i]) )
self.upconvs.insert (0, nn.Conv2DTranspose( level_chs[i]*(2 if i != len(layers)-1 else 1), level_chs[i-1], kernel_size=kernel_size, strides=strides, padding='SAME', dtype=conv_dtype) ) self.upconvs.insert (0, nn.Conv2DTranspose( level_chs[i]*(2 if i != len(layers)-1 else 1), level_chs[i-1], kernel_size=kernel_size, strides=strides, padding='SAME', dtype=conv_dtype) )
self.upres1.insert (0, ResidualBlock(level_chs[i-1]*2) )
self.upres2.insert (0, ResidualBlock(level_chs[i-1]*2) )
self.out_conv = nn.Conv2D( level_chs[-1]*2, 1, kernel_size=1, padding='VALID', dtype=conv_dtype) self.out_conv = nn.Conv2D( level_chs[-1]*2, 1, kernel_size=1, padding='VALID', dtype=conv_dtype)
self.center_out = nn.Conv2D( level_chs[len(layers)-1], 1, kernel_size=1, padding='VALID', dtype=conv_dtype) self.center_out = nn.Conv2D( level_chs[len(layers)-1], 1, kernel_size=1, padding='VALID', dtype=conv_dtype)
@ -169,13 +174,14 @@ class UNetPatchDiscriminator(nn.ModelBase):
def forward(self, x): def forward(self, x):
if self.use_fp16: if self.use_fp16:
x = tf.cast(x, tf.float16) x = tf.cast(x, tf.float16)
x = tf.nn.leaky_relu( self.in_conv(x), 0.2 ) x = tf.nn.leaky_relu( self.in_conv(x), 0.2 )
encs = [] encs = []
for conv in self.convs: for conv in self.convs:
encs.insert(0, x) encs.insert(0, x)
x = tf.nn.leaky_relu( conv(x), 0.2 ) x = tf.nn.leaky_relu( conv(x), 0.2 )
x = res1(x)
x = res2(x)
center_out, x = self.center_out(x), tf.nn.leaky_relu( self.center_conv(x), 0.2 ) center_out, x = self.center_out(x), tf.nn.leaky_relu( self.center_conv(x), 0.2 )
@ -192,3 +198,129 @@ class UNetPatchDiscriminator(nn.ModelBase):
return center_out, x return center_out, x
nn.UNetPatchDiscriminator = UNetPatchDiscriminator nn.UNetPatchDiscriminator = UNetPatchDiscriminator
class UNetPatchDiscriminatorV2(nn.ModelBase):
"""
Inspired by https://arxiv.org/abs/2002.12655 "A U-Net Based Discriminator for Generative Adversarial Networks"
"""
def calc_receptive_field_size(self, layers):
"""
result the same as https://fomoro.com/research/article/receptive-field-calculatorindex.html
"""
rf = 0
ts = 1
for i, (k, s) in enumerate(layers):
if i == 0:
rf = k
else:
rf += (k-1)*ts
ts *= s
return rf
def find_archi(self, target_patch_size, max_layers=6):
"""
Find the best configuration of layers using only 3x3 convs for target patch size
"""
s = {}
for layers_count in range(1,max_layers+1):
val = 1 << (layers_count-1)
while True:
val -= 1
layers = []
sum_st = 0
for i in range(layers_count-1):
st = 1 + (1 if val & (1 << i) !=0 else 0 )
layers.append ( [3, st ])
sum_st += st
layers.append ( [3, 2])
sum_st += 2
rf = self.calc_receptive_field_size(layers)
s_rf = s.get(rf, None)
if s_rf is None:
s[rf] = (layers_count, sum_st, layers)
else:
if layers_count < s_rf[0] or \
( layers_count == s_rf[0] and sum_st > s_rf[1] ):
s[rf] = (layers_count, sum_st, layers)
if val == 0:
break
x = sorted(list(s.keys()))
q=x[np.abs(np.array(x)-target_patch_size).argmin()]
return s[q][2]
def on_build(self, patch_size, in_ch, use_fp16 = False):
self.use_fp16 = use_fp16
conv_dtype = tf.float16 if use_fp16 else tf.float32
class ResidualBlock(nn.ModelBase):
def on_build(self, ch, kernel_size=3 ):
self.conv1 = nn.Conv2D( ch, ch, kernel_size=kernel_size, padding='SAME', dtype=conv_dtype)
self.conv2 = nn.Conv2D( ch, ch, kernel_size=kernel_size, padding='SAME', dtype=conv_dtype)
def forward(self, inp):
x = self.conv1(inp)
x = tf.nn.leaky_relu(x, 0.2)
x = self.conv2(x)
x = tf.nn.leaky_relu(inp + x, 0.2)
return x
prev_ch = in_ch
self.convs = []
self.res = []
self.upconvs = []
self.upres = []
layers = self.find_archi(patch_size)
base_ch = 16
level_chs = { i-1:v for i,v in enumerate([ min( base_ch * (2**i), 512 ) for i in range(len(layers)+1)]) }
self.in_conv = nn.Conv2D( in_ch, level_chs[-1], kernel_size=1, padding='VALID', dtype=conv_dtype)
for i, (kernel_size, strides) in enumerate(layers):
self.convs.append ( nn.Conv2D( level_chs[i-1], level_chs[i], kernel_size=kernel_size, strides=strides, padding='SAME', dtype=conv_dtype) )
self.res.append ( ResidualBlock(level_chs[i]) )
self.upconvs.insert (0, nn.Conv2DTranspose( level_chs[i]*(2 if i != len(layers)-1 else 1), level_chs[i-1], kernel_size=kernel_size, strides=strides, padding='SAME', dtype=conv_dtype) )
self.upres.insert (0, ResidualBlock(level_chs[i-1]*2) )
self.out_conv = nn.Conv2D( level_chs[-1]*2, 1, kernel_size=1, padding='VALID', dtype=conv_dtype)
self.center_out = nn.Conv2D( level_chs[len(layers)-1], 1, kernel_size=1, padding='VALID', dtype=conv_dtype)
self.center_conv = nn.Conv2D( level_chs[len(layers)-1], level_chs[len(layers)-1], kernel_size=1, padding='VALID', dtype=conv_dtype)
def forward(self, x):
if self.use_fp16:
x = tf.cast(x, tf.float16)
x = tf.nn.leaky_relu( self.in_conv(x), 0.1 )
encs = []
for conv, res in zip(self.convs, self.res):
encs.insert(0, x)
x = tf.nn.leaky_relu( conv(x), 0.1 )
x = res(x)
center_out, x = self.center_out(x), self.center_conv(x)
for i, (upconv, enc, upres) in enumerate(zip(self.upconvs, encs, self.upres)):
x = tf.nn.leaky_relu( upconv(x), 0.1 )
x = tf.concat( [enc, x], axis=nn.conv2d_ch_axis)
x = upres(x)
x = self.out_conv(x)
if self.use_fp16:
center_out = tf.cast(center_out, tf.float32)
x = tf.cast(x, tf.float32)
return center_out, x
nn.UNetPatchDiscriminatorV2 = UNetPatchDiscriminatorV2

View file

@ -107,7 +107,7 @@ class nn():
else: else:
nn.tf_default_device_name = f'/{device_config.devices[0].tf_dev_type}:0' nn.tf_default_device_name = f'/{device_config.devices[0].tf_dev_type}:0'
config = tf.ConfigProto() config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.visible_device_list = ','.join([str(device.index) for device in device_config.devices]) config.gpu_options.visible_device_list = ','.join([str(device.index) for device in device_config.devices])
config.gpu_options.force_gpu_compatible = True config.gpu_options.force_gpu_compatible = True

View file

@ -244,6 +244,19 @@ def gaussian_blur(input, radius=2.0):
return x return x
nn.gaussian_blur = gaussian_blur nn.gaussian_blur = gaussian_blur
def get_gaussian_weights(batch_size, in_ch, resolution, num_scale=5, sigma=(0.5, 1., 2., 4., 8.)):
w = np.empty((num_scale, batch_size, in_ch, resolution, resolution))
for i in range(num_scale):
gaussian = np.exp(-1.*np.arange(-(resolution/2-0.5), resolution/2+0.5)**2/(2*sigma[i]**2))
gaussian = np.outer(gaussian, gaussian.reshape((resolution, 1))) # extend to 2D
gaussian = gaussian/np.sum(gaussian) # normalization
gaussian = np.reshape(gaussian, (1, 1, resolution, resolution)) # reshape to 3D
gaussian = np.tile(gaussian, (batch_size, in_ch, 1, 1))
w[i, :, :, :, :] = gaussian
return w
nn.get_gaussian_weights = get_gaussian_weights
def style_loss(target, style, gaussian_blur_radius=0.0, loss_weight=1.0, step_size=1): def style_loss(target, style, gaussian_blur_radius=0.0, loss_weight=1.0, step_size=1):
def sd(content, style, loss_weight): def sd(content, style, loss_weight):
content_nc = content.shape[ nn.conv2d_ch_axis ] content_nc = content.shape[ nn.conv2d_ch_axis ]
@ -475,4 +488,3 @@ def bilinear_sampler(img, x, y):
return out return out
nn.bilinear_sampler = bilinear_sampler nn.bilinear_sampler = bilinear_sampler

doc/dfl_cover.png (new binary file, 326 KiB)

View file

@ -0,0 +1,32 @@
# Background Power option
Allows you to train the model to include the background, which may help with areas around the mask.
Unlike **Background Style Power**, this does not use any additional VRAM, and does not require lowering the batch size.
- [DESCRIPTION](#description)
- [USAGE](#usage)
- [DIFFERENCE WITH BACKGROUND STYLE POWER](#difference-with-background-style-power)
*Examples trained with background power `0.3`:*
![](example.jpeg)![](example2.jpeg)
## DESCRIPTION
Applies the same loss calculation used for the area *inside* the mask to the area *outside* the mask, multiplied by
the chosen background power value.
E.g. (simplified): Source Loss = Masked area image difference + Background Power * Non-masked area image difference
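A minimal sketch of that combination (illustrative only; the names and the squared-error metric here are placeholders, not the repository's actual training code):
```
import numpy as np

def src_loss(pred_src, src, src_mask, background_power=0.3):
    # Same metric applied inside and outside the mask; the outside term is scaled by background power.
    masked_diff   = np.mean(np.square((pred_src - src) * src_mask))
    unmasked_diff = np.mean(np.square((pred_src - src) * (1.0 - src_mask)))
    return masked_diff + background_power * unmasked_diff
```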
## USAGE
`[0.0] Background power ( 0.0..1.0 ?:help ) : 0.3`
## DIFFERENCE WITH BACKGROUND STYLE POWER
**Background Style Power** applies a loss to the source by comparing the background of the dest to that of the
predicted src/dest (5th column). This operation requires additional VRAM, due to the fact that the predicted src/dest
outputs are not normally used in training (other than being viewable in the preview window).
**Background Power** does *not* use the src/dest images whatsoever, instead comparing the background of the predicted
source to that of the original source, and the same for the background of the dest images.

New binary image (129 KiB)

New binary image (121 KiB)

View file

@ -0,0 +1,50 @@
# GAN Options
Allows you to use one-sided label smoothing and noisy labels when training the discriminator.
- [ONE-SIDED LABEL SMOOTHING](#one-sided-label-smoothing)
- [NOISY LABELS](#noisy-labels)
## ONE-SIDED LABEL SMOOTHING
![](tutorial-on-theory-and-application-of-generative-adversarial-networks-54-638.jpg)
> Deep networks may suffer from overconfidence. For example, it uses very few features to classify an object. To
> mitigate the problem, deep learning uses regulation and dropout to avoid overconfidence.
>
> In GAN, if the discriminator depends on a small set of features to detect real images, the generator may just produce
> these features only to exploit the discriminator. The optimization may turn too greedy and produces no long term
> benefit. In GAN, overconfidence hurts badly. To avoid the problem, we penalize the discriminator when the prediction
> for any real images go beyond 0.9 (D(real image)>0.9). This is done by setting our target label value to be 0.9
> instead of 1.0.
- [GAN — Ways to improve GAN performance](https://towardsdatascience.com/gan-ways-to-improve-gan-performance-acf37f9f59b)
By setting the label smoothing value to any value > 0, the target label value used with the discriminator will be:
```
target label value = 1 - (label smoothing value)
```
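As a rough sketch (illustrative names only, not the repository's actual GAN code), the smoothed discriminator targets could be built as:
```
import numpy as np

def discriminator_targets(shape, label_smoothing=0.1):
    # One-sided: only the "real" target is lowered (e.g. 1.0 - 0.1 = 0.9); fake targets stay at 0.
    real_target = np.full(shape, 1.0 - label_smoothing, dtype=np.float32)
    fake_target = np.zeros(shape, dtype=np.float32)
    return real_target, fake_target
```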
### USAGE
```
[0.1] GAN label smoothing ( 0 - 0.5 ?:help ) : 0.1
```
## NOISY LABELS
> make the labels the noisy for the discriminator: occasionally flip the labels when training the discriminator
- [How to Train a GAN? Tips and tricks to make GANs work](https://github.com/soumith/ganhacks/blob/master/README.md#6-use-soft-and-noisy-labels)
By setting the noisy labels value to any value > 0, the target labels used with the discriminator will be flipped
("fake" => "real" / "real" => "fake") with probability p (where p is the noisy label value).
E.g., if the value is 0.05, then ~5% of the labels will be flipped when training the discriminator
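A minimal sketch of that flipping step (illustrative only, assuming target arrays like the ones sketched above):
```
import numpy as np

def apply_noisy_labels(real_target, fake_target, p=0.05, rng=np.random):
    # With probability p, swap the "real" and "fake" targets for a given sample.
    flip = rng.random(real_target.shape) < p
    real_noisy = np.where(flip, fake_target, real_target)
    fake_noisy = np.where(flip, real_target, fake_target)
    return real_noisy, fake_noisy
```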
### USAGE
```
[0.05] GAN noisy labels ( 0 - 0.5 ?:help ) : 0.05
```

New binary image (62 KiB)

View file

@ -0,0 +1,43 @@
# Multiscale SSIM (MS-SSIM)
Allows you to train using the MS-SSIM (multiscale structural similarity index measure) as the main loss metric,
a perceptually more accurate measure of image quality than MSE (mean squared error).
As an added benefit, you may see a decrease in ms/iteration (when using the same batch size) with Multiscale loss
enabled. You may also be able to train with a larger batch size with it enabled.
- [DESCRIPTION](#description)
- [USAGE](#usage)
## DESCRIPTION
[SSIM](https://en.wikipedia.org/wiki/Structural_similarity) is a metric for comparing the perceptual quality of an image:
> SSIM is a perception-based model that considers image degradation as perceived change in structural information,
> while also incorporating important perceptual phenomena, including both luminance masking and contrast masking terms.
> [...]
> Structural information is the idea that the pixels have strong inter-dependencies especially when they are spatially
> close. These dependencies carry important information about the structure of the objects in the visual scene.
> Luminance masking is a phenomenon whereby image distortions (in this context) tend to be less visible in bright
> regions, while contrast masking is a phenomenon whereby distortions become less visible where there is significant
> activity or "texture" in the image.
The current loss metric is a combination of SSIM (structural similarity index measure) and
[MSE](https://en.wikipedia.org/wiki/Mean_squared_error) (mean squared error).
[Multiscale SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Multi-Scale_SSIM) is a variant of SSIM that
improves upon it by comparing similarity at multiple scales (e.g. full-size, half-size, 1/4 size, etc.).
By using MS-SSIM as our main loss metric, we should expect the image similarity to improve across each scale, improving
both the large scale and small scale detail of the predicted images.
Original paper: [Wang, Zhou, Eero P. Simoncelli, and Alan C. Bovik.
"Multiscale structural similarity for image quality assessment."
Signals, Systems and Computers, 2004.](https://www.cns.nyu.edu/pub/eero/wang03b.pdf)
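As a rough sketch, the core of such a loss can be written with TensorFlow's built-in `tf.image.ssim_multiscale` (assuming NHWC images in `[0, 1]`; the `MsSsim` layer added in this commit additionally restricts/normalizes the power factors for small resolutions and supports an optional L1 mix):
```
import tensorflow as tf

def ms_ssim_loss(y_true, y_pred, max_val=1.0):
    # ssim_multiscale returns 1.0 for identical images, so subtract from 1.0 to get a loss.
    return 1.0 - tf.image.ssim_multiscale(y_true, y_pred, max_val)
```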
## USAGE
```
[n] Use multiscale loss? ( y/n ?:help ) : y
```

View file

@ -0,0 +1,25 @@
# Random Color option
Helps train the model to generalize perceptual color and lightness, and improves color transfer between src and dst.
- [DESCRIPTION](#description)
- [USAGE](#usage)
![](example.jpeg)
## DESCRIPTION
Converts images to [CIE L\*a\*b* colorspace](https://en.wikipedia.org/wiki/CIELAB_color_space),
and then randomly rotates the color values around the `L*` axis. The perceptual lightness stays constant; only the `a*` and `b*`
color channels are modified. After rotation, the image is converted back to BGR (blue/green/red) colorspace.
If visualized using the [CIE L\*a\*b* cylindrical model](https://en.wikipedia.org/wiki/CIELAB_color_space#Cylindrical_model),
this is a random rotation of `h°` (hue angle, the angle of the hue in the CIELAB color wheel),
maintaining the same `C*` (chroma, relative saturation).
## USAGE
```
[n] Random color ( y/n ?:help ) : y
```

New binary image (133 KiB)

View file

@ -0,0 +1,45 @@
# Web UI
View and interact with the training preview window from your web browser.
Allows you to view and control the preview remotely, and to train on headless machines.
- [INSTALLATION](#installation)
- [DESCRIPTION](#description)
- [USAGE](#usage)
- [SSH PORT FORWARDING](#ssh-port-forwarding)
![](example.png)
## INSTALLATION
Requires additional Python dependencies to be installed:
- [Flask](https://palletsprojects.com/p/flask/),
version [1.1.1](https://pypi.org/project/Flask/1.1.1/)
- [Flask-SocketIO](https://github.com/miguelgrinberg/Flask-SocketIO/),
version [4.2.1](https://pypi.org/project/Flask-SocketIO/4.2.1/)
```
pip install Flask==1.1.1
pip install Flask-SocketIO==4.2.1
```
## DESCRIPTION
Launches a Flask web application that sends commands to the training thread
(save/exit/fetch new preview, etc.), displays live updates of the log output
(e.g. `[09:50:53][#106913][0503ms][0.3109][0.2476]`), and refreshes the graph/preview image.
## USAGE
Enable the Web UI by appending `--flask-preview` to the `train` command.
Once training begins, the Web UI will start, and can be accessed at http://localhost:5000/
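For example (the training arguments themselves are whatever you normally pass to `train`; only `--flask-preview` is new):
```
python main.py train <your usual training arguments> --flask-preview
```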
## SSH PORT FORWARDING
When running on a remote/headless box, view the Web UI in your local browser simply by
adding the ssh option `-L 5000:localhost:5000`. Once connected, the Web UI can be viewed
locally at http://localhost:5000/
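For example (the user and host names are placeholders):
```
ssh -L 5000:localhost:5000 user@training-box
```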
Several Android/iOS SSH apps (such as [JuiceSSH](https://juicessh.com/)) exist that support port forwarding,
allowing you to interact with the preview pane from anywhere with your phone.

New binary image (2.1 MiB)

View file

@ -0,0 +1,5 @@
# Example of bug:
![](preview_image_bug.jpeg)
# Demonstration of fix:
![](preview_image_fix.jpeg)

New binary image (112 KiB)

New binary image (99 KiB)

View file

@ -7,6 +7,7 @@ class FaceType(IntEnum):
FULL = 2 FULL = 2
FULL_NO_ALIGN = 3 FULL_NO_ALIGN = 3
WHOLE_FACE = 4 WHOLE_FACE = 4
CUSTOM = 5
HEAD = 10 HEAD = 10
HEAD_NO_ALIGN = 20 HEAD_NO_ALIGN = 20
@ -30,6 +31,7 @@ to_string_dict = { FaceType.HALF : 'half_face',
FaceType.WHOLE_FACE : 'whole_face', FaceType.WHOLE_FACE : 'whole_face',
FaceType.HEAD : 'head', FaceType.HEAD : 'head',
FaceType.HEAD_NO_ALIGN : 'head_no_align', FaceType.HEAD_NO_ALIGN : 'head_no_align',
FaceType.CUSTOM : 'mve_custom',
FaceType.MARK_ONLY :'mark_only', FaceType.MARK_ONLY :'mark_only',
} }

View file

@ -382,11 +382,9 @@ def expand_eyebrows(lmrks, eyebrows_expand_mod=1.0):
# Adjust eyebrow arrays # Adjust eyebrow arrays
lmrks[17:22] = top_l + eyebrows_expand_mod * 0.5 * (top_l - bot_l) lmrks[17:22] = top_l + eyebrows_expand_mod * 0.5 * (top_l - bot_l)
lmrks[22:27] = top_r + eyebrows_expand_mod * 0.5 * (top_r - bot_r) lmrks[22:27] = top_r + eyebrows_expand_mod * 0.5 * (top_r - bot_r)
return lmrks return lmrks
def get_image_hull_mask (image_shape, image_landmarks, eyebrows_expand_mod=1.0 ): def get_image_hull_mask (image_shape, image_landmarks, eyebrows_expand_mod=1.0 ):
hull_mask = np.zeros(image_shape[0:2]+(1,),dtype=np.float32) hull_mask = np.zeros(image_shape[0:2]+(1,),dtype=np.float32)
@ -441,7 +439,7 @@ def get_image_mouth_mask (image_shape, image_landmarks):
image_landmarks = image_landmarks.astype(np.int) image_landmarks = image_landmarks.astype(np.int)
cv2.fillConvexPoly( hull_mask, cv2.convexHull( image_landmarks[60:]), (1,) ) cv2.fillConvexPoly( hull_mask, cv2.convexHull( image_landmarks[48:60]), (1,) )
dilate = h // 32 dilate = h // 32
hull_mask = cv2.dilate(hull_mask, cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(dilate,dilate)), iterations = 1 ) hull_mask = cv2.dilate(hull_mask, cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(dilate,dilate)), iterations = 1 )

flaskr/__init__.py (new empty file)

flaskr/app.py (new file, +102 lines)

@ -0,0 +1,102 @@
from pathlib import Path
from flask import Flask, send_file, Response, render_template, render_template_string, request, g
from flask_socketio import SocketIO, emit
import logging
def create_flask_app(s2c, c2s, s2flask, kwargs):
app = Flask(__name__, template_folder="templates", static_folder="static")
log = logging.getLogger('werkzeug')
log.disabled = True
model_path = Path(kwargs.get('saved_models_path', ''))
filename = 'preview.png'
preview_file = str(model_path / filename)
def gen():
frame = open(preview_file, 'rb').read()
while True:
try:
frame = open(preview_file, 'rb').read()
except:
pass
yield b'--frame\r\nContent-Type: image/png\r\n\r\n'
yield frame
yield b'\r\n\r\n'
def send(queue, op):
queue.put({'op': op})
def send_and_wait(queue, op):
while not s2flask.empty():
s2flask.get()
queue.put({'op': op})
while s2flask.empty():
pass
s2flask.get()
@app.route('/save', methods=['POST'])
def save():
send(s2c, 'save')
return '', 204
@app.route('/exit', methods=['POST'])
def exit():
send(c2s, 'close')
request.environ.get('werkzeug.server.shutdown')()
return '', 204
@app.route('/update', methods=['POST'])
def update():
send(c2s, 'update')
return '', 204
@app.route('/next_preview', methods=['POST'])
def next_preview():
send(c2s, 'next_preview')
return '', 204
@app.route('/change_history_range', methods=['POST'])
def change_history_range():
send(c2s, 'change_history_range')
return '', 204
@app.route('/zoom_prev', methods=['POST'])
def zoom_prev():
send(c2s, 'zoom_prev')
return '', 204
@app.route('/zoom_next', methods=['POST'])
def zoom_next():
send(c2s, 'zoom_next')
return '', 204
@app.route('/')
def index():
return render_template('index.html')
# @app.route('/preview_image')
# def preview_image():
# return Response(gen(), mimetype='multipart/x-mixed-replace;boundary=frame')
@app.route('/preview_image')
def preview_image():
return send_file(preview_file, mimetype='image/png', cache_timeout=-1)
socketio = SocketIO(app)
@socketio.on('connect', namespace='/')
def test_connect():
emit('my response', {'data': 'Connected'})
@socketio.on('disconnect', namespace='/test')
def test_disconnect():
print('Client disconnected')
return socketio, app

flaskr/static/favicon.ico (new binary file, 284 KiB)

View file

@ -0,0 +1,95 @@
<head>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js"
integrity="sha256-CSXorXvZcTkaix6Yvo6HppcZGetbYMGWSFlBw8HfCJo="
crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/2.2.0/socket.io.js"
integrity="sha256-yr4fRk/GU1ehYJPAs8P4JlTgu0Hdsp4ZKrx8bDEDC3I="
crossorigin="anonymous"></script>
<link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons">
<link rel="stylesheet" href="https://code.getmdl.io/1.3.0/material.indigo-pink.min.css">
<script defer src="https://code.getmdl.io/1.3.0/material.min.js"></script>
<title>Training Preview</title>
<link rel="shortcut icon" href="{{ url_for('static', filename='favicon.ico') }}">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<script type="text/javascript">
$(function() {
const socket = io.connect();
socket.on('preview', function(msg) {
console.log(msg);
$('img#preview').attr("src", "{{ url_for('preview_image') }}?q=" + new Date().getTime());
});
socket.on('loss', function(loss_string) {
console.log(loss_string);
$('div#loss').html(loss_string);
});
function save() {
$.post("{{ url_for('save') }}");
}
function exit() {
$.post("{{ url_for('exit') }}");
socket.close();
}
function update() {
$.post("{{ url_for('update') }}");
}
function next_preview() {
$.post("{{ url_for('next_preview') }}");
}
function change_history_range() {
$.post("{{ url_for('change_history_range') }}");
}
function zoom_prev() {
$.post("{{ url_for('zoom_prev') }}");
}
function zoom_next() {
$.post("{{ url_for('zoom_next') }}");
}
$(document).keypress(function (event) {
switch (event.key) {
case "s" : save(); break;
case "Enter" : exit(); break;
case "p" : update(); break;
case " " : next_preview(); break;
case "l" : change_history_range(); break;
case "-" : zoom_prev(); break;
case "=" : zoom_next(); break;
}
// console.log('kp:', event);
});
$('button#save').click(save);
$('button#exit').click(exit);
$('button#update').click(update);
$('button#next_preview').click(next_preview);
$('button#change_history_range').click(change_history_range);
$('button#zoom_prev').click(zoom_prev);
$('button#zoom_next').click(zoom_next);
$('img#preview').click(update);
});
</script>
</head>
<body>
<div class="mdl-typography--headline">Training Preview</div>
<div id="loss"></div>
<div>
<button class='mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect' id='save'>Save</button>
<button class='mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect' id='exit'>Exit</button>
<button class='mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect' id='update'>Update</button>
<button class='mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect' id='next_preview'>Next preview</button>
<button class='mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect' id='change_history_range'>Change History Range</button>
<button class='mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect' id='zoom_prev'>Zoom -</button>
<button class='mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect' id='zoom_next'>Zoom +</button>
</div>
<img id='preview' src="{{ url_for('preview_image') }}" style="max-width: 100%">
</body>
</html>

main.py (15 lines changed)

@ -127,6 +127,8 @@ if __name__ == "__main__":
'silent_start' : arguments.silent_start, 'silent_start' : arguments.silent_start,
'execute_programs' : [ [int(x[0]), x[1] ] for x in arguments.execute_program ], 'execute_programs' : [ [int(x[0]), x[1] ] for x in arguments.execute_program ],
'debug' : arguments.debug, 'debug' : arguments.debug,
'dump_ckpt' : arguments.dump_ckpt,
'flask_preview' : arguments.flask_preview,
} }
from mainscripts import Trainer from mainscripts import Trainer
Trainer.main(**kwargs) Trainer.main(**kwargs)
@ -144,6 +146,9 @@ if __name__ == "__main__":
p.add_argument('--cpu-only', action="store_true", dest="cpu_only", default=False, help="Train on CPU.") p.add_argument('--cpu-only', action="store_true", dest="cpu_only", default=False, help="Train on CPU.")
p.add_argument('--force-gpu-idxs', dest="force_gpu_idxs", default=None, help="Force to choose GPU indexes separated by comma.") p.add_argument('--force-gpu-idxs', dest="force_gpu_idxs", default=None, help="Force to choose GPU indexes separated by comma.")
p.add_argument('--silent-start', action="store_true", dest="silent_start", default=False, help="Silent start. Automatically chooses Best GPU and last used model.") p.add_argument('--silent-start', action="store_true", dest="silent_start", default=False, help="Silent start. Automatically chooses Best GPU and last used model.")
p.add_argument('--dump-ckpt', action="store_true", dest="dump_ckpt", default=False, help="Dump the model to ckpt format.")
p.add_argument('--flask-preview', action="store_true", dest="flask_preview", default=False,
help="Launches a flask server to view the previews in a web browser")
p.add_argument('--execute-program', dest="execute_program", default=[], action='append', nargs='+') p.add_argument('--execute-program', dest="execute_program", default=[], action='append', nargs='+')
p.set_defaults (func=process_train) p.set_defaults (func=process_train)
@ -158,6 +163,16 @@ if __name__ == "__main__":
p.add_argument('--model', required=True, dest="model_name", choices=pathex.get_all_dir_names_startswith ( Path(__file__).parent / 'models' , 'Model_'), help="Model class name.") p.add_argument('--model', required=True, dest="model_name", choices=pathex.get_all_dir_names_startswith ( Path(__file__).parent / 'models' , 'Model_'), help="Model class name.")
p.set_defaults (func=process_exportdfm) p.set_defaults (func=process_exportdfm)
def process_exportdfm(arguments):
osex.set_process_lowest_prio()
from mainscripts import ExportDFM
ExportDFM.main(model_class_name = arguments.model_name, saved_models_path = Path(arguments.model_dir))
p = subparsers.add_parser( "exportdfm", help="Export model to use in DeepFaceLive.")
p.add_argument('--model-dir', required=True, action=fixPathAction, dest="model_dir", help="Saved models dir.")
p.add_argument('--model', required=True, dest="model_name", choices=pathex.get_all_dir_names_startswith ( Path(__file__).parent / 'models' , 'Model_'), help="Model class name.")
p.set_defaults (func=process_exportdfm)
def process_merge(arguments): def process_merge(arguments):
osex.set_process_lowest_prio() osex.set_process_lowest_prio()
from mainscripts import Merger from mainscripts import Merger

View file

@ -1,9 +1,11 @@
import os import os
import sys import sys
import traceback import traceback
import queue import queue
import threading import threading
import time import time
from enum import Enum
import numpy as np import numpy as np
import itertools import itertools
from pathlib import Path from pathlib import Path
@ -14,6 +16,7 @@ import models
from core.interact import interact as io from core.interact import interact as io
def trainerThread (s2c, c2s, e, def trainerThread (s2c, c2s, e,
socketio=None,
model_class_name = None, model_class_name = None,
saved_models_path = None, saved_models_path = None,
training_data_src_path = None, training_data_src_path = None,
@ -62,6 +65,7 @@ def trainerThread (s2c, c2s, e,
shared_state = {'after_save': False} shared_state = {'after_save': False}
loss_string = "" loss_string = ""
save_iter = model.get_iter() save_iter = model.get_iter()
def model_save(): def model_save():
if not debug and not is_reached_goal: if not debug and not is_reached_goal:
io.log_info("Saving....", end='\r') io.log_info("Saving....", end='\r')
@ -75,7 +79,8 @@ def trainerThread (s2c, c2s, e,
def send_preview(): def send_preview():
if not debug: if not debug:
previews = model.get_previews() previews = model.get_previews()
c2s.put ( {'op':'show', 'previews': previews, 'iter':model.get_iter(), 'loss_history': model.get_loss_history().copy() } ) c2s.put({'op': 'show', 'previews': previews, 'iter': model.get_iter(),
'loss_history': model.get_loss_history().copy()})
else: else:
previews = [('debug, press update for new', model.debug_one_iter())] previews = [('debug, press update for new', model.debug_one_iter())]
c2s.put({'op': 'show', 'previews': previews}) c2s.put({'op': 'show', 'previews': previews})
@ -85,7 +90,8 @@ def trainerThread (s2c, c2s, e,
if is_reached_goal: if is_reached_goal:
io.log_info('Model already trained to target iteration. You can use preview.') io.log_info('Model already trained to target iteration. You can use preview.')
else: else:
io.log_info('Starting. Target iteration: %d. Press "Enter" to stop training and save model.' % ( model.get_target_iter() ) ) io.log_info('Starting. Target iteration: %d. Press "Enter" to stop training and save model.' % (
model.get_target_iter()))
else: else:
io.log_info('Starting. Press "Enter" to stop training and save model.') io.log_info('Starting. Press "Enter" to stop training and save model.')
@ -100,7 +106,7 @@ def trainerThread (s2c, c2s, e,
for x in execute_programs: for x in execute_programs:
prog_time, prog, last_time = x prog_time, prog, last_time = x
exec_prog = False exec_prog = False
if prog_time > 0 and (cur_time - start_time) >= prog_time: if 0 < prog_time <= (cur_time - start_time):
x[0] = 0 x[0] = 0
exec_prog = True exec_prog = True
elif prog_time < 0 and (cur_time - last_time) >= -prog_time: elif prog_time < 0 and (cur_time - last_time) >= -prog_time:
@ -111,18 +117,20 @@ def trainerThread (s2c, c2s, e,
try: try:
exec(prog) exec(prog)
except Exception as e: except Exception as e:
print("Unable to execute program: %s" % (prog) ) print("Unable to execute program: %s" % prog)
if not is_reached_goal: if not is_reached_goal:
if model.get_iter() == 0: if model.get_iter() == 0:
io.log_info("") io.log_info("")
io.log_info("Trying to do the first iteration. If an error occurs, reduce the model parameters.") io.log_info(
"Trying to do the first iteration. If an error occurs, reduce the model parameters.")
io.log_info("") io.log_info("")
if sys.platform[0:3] == 'win': if sys.platform[0:3] == 'win':
io.log_info("!!!") io.log_info("!!!")
io.log_info("Windows 10 users IMPORTANT notice. You should set this setting in order to work correctly.") io.log_info(
"Windows 10 users IMPORTANT notice. You should set this setting in order to work correctly.")
io.log_info("https://i.imgur.com/B7cmDCB.jpg") io.log_info("https://i.imgur.com/B7cmDCB.jpg")
io.log_info("!!!") io.log_info("!!!")
@ -155,6 +163,9 @@ def trainerThread (s2c, c2s, e,
else: else:
io.log_info(loss_string, end='\r') io.log_info(loss_string, end='\r')
if socketio is not None:
socketio.emit('loss', loss_string)
if model.get_iter() == 1: if model.get_iter() == 1:
model_save() model_save()
@ -182,8 +193,8 @@ def trainerThread (s2c, c2s, e,
time.sleep(0.005) time.sleep(0.005)
while not s2c.empty(): while not s2c.empty():
input = s2c.get() item = s2c.get()
op = input['op'] op = item['op']
if op == 'save': if op == 'save':
model_save() model_save()
elif op == 'backup': elif op == 'backup':
@ -200,8 +211,6 @@ def trainerThread (s2c, c2s, e,
if i == -1: if i == -1:
break break
model.finalize() model.finalize()
except Exception as e: except Exception as e:
@ -211,16 +220,202 @@ def trainerThread (s2c, c2s, e,
c2s.put({'op': 'close'}) c2s.put({'op': 'close'})
class Zoom(Enum):
ZOOM_25 = (1 / 4, '25%')
ZOOM_33 = (1 / 3, '33%')
ZOOM_50 = (1 / 2, '50%')
ZOOM_67 = (2 / 3, '67%')
ZOOM_75 = (3 / 4, '75%')
ZOOM_80 = (4 / 5, '80%')
ZOOM_90 = (9 / 10, '90%')
ZOOM_100 = (1, '100%')
ZOOM_110 = (11 / 10, '110%')
ZOOM_125 = (5 / 4, '125%')
ZOOM_150 = (3 / 2, '150%')
ZOOM_175 = (7 / 4, '175%')
ZOOM_200 = (2, '200%')
ZOOM_250 = (5 / 2, '250%')
ZOOM_300 = (3, '300%')
ZOOM_400 = (4, '400%')
ZOOM_500 = (5, '500%')
def __init__(self, scale, label):
self.scale = scale
self.label = label
def prev(self):
cls = self.__class__
members = list(cls)
index = members.index(self) - 1
if index < 0:
return self
return members[index]
def next(self):
cls = self.__class__
members = list(cls)
index = members.index(self) + 1
if index >= len(members):
return self
return members[index]
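The prev()/next() helpers clamp at the ends of the member list rather than wrapping around. A minimal sketch of that behaviour, assuming the Zoom enum above is in scope:

# Illustrative sketch only: exercises the clamping in Zoom.prev()/next().
zoom = Zoom.ZOOM_100
zoom = zoom.next()             # ZOOM_110
zoom = Zoom.ZOOM_500.next()    # stays ZOOM_500, no wrap past the last member
zoom = Zoom.ZOOM_25.prev()     # stays ZOOM_25, no wrap before the first member
print(zoom.scale, zoom.label)  # 0.25 '25%'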
def scale_previews(previews, zoom=Zoom.ZOOM_100):
scaled = []
for preview in previews:
preview_name, preview_rgb = preview
scale_factor = zoom.scale
if scale_factor < 1:
scaled.append((preview_name, cv2.resize(preview_rgb, (0, 0),
fx=scale_factor,
fy=scale_factor,
interpolation=cv2.INTER_AREA)))
elif scale_factor > 1:
scaled.append((preview_name, cv2.resize(preview_rgb, (0, 0),
fx=scale_factor,
fy=scale_factor,
interpolation=cv2.INTER_LANCZOS4)))
else:
scaled.append((preview_name, preview_rgb))
return scaled
def create_preview_pane_image(previews, selected_preview, loss_history,
show_last_history_iters_count, iteration, batch_size, zoom=Zoom.ZOOM_100):
scaled_previews = scale_previews(previews, zoom)
selected_preview_name = scaled_previews[selected_preview][0]
selected_preview_rgb = scaled_previews[selected_preview][1]
h, w, c = selected_preview_rgb.shape
# HEAD
head_lines = [
'[s]:save [enter]:exit [-/+]:zoom: %s' % zoom.label,
'[p]:update [space]:next preview [l]:change history range',
'Preview: "%s" [%d/%d]' % (selected_preview_name, selected_preview + 1, len(previews))
]
head_line_height = int(15 * zoom.scale)
head_height = len(head_lines) * head_line_height
head = np.ones((head_height, w, c)) * 0.1
for i in range(0, len(head_lines)):
t = i * head_line_height
b = (i + 1) * head_line_height
head[t:b, 0:w] += imagelib.get_text_image((head_line_height, w, c), head_lines[i], color=[0.8] * c)
final = head
if loss_history is not None:
if show_last_history_iters_count == 0:
loss_history_to_show = loss_history
else:
loss_history_to_show = loss_history[-show_last_history_iters_count:]
lh_height = int(100 * zoom.scale)
lh_img = models.ModelBase.get_loss_history_preview(loss_history_to_show, iteration, w, c, lh_height)
final = np.concatenate([final, lh_img], axis=0)
final = np.concatenate([final, selected_preview_rgb], axis=0)
final = np.clip(final, 0, 1)
return (final * 255).astype(np.uint8)
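For reference, the pane is built by stacking the header, the loss-history graph and the selected preview vertically, so all three pieces must share the same width. A rough sketch of that composition with dummy arrays (shapes and names here are illustrative, not taken from the diff):

import numpy as np

w, c = 256, 3
head = np.ones((45, w, c)) * 0.1          # 3 header lines x 15 px
loss_graph = np.ones((100, w, c)) * 0.2   # stand-in for get_loss_history_preview()
preview_rgb = np.zeros((256, w, c))       # stand-in for the selected preview
pane = np.concatenate([head, loss_graph, preview_rgb], axis=0)
pane = (np.clip(pane, 0, 1) * 255).astype(np.uint8)  # same clip/scale as above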
def main(**kwargs):
io.log_info("Running trainer.\r\n")
no_preview = kwargs.get('no_preview', False)
flask_preview = kwargs.get('flask_preview', False)
s2c = queue.Queue()
c2s = queue.Queue()
e = threading.Event()
previews = None
loss_history = None
selected_preview = 0
update_preview = False
is_waiting_preview = False
show_last_history_iters_count = 0
iteration = 0
batch_size = 1
zoom = Zoom.ZOOM_100
if flask_preview:
from flaskr.app import create_flask_app
s2flask = queue.Queue()
socketio, flask_app = create_flask_app(s2c, c2s, s2flask, kwargs)
thread = threading.Thread(target=trainerThread, args=(s2c, c2s, e, socketio), kwargs=kwargs)
thread.start()
e.wait() # Wait for initial load to occur.
flask_t = threading.Thread(target=socketio.run, args=(flask_app,),
kwargs={'debug': True, 'use_reloader': False})
flask_t.start()
while True:
if not c2s.empty():
item = c2s.get()
op = item['op']
if op == 'show':
is_waiting_preview = False
loss_history = item['loss_history'] if 'loss_history' in item.keys() else None
previews = item['previews'] if 'previews' in item.keys() else None
iteration = item['iter'] if 'iter' in item.keys() else 0
# batch_size = item['batch_size'] if 'batch_size' in item.keys() else 1
if previews is not None:
update_preview = True
elif op == 'update':
if not is_waiting_preview:
is_waiting_preview = True
s2c.put({'op': 'preview'})
elif op == 'next_preview':
selected_preview = (selected_preview + 1) % len(previews)
update_preview = True
elif op == 'change_history_range':
if show_last_history_iters_count == 0:
show_last_history_iters_count = 5000
elif show_last_history_iters_count == 5000:
show_last_history_iters_count = 10000
elif show_last_history_iters_count == 10000:
show_last_history_iters_count = 50000
elif show_last_history_iters_count == 50000:
show_last_history_iters_count = 100000
elif show_last_history_iters_count == 100000:
show_last_history_iters_count = 0
update_preview = True
elif op == 'close':
s2c.put({'op': 'close'})
break
elif op == 'zoom_prev':
zoom = zoom.prev()
update_preview = True
elif op == 'zoom_next':
zoom = zoom.next()
update_preview = True
if update_preview:
update_preview = False
selected_preview = selected_preview % len(previews)
preview_pane_image = create_preview_pane_image(previews,
selected_preview,
loss_history,
show_last_history_iters_count,
iteration,
batch_size,
zoom)
# io.show_image(wnd_name, preview_pane_image)
model_path = Path(kwargs.get('saved_models_path', ''))
filename = 'preview.png'
preview_file = str(model_path / filename)
cv2.imwrite(preview_file, preview_pane_image)
s2flask.put({'op': 'show'})
socketio.emit('preview', {'iter': iteration, 'loss': loss_history[-1]})
try:
io.process_messages(0.01)
except KeyboardInterrupt:
s2c.put({'op': 'close'})
else:
thread = threading.Thread(target=trainerThread, args=(s2c, c2s, e), kwargs=kwargs)
thread.start()
@@ -229,8 +424,8 @@ def main(**kwargs):
if no_preview:
while True:
if not c2s.empty():
item = c2s.get()
op = item.get('op', '')
if op == 'close':
break
try:
@@ -252,13 +447,13 @@ def main(**kwargs):
iter = 0
while True:
if not c2s.empty():
item = c2s.get()
op = item['op']
if op == 'show':
is_waiting_preview = False
loss_history = item['loss_history'] if 'loss_history' in item.keys() else None
previews = item['previews'] if 'previews' in item.keys() else None
iter = item['iter'] if 'iter' in item.keys() else 0
if previews is not None:
max_w = 0
max_h = 0
@@ -324,7 +519,8 @@ def main(**kwargs):
is_showing = True
key_events = io.get_key_events(wnd_name)
key, chr_key, ctrl_pressed, alt_pressed, shift_pressed = key_events[-1] if len(key_events) > 0 else (
0, 0, False, False, False)
if key == ord('\n') or key == ord('\r'):
s2c.put({'op': 'close'})
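Both preview backends drive the trainer thread through the same s2c/c2s queues of small {'op': ...} dicts. A minimal, self-contained sketch of that message protocol from the UI side (the fake 'show' reply below stands in for what trainerThread() would put on c2s):

import queue

s2c, c2s = queue.Queue(), queue.Queue()   # same roles as in main()/trainerThread()
s2c.put({'op': 'preview'})                # UI -> trainer: please render fresh previews

# Pretend the trainer answered; in the real code trainerThread() posts this.
c2s.put({'op': 'show', 'previews': [], 'loss_history': [], 'iter': 0})

item = c2s.get()
if item['op'] == 'show':
    previews = item.get('previews', [])
    loss_history = item.get('loss_history', [])
elif item['op'] == 'close':
    s2c.put({'op': 'close'})              # acknowledge shutdown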
@@ -8,6 +8,7 @@ import pickle
import shutil
import tempfile
import time
import datetime
from pathlib import Path
import cv2
@@ -182,13 +183,15 @@ class ModelBase(object):
if self.is_first_run():
# save as default options only for first run model initialize
self.default_options_path.write_bytes( pickle.dumps (self.options) )
self.session_name = self.options.get('session_name', "")
self.autobackup_hour = self.options.get('autobackup_hour', 0)
self.maximum_n_backups = self.options.get('maximum_n_backups', 24)
self.write_preview_history = self.options.get('write_preview_history', False)
self.target_iter = self.options.get('target_iter',0)
self.random_flip = self.options.get('random_flip',True)
self.random_src_flip = self.options.get('random_src_flip', False)
self.random_dst_flip = self.options.get('random_dst_flip', True)
self.retraining_samples = self.options.get('retraining_samples', False)
self.on_initialize()
self.options['batch_size'] = self.batch_size
@@ -280,13 +283,21 @@ class ModelBase(object):
def ask_override(self):
return self.is_training and self.iter != 0 and io.input_in_time ("Press enter in 2 seconds to override model settings.", 5 if io.is_colab() else 2 )
def ask_session_name(self, default_value=""):
default_session_name = self.options['session_name'] = self.load_or_def_option('session_name', default_value)
self.options['session_name'] = io.input_str("Session name", default_session_name, help_message="String to refer back to in summary.txt and in autobackup foldername")
def ask_autobackup_hour(self, default_value=0):
default_autobackup_hour = self.options['autobackup_hour'] = self.load_or_def_option('autobackup_hour', default_value)
self.options['autobackup_hour'] = io.input_int(f"Autobackup every N hour", default_autobackup_hour, add_info="0..24", help_message="Autobackup model files with preview every N hour. Latest backup is the last folder when sorted by name ascending located in model/<>_autobackups")
def ask_maximum_n_backups(self, default_value=24):
default_maximum_n_backups = self.options['maximum_n_backups'] = self.load_or_def_option('maximum_n_backups', default_value)
self.options['maximum_n_backups'] = io.input_int(f"Maximum N backups", default_maximum_n_backups, help_message="Maximum amount of backups that are located in model/<>_autobackups. Inputting 0 here would allow it to autobackup as many times as it occurs.")
def ask_write_preview_history(self, default_value=False):
default_write_preview_history = self.load_or_def_option('write_preview_history', default_value)
self.options['write_preview_history'] = io.input_bool(f"Write preview history", default_write_preview_history, help_message="Preview history will be written to <ModelName>_history folder.")
if self.options['write_preview_history']:
if io.is_support_windows():
@@ -320,6 +331,10 @@ class ModelBase(object):
self.options['batch_size'] = self.batch_size = batch_size
def ask_retraining_samples(self, default_value=False):
default_retraining_samples = self.load_or_def_option('retraining_samples', default_value)
self.options['retraining_samples'] = io.input_bool("Retrain high loss samples", default_retraining_samples, help_message="Periodically retrains last 16 \"high-loss\" sample")
#overridable
def on_initialize_options(self):
@@ -382,6 +397,9 @@ class ModelBase(object):
def get_history_previews(self):
return self.onGetPreview (self.sample_for_preview, for_history=True)
def get_preview_history_writer(self):
if self.preview_history_writer is None:
self.preview_history_writer = PreviewHistoryWriter()
@@ -417,26 +435,17 @@ class ModelBase(object):
bckp_filename_list = [ self.get_strpath_storage_for_file(filename) for _, filename in self.get_model_filename_list() ]
bckp_filename_list += [ str(self.get_summary_path()), str(self.model_data_path) ]
# Create new backup
session_suffix = f'_{self.session_name}' if self.session_name else ''
idx_str = datetime.datetime.now().strftime('%Y%m%dT%H%M%S') + session_suffix
idx_backup_path = self.autobackups_path / idx_str
idx_backup_path.mkdir()
for filename in bckp_filename_list:
shutil.copy(str(filename), str(idx_backup_path / Path(filename).name))
previews = self.get_previews()
# Generate previews and save in new backup
plist = []
for i in range(len(previews)):
name, bgr = previews[i]
@@ -445,6 +454,14 @@ class ModelBase(object):
if len(plist) != 0:
self.get_preview_history_writer().post(plist, self.loss_history, self.iter)
# Check if we've exceeded the max number of backups
if self.maximum_n_backups != 0:
all_backups = sorted([x for x in self.autobackups_path.iterdir() if x.is_dir()])
while len(all_backups) > self.maximum_n_backups:
oldest_backup = all_backups.pop(0)
pathex.delete_all_files(oldest_backup)
oldest_backup.rmdir()
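Because the backup folders are named with a sortable timestamp plus the optional session suffix, a plain lexicographic sort puts the oldest backup first, which is what the pruning loop above relies on. A small sketch of that ordering, with folder names invented for illustration:

names = ['20210615T093001', '20210614T221500_mysession', '20210616T081200']
maximum_n_backups = 2
backups = sorted(names)        # oldest first: 20210614..., 20210615..., 20210616...
while len(backups) > maximum_n_backups:
    oldest = backups.pop(0)    # this folder would be deleted on disk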
def debug_one_iter(self):
images = []
for generator in self.generator_list:
@@ -586,10 +603,9 @@ class ModelBase(object):
return summary_text
@staticmethod
def get_loss_history_preview(loss_history, iter, w, c, lh_height=100):
loss_history = np.array (loss_history.copy())
lh_img = np.ones ( (lh_height,w,c) ) * 0.1
if len(loss_history) != 0:
@@ -4,7 +4,6 @@ from functools import partial
import numpy as np
from core.interact import interact as io
from core.leras import nn
from facelib import FaceType
@@ -16,6 +15,8 @@ class AMPModel(ModelBase):
#override
def on_initialize_options(self):
default_retraining_samples = self.options['retraining_samples'] = self.load_or_def_option('retraining_samples', False)
# default_usefp16 = self.options['use_fp16'] = self.load_or_def_option('use_fp16', False)
default_resolution = self.options['resolution'] = self.load_or_def_option('resolution', 224)
default_face_type = self.options['face_type'] = self.load_or_def_option('face_type', 'wf')
default_models_opt_on_gpu = self.options['models_opt_on_gpu'] = self.load_or_def_option('models_opt_on_gpu', True)
@@ -27,11 +28,28 @@ class AMPModel(ModelBase):
default_d_dims = self.options['d_dims'] = self.options.get('d_dims', None)
default_d_mask_dims = self.options['d_mask_dims'] = self.options.get('d_mask_dims', None)
default_morph_factor = self.options['morph_factor'] = self.options.get('morph_factor', 0.5)
default_eyes_mouth_prio = self.options['eyes_mouth_prio'] = self.load_or_def_option('eyes_mouth_prio', False)
default_uniform_yaw = self.options['uniform_yaw'] = self.load_or_def_option('uniform_yaw', False)
# Uncomment this only if you want to implement other loss functions
#default_loss_function = self.options['loss_function'] = self.load_or_def_option('loss_function', 'SSIM')
default_blur_out_mask = self.options['blur_out_mask'] = self.load_or_def_option('blur_out_mask', False)
default_adabelief = self.options['adabelief'] = self.load_or_def_option('adabelief', True)
default_lr_dropout = self.options['lr_dropout'] = self.load_or_def_option('lr_dropout', 'n')
default_random_warp = self.options['random_warp'] = self.load_or_def_option('random_warp', True)
default_random_downsample = self.options['random_downsample'] = self.load_or_def_option('random_downsample', False)
default_random_noise = self.options['random_noise'] = self.load_or_def_option('random_noise', False)
default_random_blur = self.options['random_blur'] = self.load_or_def_option('random_blur', False)
default_random_jpeg = self.options['random_jpeg'] = self.load_or_def_option('random_jpeg', False)
# Uncomment this only if you want to re-enable the background power option
#default_background_power = self.options['background_power'] = self.load_or_def_option('background_power', 0.0)
default_ct_mode = self.options['ct_mode'] = self.load_or_def_option('ct_mode', 'none')
default_random_color = self.options['random_color'] = self.load_or_def_option('random_color', False)
default_clipgrad = self.options['clipgrad'] = self.load_or_def_option('clipgrad', False)
ask_override = self.ask_override()
@@ -39,9 +57,12 @@ class AMPModel(ModelBase):
self.ask_autobackup_hour()
self.ask_write_preview_history()
self.ask_target_iter()
self.ask_retraining_samples()
self.ask_random_src_flip()
self.ask_random_dst_flip()
self.ask_batch_size(8)
# self.options['use_fp16'] = io.input_bool ("Use fp16", default_usefp16, help_message='Increases training/inference speed, reduces model size. Model may crash. Enable it after 1-5k iters.')
if self.is_first_run():
resolution = io.input_int("Resolution", default_resolution, add_info="64-640", help_message="More resolution requires more VRAM and time to train. Value will be adjusted to multiple of 32 .")
@@ -73,8 +94,11 @@ class AMPModel(ModelBase):
self.options['morph_factor'] = morph_factor
if self.is_first_run() or ask_override:
self.options['eyes_mouth_prio'] = io.input_bool ("Eyes and mouth priority", default_eyes_mouth_prio, help_message='Helps to fix eye problems during training like "alien eyes" and wrong eyes direction. Also makes the detail of the teeth higher.')
self.options['uniform_yaw'] = io.input_bool ("Uniform yaw distribution of samples", default_uniform_yaw, help_message='Helps to fix blurry side faces due to small amount of them in the faceset.')
self.options['blur_out_mask'] = io.input_bool ("Blur out mask", default_blur_out_mask, help_message='Blurs nearby area outside of applied face mask of training samples. The result is the background near the face is smoothed and less noticeable on swapped face. The exact xseg mask in src and dst faceset is required.')
self.options['lr_dropout'] = io.input_str (f"Use learning rate dropout", default_lr_dropout, ['n','y','cpu'], help_message="When the face is trained enough, you can enable this option to get extra sharpness and reduce subpixel shake for less amount of iterations. Enabled it before `disable random warp` and before GAN. \nn - disabled.\ny - enabled\ncpu - enabled on CPU. This allows not to use extra VRAM, sacrificing 20% time of iteration.")
default_gan_power = self.options['gan_power'] = self.load_or_def_option('gan_power', 0.0)
@@ -84,7 +108,13 @@ class AMPModel(ModelBase):
if self.is_first_run() or ask_override:
self.options['models_opt_on_gpu'] = io.input_bool ("Place models and optimizer on GPU", default_models_opt_on_gpu, help_message="When you train on one GPU, by default model and optimizer weights are placed on GPU to accelerate the process. You can place they on CPU to free up extra VRAM, thus set bigger dimensions.")
self.options['adabelief'] = io.input_bool ("Use AdaBelief optimizer?", default_adabelief, help_message="Use AdaBelief optimizer. It requires more VRAM, but the accuracy and the generalization of the model is higher.")
self.options['random_warp'] = io.input_bool ("Enable random warp of samples", default_random_warp, help_message="Random warp is required to generalize facial expressions of both faces. When the face is trained enough, you can disable it to get extra sharpness and reduce subpixel shake for less amount of iterations.")
self.options['random_downsample'] = io.input_bool("Enable random downsample of samples", default_random_downsample, help_message="")
self.options['random_noise'] = io.input_bool("Enable random noise added to samples", default_random_noise, help_message="")
self.options['random_blur'] = io.input_bool("Enable random blur of samples", default_random_blur, help_message="")
self.options['random_jpeg'] = io.input_bool("Enable random jpeg compression of samples", default_random_jpeg, help_message="")
self.options['gan_power'] = np.clip ( io.input_number ("GAN power", default_gan_power, add_info="0.0 .. 5.0", help_message="Forces the neural network to learn small details of the face. Enable it only when the face is trained enough with random_warp(off), and don't disable. The higher the value, the higher the chances of artifacts. Typical fine value is 0.1"), 0.0, 5.0 )
@@ -95,7 +125,11 @@ class AMPModel(ModelBase):
gan_dims = np.clip ( io.input_int("GAN dimensions", default_gan_dims, add_info="4-512", help_message="The dimensions of the GAN network. The higher dimensions, the more VRAM is required. You can get sharper edges even at the lowest setting. Typical fine value is 16." ), 4, 512 )
self.options['gan_dims'] = gan_dims
#self.options['background_power'] = np.clip ( io.input_number("Background power", default_background_power, add_info="0.0..1.0", help_message="Learn the area outside of the mask. Helps smooth out area near the mask boundaries. Can be used at any time"), 0.0, 1.0 )
self.options['ct_mode'] = io.input_str (f"Color transfer for src faceset", default_ct_mode, ['none','rct','lct','mkl','idt','sot', 'fs-aug'], help_message="Change color distribution of src samples close to dst samples. Try all modes to find the best.")
self.options['random_color'] = io.input_bool ("Random color", default_random_color, help_message="Samples are randomly rotated around the L axis in LAB colorspace, helps generalize training")
self.options['clipgrad'] = io.input_bool ("Enable gradient clipping", default_clipgrad, help_message="Gradient clipping reduces chance of model collapse, sacrificing speed of training.")
self.gan_model_changed = (default_gan_patch_size != self.options['gan_patch_size']) or (default_gan_dims != self.options['gan_dims'])
@@ -123,13 +157,17 @@ class AMPModel(ModelBase):
gan_power = self.gan_power = self.options['gan_power']
random_warp = self.options['random_warp']
eyes_mouth_prio = self.options['eyes_mouth_prio']
blur_out_mask = self.options['blur_out_mask']
ct_mode = self.options['ct_mode']
if ct_mode == 'none':
ct_mode = None
adabelief = self.options['adabelief']
# use_fp16 = self.options['use_fp16']
if self.is_exporting:
use_fp16 = io.input_bool ("Export quantized?", False, help_message='Makes the exported model faster. If you have problems, disable this option.')
@@ -300,13 +338,15 @@ class AMPModel(ModelBase):
lr_dropout = 1.0
self.G_weights = self.encoder.get_weights() + self.decoder.get_weights()
OptimizerClass = nn.AdaBelief if adabelief else nn.RMSprop
self.src_dst_opt = OptimizerClass(lr=5e-5, lr_dropout=lr_dropout, lr_cos=lr_cos, clipnorm=clipnorm, name='src_dst_opt')
self.src_dst_opt.initialize_variables (self.G_weights, vars_on_cpu=optimizer_vars_on_cpu)
self.model_filename_list += [ (self.src_dst_opt, 'src_dst_opt.npy') ]
if gan_power != 0:
self.GAN = nn.UNetPatchDiscriminator(patch_size=self.options['gan_patch_size'], in_ch=input_ch, base_ch=self.options['gan_dims'], use_fp16=use_fp16, name="GAN")
self.GAN_opt = OptimizerClass(lr=5e-5, lr_dropout=lr_dropout, lr_cos=lr_cos, clipnorm=clipnorm, name='GAN_opt')
self.GAN_opt.initialize_variables ( self.GAN.get_weights(), vars_on_cpu=optimizer_vars_on_cpu)
self.model_filename_list += [ [self.GAN, 'GAN.npy'],
[self.GAN_opt, 'GAN_opt.npy'] ]
@@ -424,6 +464,7 @@ class AMPModel(ModelBase):
gpu_dst_loss += tf.reduce_mean (10*tf.square(gpu_target_dst_masked-gpu_pred_dst_dst_masked), axis=[1,2,3])
# Eyes+mouth prio loss
if eyes_mouth_prio:
gpu_src_loss += tf.reduce_mean (300*tf.abs (gpu_target_src*gpu_target_srcm_em-gpu_pred_src_src*gpu_target_srcm_em), axis=[1,2,3])
gpu_dst_loss += tf.reduce_mean (300*tf.abs (gpu_target_dst*gpu_target_dstm_em-gpu_pred_dst_dst*gpu_target_dstm_em), axis=[1,2,3])
@@ -558,30 +599,52 @@ class AMPModel(ModelBase):
if ct_mode is not None:
src_generators_count = int(src_generators_count * 1.5)
fs_aug = None
if ct_mode == 'fs-aug':
fs_aug = 'fs-aug'
channel_type = SampleProcessor.ChannelType.LAB_RAND_TRANSFORM if self.options['random_color'] else SampleProcessor.ChannelType.BGR
self.set_training_data_generators ([
SampleGeneratorFace(training_data_src_path, random_ct_samples_path=random_ct_samples_path, debug=self.is_debug(), batch_size=self.get_batch_size(),
sample_process_options=SampleProcessor.Options(scale_range=[-0.125, 0.125], random_flip=self.random_src_flip),
output_sample_types = [ {'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':random_warp,
'random_downsample': self.options['random_downsample'],
'random_noise': self.options['random_noise'],
'random_blur': self.options['random_blur'],
'random_jpeg': self.options['random_jpeg'],
'transform':True, 'channel_type' : channel_type, 'ct_mode': ct_mode,
'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
{'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':False,
'transform':True, 'channel_type' : channel_type, 'ct_mode': ct_mode,
'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
{'sample_type': SampleProcessor.SampleType.FACE_MASK, 'warp':False , 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
{'sample_type': SampleProcessor.SampleType.FACE_MASK, 'warp':False , 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE_EYES, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
],
uniform_yaw_distribution=self.options['uniform_yaw'], #or self.pretrain
generators_count=src_generators_count ),
SampleGeneratorFace(training_data_dst_path, debug=self.is_debug(), batch_size=self.get_batch_size(),
sample_process_options=SampleProcessor.Options(scale_range=[-0.125, 0.125], random_flip=self.random_dst_flip),
output_sample_types = [ {'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':random_warp,
'random_downsample': self.options['random_downsample'],
'random_noise': self.options['random_noise'],
'random_blur': self.options['random_blur'],
'random_jpeg': self.options['random_jpeg'],
'transform':True, 'channel_type' : channel_type, 'ct_mode': fs_aug,
'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
{'sample_type': SampleProcessor.SampleType.FACE_IMAGE,'warp':False , 'transform':True, 'channel_type' : channel_type, 'ct_mode': fs_aug, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
{'sample_type': SampleProcessor.SampleType.FACE_MASK, 'warp':False , 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
{'sample_type': SampleProcessor.SampleType.FACE_MASK, 'warp':False , 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE_EYES, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
],
uniform_yaw_distribution=self.options['uniform_yaw'], #or self.pretrain,
generators_count=dst_generators_count )
])
if self.options['retraining_samples']:
self.last_src_samples_loss = []
self.last_dst_samples_loss = []
def export_dfm (self):
output_path=self.get_strpath_storage_for_file('model.dfm')
@@ -651,6 +714,27 @@ class AMPModel(ModelBase):
src_loss, dst_loss = self.train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
if self.options['retraining_samples']:
for i in range(bs):
self.last_src_samples_loss.append ( (src_loss[i], target_src[i], target_srcm[i], target_srcm_em[i]) )
self.last_dst_samples_loss.append ( (dst_loss[i], target_dst[i], target_dstm[i], target_dstm_em[i]) )
if len(self.last_src_samples_loss) >= bs*16:
src_samples_loss = sorted(self.last_src_samples_loss, key=operator.itemgetter(0), reverse=True)
dst_samples_loss = sorted(self.last_dst_samples_loss, key=operator.itemgetter(0), reverse=True)
target_src = np.stack( [ x[1] for x in src_samples_loss[:bs] ] )
target_srcm = np.stack( [ x[2] for x in src_samples_loss[:bs] ] )
target_srcm_em = np.stack( [ x[3] for x in src_samples_loss[:bs] ] )
target_dst = np.stack( [ x[1] for x in dst_samples_loss[:bs] ] )
target_dstm = np.stack( [ x[2] for x in dst_samples_loss[:bs] ] )
target_dstm_em = np.stack( [ x[3] for x in dst_samples_loss[:bs] ] )
src_loss, dst_loss = self.train (target_src, target_src, target_srcm, target_srcm_em, target_dst, target_dst, target_dstm, target_dstm_em)
self.last_src_samples_loss = []
self.last_dst_samples_loss = []
if self.gan_power != 0:
self.GAN_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
@@ -1,6 +1,5 @@
import multiprocessing
import operator
import numpy as np
@@ -26,7 +25,6 @@ class SAEHDModel(ModelBase):
else:
suggest_batch_size = 4
min_res = 64
max_res = 640
@@ -42,7 +40,8 @@ class SAEHDModel(ModelBase):
default_d_dims = self.options['d_dims'] = self.options.get('d_dims', None)
default_d_mask_dims = self.options['d_mask_dims'] = self.options.get('d_mask_dims', None)
default_masked_training = self.options['masked_training'] = self.load_or_def_option('masked_training', True)
default_eyes_prio = self.options['eyes_prio'] = self.load_or_def_option('eyes_prio', False)
default_mouth_prio = self.options['mouth_prio'] = self.load_or_def_option('mouth_prio', False)
default_uniform_yaw = self.options['uniform_yaw'] = self.load_or_def_option('uniform_yaw', False)
default_blur_out_mask = self.options['blur_out_mask'] = self.load_or_def_option('blur_out_mask', False)
@@ -52,20 +51,32 @@ class SAEHDModel(ModelBase):
lr_dropout = {True:'y', False:'n'}.get(lr_dropout, lr_dropout) #backward comp
default_lr_dropout = self.options['lr_dropout'] = lr_dropout
default_loss_function = self.options['loss_function'] = self.load_or_def_option('loss_function', 'SSIM')
default_random_warp = self.options['random_warp'] = self.load_or_def_option('random_warp', True)
default_random_hsv_power = self.options['random_hsv_power'] = self.load_or_def_option('random_hsv_power', 0.0)
default_random_downsample = self.options['random_downsample'] = self.load_or_def_option('random_downsample', False)
default_random_noise = self.options['random_noise'] = self.load_or_def_option('random_noise', False)
default_random_blur = self.options['random_blur'] = self.load_or_def_option('random_blur', False)
default_random_jpeg = self.options['random_jpeg'] = self.load_or_def_option('random_jpeg', False)
default_background_power = self.options['background_power'] = self.load_or_def_option('background_power', 0.0)
default_true_face_power = self.options['true_face_power'] = self.load_or_def_option('true_face_power', 0.0)
default_face_style_power = self.options['face_style_power'] = self.load_or_def_option('face_style_power', 0.0)
default_bg_style_power = self.options['bg_style_power'] = self.load_or_def_option('bg_style_power', 0.0)
default_ct_mode = self.options['ct_mode'] = self.load_or_def_option('ct_mode', 'none')
default_random_color = self.options['random_color'] = self.load_or_def_option('random_color', False)
default_clipgrad = self.options['clipgrad'] = self.load_or_def_option('clipgrad', False)
default_pretrain = self.options['pretrain'] = self.load_or_def_option('pretrain', False)
ask_override = self.ask_override()
if self.is_first_run() or ask_override:
self.ask_session_name()
self.ask_autobackup_hour()
self.ask_maximum_n_backups()
self.ask_write_preview_history()
self.ask_target_iter()
self.ask_retraining_samples()
self.ask_random_src_flip()
self.ask_random_dst_flip()
self.ask_batch_size(suggest_batch_size)
@@ -75,10 +86,7 @@ class SAEHDModel(ModelBase):
resolution = io.input_int("Resolution", default_resolution, add_info="64-640", help_message="More resolution requires more VRAM and time to train. Value will be adjusted to multiple of 16 and 32 for -d archi.")
resolution = np.clip ( (resolution // 16) * 16, min_res, max_res)
self.options['resolution'] = resolution
self.options['face_type'] = io.input_str ("Face type", default_face_type, ['h','mf','f','wf','head', 'custom'], help_message="Half / mid face / full face / whole face / head / custom. Half face has better resolution, but covers less area of cheeks. Mid face is 30% wider than half face. 'Whole face' covers full area of face include forehead. 'head' covers full head, but requires XSeg for src and dst faceset.").lower()
while True:
archi = io.input_str ("AE architecture", default_archi, help_message=\
@@ -133,16 +141,21 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
self.options['d_mask_dims'] = d_mask_dims + d_mask_dims % 2
if self.is_first_run() or ask_override:
if self.options['face_type'] == 'wf' or self.options['face_type'] == 'head' or self.options['face_type'] == 'custom':
self.options['masked_training'] = io.input_bool ("Masked training", default_masked_training, help_message="This option is available only for 'whole_face' or 'head' type. Masked training clips training area to full_face mask or XSeg mask, thus network will train the faces properly.")
self.options['eyes_prio'] = io.input_bool ("Eyes priority", default_eyes_prio, help_message='Helps to fix eye problems during training like "alien eyes" and wrong eyes direction ( especially on HD architectures ) by forcing the neural network to train eyes with higher priority. before/after https://i.imgur.com/YQHOuSR.jpg ')
self.options['mouth_prio'] = io.input_bool ("Mouth priority", default_mouth_prio, help_message='Helps to fix mouth problems during training by forcing the neural network to train mouth with higher priority similar to eyes ')
self.options['uniform_yaw'] = io.input_bool ("Uniform yaw distribution of samples", default_uniform_yaw, help_message='Helps to fix blurry side faces due to small amount of them in the faceset.')
self.options['blur_out_mask'] = io.input_bool ("Blur out mask", default_blur_out_mask, help_message='Blurs nearby area outside of applied face mask of training samples. The result is the background near the face is smoothed and less noticeable on swapped face. The exact xseg mask in src and dst faceset is required.')
default_gan_version = self.options['gan_version'] = self.load_or_def_option('gan_version', 2)
default_gan_power = self.options['gan_power'] = self.load_or_def_option('gan_power', 0.0)
default_gan_patch_size = self.options['gan_patch_size'] = self.load_or_def_option('gan_patch_size', self.options['resolution'] // 8)
default_gan_dims = self.options['gan_dims'] = self.load_or_def_option('gan_dims', 16)
default_gan_smoothing = self.options['gan_smoothing'] = self.load_or_def_option('gan_smoothing', 0.1)
default_gan_noise = self.options['gan_noise'] = self.load_or_def_option('gan_noise', 0.0)
if self.is_first_run() or ask_override:
self.options['models_opt_on_gpu'] = io.input_bool ("Place models and optimizer on GPU", default_models_opt_on_gpu, help_message="When you train on one GPU, by default model and optimizer weights are placed on GPU to accelerate the process. You can place they on CPU to free up extra VRAM, thus set bigger dimensions.")
@@ -151,28 +164,48 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
self.options['lr_dropout'] = io.input_str (f"Use learning rate dropout", default_lr_dropout, ['n','y','cpu'], help_message="When the face is trained enough, you can enable this option to get extra sharpness and reduce subpixel shake for less amount of iterations. Enabled it before `disable random warp` and before GAN. \nn - disabled.\ny - enabled\ncpu - enabled on CPU. This allows not to use extra VRAM, sacrificing 20% time of iteration.")
self.options['loss_function'] = io.input_str(f"Loss function", default_loss_function, ['SSIM', 'MS-SSIM', 'MS-SSIM+L1'],
help_message="Change loss function used for image quality assessment.")
self.options['random_warp'] = io.input_bool ("Enable random warp of samples", default_random_warp, help_message="Random warp is required to generalize facial expressions of both faces. When the face is trained enough, you can disable it to get extra sharpness and reduce subpixel shake for less amount of iterations.")
self.options['random_hsv_power'] = np.clip ( io.input_number ("Random hue/saturation/light intensity", default_random_hsv_power, add_info="0.0 .. 0.3", help_message="Random hue/saturation/light intensity applied to the src face set only at the input of the neural network. Stabilizes color perturbations during face swapping. Reduces the quality of the color transfer by selecting the closest one in the src faceset. Thus the src faceset must be diverse enough. Typical fine value is 0.05"), 0.0, 0.3 )
self.options['random_downsample'] = io.input_bool("Enable random downsample of samples", default_random_downsample, help_message="")
self.options['random_noise'] = io.input_bool("Enable random noise added to samples", default_random_noise, help_message="")
self.options['random_blur'] = io.input_bool("Enable random blur of samples", default_random_blur, help_message="")
self.options['random_jpeg'] = io.input_bool("Enable random jpeg compression of samples", default_random_jpeg, help_message="")
self.options['gan_version'] = np.clip (io.input_int("GAN version", default_gan_version, add_info="2 or 3", help_message="Choose GAN version (v2: 7/16/2020, v3: 1/3/2021):"), 2, 3)
if self.options['gan_version'] == 2:
self.options['gan_power'] = np.clip ( io.input_number ("GAN power", default_gan_power, add_info="0.0 .. 10.0", help_message="Train the network in Generative Adversarial manner. Forces the neural network to learn small details of the face. Enable it only when the face is trained enough and don't disable. Typical value is 0.1"), 0.0, 10.0 )
else:
self.options['gan_power'] = np.clip ( io.input_number ("GAN power", default_gan_power, add_info="0.0 .. 1.0", help_message="Forces the neural network to learn small details of the face. Enable it only when the face is trained enough with lr_dropout(on) and random_warp(off), and don't disable. The higher the value, the higher the chances of artifacts. Typical fine value is 0.1"), 0.0, 1.0 )
if self.options['gan_power'] != 0.0:
if self.options['gan_version'] == 3:
gan_patch_size = np.clip ( io.input_int("GAN patch size", default_gan_patch_size, add_info="3-640", help_message="The higher patch size, the higher the quality, the more VRAM is required. You can get sharper edges even at the lowest setting. Typical fine value is resolution / 8." ), 3, 640 )
self.options['gan_patch_size'] = gan_patch_size
gan_dims = np.clip ( io.input_int("GAN dimensions", default_gan_dims, add_info="4-64", help_message="The dimensions of the GAN network. The higher dimensions, the more VRAM is required. You can get sharper edges even at the lowest setting. Typical fine value is 16." ), 4, 64 )
self.options['gan_dims'] = gan_dims
self.options['gan_smoothing'] = np.clip ( io.input_number("GAN label smoothing", default_gan_smoothing, add_info="0 - 0.5", help_message="Uses soft labels with values slightly off from 0/1 for GAN, has a regularizing effect"), 0, 0.5)
self.options['gan_noise'] = np.clip ( io.input_number("GAN noisy labels", default_gan_noise, add_info="0 - 0.5", help_message="Marks some images with the wrong label, helps prevent collapse"), 0, 0.5)
if 'df' in self.options['archi']: if 'df' in self.options['archi']:
self.options['true_face_power'] = np.clip ( io.input_number ("'True face' power.", default_true_face_power, add_info="0.0000 .. 1.0", help_message="Experimental option. Discriminates result face to be more like src face. Higher value - stronger discrimination. Typical value is 0.01 . Comparison - https://i.imgur.com/czScS9q.png"), 0.0, 1.0 ) self.options['true_face_power'] = np.clip ( io.input_number ("'True face' power.", default_true_face_power, add_info="0.0000 .. 1.0", help_message="Experimental option. Discriminates result face to be more like src face. Higher value - stronger discrimination. Typical value is 0.01 . Comparison - https://i.imgur.com/czScS9q.png"), 0.0, 1.0 )
else: else:
self.options['true_face_power'] = 0.0 self.options['true_face_power'] = 0.0
self.options['background_power'] = np.clip ( io.input_number("Background power", default_background_power, add_info="0.0..1.0", help_message="Learn the area outside of the mask. Helps smooth out area near the mask boundaries. Can be used at any time"), 0.0, 1.0 )
self.options['face_style_power'] = np.clip ( io.input_number("Face style power", default_face_style_power, add_info="0.0..100.0", help_message="Learn the color of the predicted face to be the same as dst inside mask. If you want to use this option with 'whole_face' you have to use XSeg trained mask. Warning: Enable it only after 10k iters, when predicted face is clear enough to start learn style. Start from 0.001 value and check history changes. Enabling this option increases the chance of model collapse."), 0.0, 100.0 ) self.options['face_style_power'] = np.clip ( io.input_number("Face style power", default_face_style_power, add_info="0.0..100.0", help_message="Learn the color of the predicted face to be the same as dst inside mask. If you want to use this option with 'whole_face' you have to use XSeg trained mask. Warning: Enable it only after 10k iters, when predicted face is clear enough to start learn style. Start from 0.001 value and check history changes. Enabling this option increases the chance of model collapse."), 0.0, 100.0 )
self.options['bg_style_power'] = np.clip ( io.input_number("Background style power", default_bg_style_power, add_info="0.0..100.0", help_message="Learn the area outside mask of the predicted face to be the same as dst. If you want to use this option with 'whole_face' you have to use XSeg trained mask. For whole_face you have to use XSeg trained mask. This can make face more like dst. Enabling this option increases the chance of model collapse. Typical value is 2.0"), 0.0, 100.0 ) self.options['bg_style_power'] = np.clip ( io.input_number("Background style power", default_bg_style_power, add_info="0.0..100.0", help_message="Learn the area outside mask of the predicted face to be the same as dst. If you want to use this option with 'whole_face' you have to use XSeg trained mask. For whole_face you have to use XSeg trained mask. This can make face more like dst. Enabling this option increases the chance of model collapse. Typical value is 2.0"), 0.0, 100.0 )
self.options['ct_mode'] = io.input_str (f"Color transfer for src faceset", default_ct_mode, ['none','rct','lct','mkl','idt','sot'], help_message="Change color distribution of src samples close to dst samples. Try all modes to find the best.") self.options['ct_mode'] = io.input_str (f"Color transfer for src faceset", default_ct_mode, ['none','rct','lct','mkl','idt','sot', 'fs-aug'], help_message="Change color distribution of src samples close to dst samples. Try all modes to find the best. FS aug adds random color to dst and src")
self.options['random_color'] = io.input_bool ("Random color", default_random_color, help_message="Samples are randomly rotated around the L axis in LAB colorspace, helps generalize training")
self.options['clipgrad'] = io.input_bool ("Enable gradient clipping", default_clipgrad, help_message="Gradient clipping reduces chance of model collapse, sacrificing speed of training.") self.options['clipgrad'] = io.input_bool ("Enable gradient clipping", default_clipgrad, help_message="Gradient clipping reduces chance of model collapse, sacrificing speed of training.")
self.options['pretrain'] = io.input_bool ("Enable pretraining mode", default_pretrain, help_message="Pretrain the model with large amount of various faces. After that, model can be used to train the fakes more quickly. Forces random_warp=N, random_flips=Y, gan_power=0.0, lr_dropout=N, styles=0.0, uniform_yaw=Y") self.options['pretrain'] = io.input_bool ("Enable pretraining mode", default_pretrain, help_message="Pretrain the model with large amount of various faces. After that, model can be used to train the fakes more quickly. Forces random_warp=N, random_flips=Y, gan_power=0.0, lr_dropout=N, styles=0.0, uniform_yaw=Y")
@@ -197,12 +230,11 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                            'mf'     : FaceType.MID_FULL,
                            'f'      : FaceType.FULL,
                            'wf'     : FaceType.WHOLE_FACE,
                            'custom' : FaceType.CUSTOM,
                            'head'   : FaceType.HEAD}[ self.options['face_type'] ]

        eyes_prio = self.options['eyes_prio']
        mouth_prio = self.options['mouth_prio']

        archi_split = self.options['archi'].split('-')
@@ -223,6 +255,10 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
        adabelief = self.options['adabelief']

        use_fp16 = self.options['use_fp16']
        if self.is_exporting:
            use_fp16 = io.input_bool ("Export quantized?", False, help_message='Makes the exported model faster. If you have problems, disable this option.')
@@ -313,7 +349,11 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
        if self.is_training:
            if gan_power != 0:
                if self.options['gan_version'] == 2:
                    self.D_src = nn.UNetPatchDiscriminatorV2(patch_size=resolution//16, in_ch=input_ch, name="D_src", use_fp16=self.options['use_fp16'])
                    self.model_filename_list += [ [self.D_src, 'D_src_v2.npy'] ]
                else:
                    self.D_src = nn.UNetPatchDiscriminator(patch_size=self.options['gan_patch_size'], in_ch=input_ch, base_ch=self.options['gan_dims'], use_fp16=self.options['use_fp16'], name="D_src")
                    self.model_filename_list += [ [self.D_src, 'GAN.npy'] ]

            # Initialize optimizers
@@ -347,6 +387,11 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                self.model_filename_list += [ (self.D_code_opt, 'D_code_opt.npy') ]

            if gan_power != 0:
                if self.options['gan_version'] == 2:
                    self.D_src_dst_opt = OptimizerClass(lr=lr, lr_dropout=lr_dropout, lr_cos=lr_cos, clipnorm=clipnorm, name='D_src_dst_opt')
                    self.D_src_dst_opt.initialize_variables ( self.D_src.get_weights(), vars_on_cpu=optimizer_vars_on_cpu, lr_dropout_on_cpu=self.options['lr_dropout']=='cpu')#+self.D_src_x2.get_weights()
                    self.model_filename_list += [ (self.D_src_dst_opt, 'D_src_v2_opt.npy') ]
                else:
                    self.D_src_dst_opt = OptimizerClass(lr=lr, lr_dropout=lr_dropout, lr_cos=lr_cos, clipnorm=clipnorm, name='GAN_opt')
                    self.D_src_dst_opt.initialize_variables ( self.D_src.get_weights(), vars_on_cpu=optimizer_vars_on_cpu, lr_dropout_on_cpu=self.options['lr_dropout']=='cpu')#+self.D_src_x2.get_weights()
                    self.model_filename_list += [ (self.D_src_dst_opt, 'GAN_opt.npy') ]
@@ -380,11 +425,27 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                        gpu_warped_dst      = self.warped_dst [batch_slice,:,:,:]
                        gpu_target_src      = self.target_src [batch_slice,:,:,:]
                        gpu_target_dst      = self.target_dst [batch_slice,:,:,:]
                        gpu_target_srcm_all = self.target_srcm[batch_slice,:,:,:]
                        gpu_target_srcm_em  = self.target_srcm_em[batch_slice,:,:,:]
                        gpu_target_dstm_all = self.target_dstm[batch_slice,:,:,:]
                        gpu_target_dstm_em  = self.target_dstm_em[batch_slice,:,:,:]

                        gpu_target_srcm_anti = 1-gpu_target_srcm_all
                        gpu_target_dstm_anti = 1-gpu_target_dstm_all

                        if blur_out_mask:
                            sigma = resolution / 128

                            x = nn.gaussian_blur(gpu_target_src*gpu_target_srcm_anti, sigma)
                            y = 1-nn.gaussian_blur(gpu_target_srcm_all, sigma)
                            y = tf.where(tf.equal(y, 0), tf.ones_like(y), y)
                            gpu_target_src = gpu_target_src*gpu_target_srcm_all + (x/y)*gpu_target_srcm_anti

                            x = nn.gaussian_blur(gpu_target_dst*gpu_target_dstm_anti, sigma)
                            y = 1-nn.gaussian_blur(gpu_target_dstm_all, sigma)
                            y = tf.where(tf.equal(y, 0), tf.ones_like(y), y)
                            gpu_target_dst = gpu_target_dst*gpu_target_dstm_all + (x/y)*gpu_target_dstm_anti
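                        # Reading of the blur_out_mask block above (not part of the original commit text):
                        # outside the face hull the target is replaced by a mask-normalized Gaussian blur,
                        # effectively blur(img*anti_mask) / blur(anti_mask), so background detail is softened
                        # near the mask edge without smearing face pixels into the background.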
@@ -434,6 +495,16 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                        gpu_pred_dst_dstm_list.append(gpu_pred_dst_dstm)
                        gpu_pred_src_dstm_list.append(gpu_pred_src_dstm)

                        # unpack masks from one combined mask
                        gpu_target_srcm           = tf.clip_by_value (gpu_target_srcm_all, 0, 1)
                        gpu_target_dstm           = tf.clip_by_value (gpu_target_dstm_all, 0, 1)
                        gpu_target_srcm_eye_mouth = tf.clip_by_value (gpu_target_srcm_em-1, 0, 1)
                        gpu_target_dstm_eye_mouth = tf.clip_by_value (gpu_target_dstm_em-1, 0, 1)
                        gpu_target_srcm_mouth     = tf.clip_by_value (gpu_target_srcm_em-2, 0, 1)
                        gpu_target_dstm_mouth     = tf.clip_by_value (gpu_target_dstm_em-2, 0, 1)
                        gpu_target_srcm_eyes      = tf.clip_by_value (gpu_target_srcm_eye_mouth-gpu_target_srcm_mouth, 0, 1)
                        gpu_target_dstm_eyes      = tf.clip_by_value (gpu_target_dstm_eye_mouth-gpu_target_dstm_mouth, 0, 1)
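                        # Illustration of the unpacking (assumes the 0/1/2/3 value coding produced by
                        # SampleProcessor: 0=background, 1=face hull, 2=eyes, 3=mouth):
                        #   em = [0, 1, 2, 3]
                        #   clip(em,   0, 1) -> [0, 1, 1, 1]   full face hull
                        #   clip(em-1, 0, 1) -> [0, 0, 1, 1]   eyes + mouth
                        #   clip(em-2, 0, 1) -> [0, 0, 0, 1]   mouth only
                        #   eyes = (eyes + mouth) - mouth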
                        gpu_target_srcm_blur = nn.gaussian_blur(gpu_target_srcm, max(1, resolution // 32) )
                        gpu_target_srcm_blur = tf.clip_by_value(gpu_target_srcm_blur, 0, 0.5) * 2
                        gpu_target_srcm_anti_blur = 1.0-gpu_target_srcm_blur
@@ -455,6 +526,12 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                        gpu_pred_src_src_masked_opt = gpu_pred_src_src*gpu_target_srcm_blur if masked_training else gpu_pred_src_src
                        gpu_pred_dst_dst_masked_opt = gpu_pred_dst_dst*gpu_target_dstm_blur if masked_training else gpu_pred_dst_dst

                        if self.options['loss_function'] == 'MS-SSIM':
                            gpu_src_loss = 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution)(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0)
                            gpu_src_loss += tf.reduce_mean ( 10*tf.square ( gpu_target_src_masked_opt - gpu_pred_src_src_masked_opt ), axis=[1,2,3])
                        elif self.options['loss_function'] == 'MS-SSIM+L1':
                            gpu_src_loss = 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution, use_l1=True)(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0)
                        else:
                            if resolution < 256:
                                gpu_src_loss = tf.reduce_mean ( 10*nn.dssim(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
                            else:
@@ -462,11 +539,34 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                                gpu_src_loss += tf.reduce_mean ( 5*nn.dssim(gpu_target_src_masked_opt, gpu_pred_src_src_masked_opt, max_val=1.0, filter_size=int(resolution/23.2)), axis=[1])
                            gpu_src_loss += tf.reduce_mean ( 10*tf.square ( gpu_target_src_masked_opt - gpu_pred_src_src_masked_opt ), axis=[1,2,3])

                        if eyes_prio or mouth_prio:
                            if eyes_prio and mouth_prio:
                                gpu_target_part_mask = gpu_target_srcm_eye_mouth
                            elif eyes_prio:
                                gpu_target_part_mask = gpu_target_srcm_eyes
                            elif mouth_prio:
                                gpu_target_part_mask = gpu_target_srcm_mouth

                            gpu_src_loss += tf.reduce_mean ( 300*tf.abs ( gpu_target_src*gpu_target_part_mask - gpu_pred_src_src*gpu_target_part_mask ), axis=[1,2,3])

                        gpu_src_loss += tf.reduce_mean ( 10*tf.square( gpu_target_srcm - gpu_pred_src_srcm ),axis=[1,2,3] )

                        if self.options['background_power'] > 0:
                            bg_factor = self.options['background_power']

                            if self.options['loss_function'] == 'MS-SSIM':
                                gpu_src_loss += bg_factor * 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution)(gpu_target_src, gpu_pred_src_src, max_val=1.0)
                                gpu_src_loss += bg_factor * tf.reduce_mean ( 10*tf.square ( gpu_target_src - gpu_pred_src_src ), axis=[1,2,3])
                            elif self.options['loss_function'] == 'MS-SSIM+L1':
                                gpu_src_loss += bg_factor * 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution, use_l1=True)(gpu_target_src, gpu_pred_src_src, max_val=1.0)
                            else:
                                if resolution < 256:
                                    gpu_src_loss += bg_factor * tf.reduce_mean ( 10*nn.dssim(gpu_target_src, gpu_pred_src_src, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
                                else:
                                    gpu_src_loss += bg_factor * tf.reduce_mean ( 5*nn.dssim(gpu_target_src, gpu_pred_src_src, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
                                    gpu_src_loss += bg_factor * tf.reduce_mean ( 5*nn.dssim(gpu_target_src, gpu_pred_src_src, max_val=1.0, filter_size=int(resolution/23.2)), axis=[1])
                                gpu_src_loss += bg_factor * tf.reduce_mean ( 10*tf.square ( gpu_target_src - gpu_pred_src_src ), axis=[1,2,3])

                        face_style_power = self.options['face_style_power'] / 100.0
                        if face_style_power != 0 and not self.pretrain:
                            gpu_src_loss += nn.style_loss(gpu_pred_src_dst_no_code_grad*tf.stop_gradient(gpu_pred_src_dstm), tf.stop_gradient(gpu_pred_dst_dst*gpu_pred_dst_dstm), gaussian_blur_radius=resolution//8, loss_weight=10000*face_style_power)
@@ -479,6 +579,12 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                            gpu_src_loss += tf.reduce_mean( (10*bg_style_power)*nn.dssim( gpu_psd_style_anti_masked, gpu_target_dst_style_anti_masked, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
                            gpu_src_loss += tf.reduce_mean( (10*bg_style_power)*tf.square(gpu_psd_style_anti_masked - gpu_target_dst_style_anti_masked), axis=[1,2,3] )

                        if self.options['loss_function'] == 'MS-SSIM':
                            gpu_dst_loss = 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution)(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0)
                            gpu_dst_loss += tf.reduce_mean ( 10*tf.square( gpu_target_dst_masked_opt - gpu_pred_dst_dst_masked_opt ), axis=[1,2,3])
                        elif self.options['loss_function'] == 'MS-SSIM+L1':
                            gpu_dst_loss = 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution, use_l1=True)(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0)
                        else:
                            if resolution < 256:
                                gpu_dst_loss = tf.reduce_mean ( 10*nn.dssim(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0, filter_size=int(resolution/11.6) ), axis=[1])
                            else:
@@ -486,8 +592,31 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                                gpu_dst_loss += tf.reduce_mean ( 5*nn.dssim(gpu_target_dst_masked_opt, gpu_pred_dst_dst_masked_opt, max_val=1.0, filter_size=int(resolution/23.2) ), axis=[1])
                            gpu_dst_loss += tf.reduce_mean ( 10*tf.square( gpu_target_dst_masked_opt - gpu_pred_dst_dst_masked_opt ), axis=[1,2,3])

                        if eyes_prio or mouth_prio:
                            if eyes_prio and mouth_prio:
                                gpu_target_part_mask = gpu_target_dstm_eye_mouth
                            elif eyes_prio:
                                gpu_target_part_mask = gpu_target_dstm_eyes
                            elif mouth_prio:
                                gpu_target_part_mask = gpu_target_dstm_mouth

                            gpu_dst_loss += tf.reduce_mean ( 300*tf.abs ( gpu_target_dst*gpu_target_part_mask - gpu_pred_dst_dst*gpu_target_part_mask ), axis=[1,2,3])

                        if self.options['background_power'] > 0:
                            bg_factor = self.options['background_power']

                            if self.options['loss_function'] == 'MS-SSIM':
                                gpu_dst_loss += bg_factor * 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution)(gpu_target_dst, gpu_pred_dst_dst, max_val=1.0)
                                gpu_dst_loss += bg_factor * tf.reduce_mean ( 10*tf.square ( gpu_target_dst - gpu_pred_dst_dst ), axis=[1,2,3])
                            elif self.options['loss_function'] == 'MS-SSIM+L1':
                                gpu_dst_loss += bg_factor * 10 * nn.MsSsim(bs_per_gpu, input_ch, resolution, use_l1=True)(gpu_target_dst, gpu_pred_dst_dst, max_val=1.0)
                            else:
                                if resolution < 256:
                                    gpu_dst_loss += bg_factor * tf.reduce_mean ( 10*nn.dssim(gpu_target_dst, gpu_pred_dst_dst, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
                                else:
                                    gpu_dst_loss += bg_factor * tf.reduce_mean ( 5*nn.dssim(gpu_target_dst, gpu_pred_dst_dst, max_val=1.0, filter_size=int(resolution/11.6)), axis=[1])
                                    gpu_dst_loss += bg_factor * tf.reduce_mean ( 5*nn.dssim(gpu_target_dst, gpu_pred_dst_dst, max_val=1.0, filter_size=int(resolution/23.2)), axis=[1])
                                gpu_dst_loss += bg_factor * tf.reduce_mean ( 10*tf.square ( gpu_target_dst - gpu_pred_dst_dst ), axis=[1,2,3])

                        gpu_dst_loss += tf.reduce_mean ( 10*tf.square( gpu_target_dstm - gpu_pred_dst_dstm ),axis=[1,2,3] )
@@ -517,22 +646,37 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                            gpu_pred_src_src_d, \
                            gpu_pred_src_src_d2 = self.D_src(gpu_pred_src_src_masked_opt)

                            def get_smooth_noisy_labels(label, tensor, smoothing=0.1, noise=0.05):
                                num_labels = self.batch_size
                                for d in tensor.get_shape().as_list()[1:]:
                                    num_labels *= d

                                probs = tf.math.log([[noise, 1-noise]]) if label == 1 else tf.math.log([[1-noise, noise]])
                                x = tf.random.categorical(probs, num_labels)
                                x = tf.cast(x, tf.float32)
                                x = tf.math.scalar_mul(1-smoothing, x)
                                # x = x + (smoothing/num_labels)
                                x = tf.reshape(x, (self.batch_size,) + tuple(tensor.get_shape().as_list()[1:]))
                                return x

                            smoothing = self.options['gan_smoothing']
                            noise = self.options['gan_noise']

                            gpu_pred_src_src_d_ones  = tf.ones_like(gpu_pred_src_src_d)
                            gpu_pred_src_src_d2_ones = tf.ones_like(gpu_pred_src_src_d2)

                            gpu_pred_src_src_d_smooth_zeros  = get_smooth_noisy_labels(0, gpu_pred_src_src_d, smoothing=smoothing, noise=noise)
                            gpu_pred_src_src_d2_smooth_zeros = get_smooth_noisy_labels(0, gpu_pred_src_src_d2, smoothing=smoothing, noise=noise)

                            gpu_target_src_d, gpu_target_src_d2 = self.D_src(gpu_target_src_masked_opt)

                            gpu_target_src_d_smooth_ones  = get_smooth_noisy_labels(1, gpu_target_src_d, smoothing=smoothing, noise=noise)
                            gpu_target_src_d2_smooth_ones = get_smooth_noisy_labels(1, gpu_target_src_d2, smoothing=smoothing, noise=noise)

                            gpu_D_src_dst_loss = DLoss(gpu_target_src_d_smooth_ones, gpu_target_src_d) \
                                                 + DLoss(gpu_pred_src_src_d_smooth_zeros, gpu_pred_src_src_d) \
                                                 + DLoss(gpu_target_src_d2_smooth_ones, gpu_target_src_d2) \
                                                 + DLoss(gpu_pred_src_src_d2_smooth_zeros, gpu_pred_src_src_d2)

                            gpu_D_src_dst_loss_gvs += [ nn.gradients (gpu_D_src_dst_loss, self.D_src.get_weights() ) ]#+self.D_src_x2.get_weights()
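                            # Reading of get_smooth_noisy_labels (inferred from the sampling above): for real
                            # targets it draws mostly-1 labels, flipping a fraction `noise` of entries to 0,
                            # then scales by (1 - smoothing) so "real" becomes ~0.9; for fakes it draws
                            # mostly-0 labels with the same flip probability. Soft and occasionally wrong
                            # labels regularize the discriminator and help prevent collapse.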
@@ -547,8 +691,6 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                        gpu_G_loss_gvs += [ nn.gradients ( gpu_G_loss, self.src_dst_trainable_weights ) ]

            # Average losses and gradients, and create optimizer update ops
            with tf.device(f'/CPU:0'):
                pred_src_src = nn.concat(gpu_pred_src_src_list, 0)
@@ -606,7 +748,7 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
            def AE_view(warped_src, warped_dst):
                return nn.tf_sess.run ( [pred_src_src, pred_src_srcm, pred_dst_dst, pred_dst_dstm, pred_src_dst, pred_src_dstm],
                                        feed_dict={self.warped_src:warped_src,
                                                   self.warped_dst:warped_dst})

            self.AE_view = AE_view
@@ -672,28 +814,50 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
            if ct_mode is not None:
                src_generators_count = int(src_generators_count * 1.5)

            fs_aug = None
            if ct_mode == 'fs-aug':
                fs_aug = 'fs-aug'

            channel_type = SampleProcessor.ChannelType.LAB_RAND_TRANSFORM if self.options['random_color'] else SampleProcessor.ChannelType.BGR

            self.set_training_data_generators ([
                    SampleGeneratorFace(training_data_src_path, random_ct_samples_path=random_ct_samples_path, debug=self.is_debug(), batch_size=self.get_batch_size(),
                        sample_process_options=SampleProcessor.Options(scale_range=[-0.15, 0.15], random_flip=random_src_flip),
                        output_sample_types = [ {'sample_type': SampleProcessor.SampleType.FACE_IMAGE, 'warp':random_warp,
                                                 'random_downsample': self.options['random_downsample'],
                                                 'random_noise': self.options['random_noise'],
                                                 'random_blur': self.options['random_blur'],
                                                 'random_jpeg': self.options['random_jpeg'],
                                                 'transform':True, 'channel_type' : channel_type, 'ct_mode': ct_mode,
                                                 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
                                                {'sample_type': SampleProcessor.SampleType.FACE_IMAGE, 'warp':False, 'transform':True, 'channel_type' : channel_type, 'ct_mode': ct_mode, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
                                                {'sample_type': SampleProcessor.SampleType.FACE_MASK,  'warp':False, 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
                                                {'sample_type': SampleProcessor.SampleType.FACE_MASK,  'warp':False, 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE_EYES, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
                                              ],
                        uniform_yaw_distribution=self.options['uniform_yaw'] or self.pretrain,
                        generators_count=src_generators_count ),

                    SampleGeneratorFace(training_data_dst_path, debug=self.is_debug(), batch_size=self.get_batch_size(),
                        sample_process_options=SampleProcessor.Options(scale_range=[-0.15, 0.15], random_flip=random_dst_flip),
                        output_sample_types = [ {'sample_type': SampleProcessor.SampleType.FACE_IMAGE, 'warp':random_warp,
                                                 'random_downsample': self.options['random_downsample'],
                                                 'random_noise': self.options['random_noise'],
                                                 'random_blur': self.options['random_blur'],
                                                 'random_jpeg': self.options['random_jpeg'],
                                                 'transform':True, 'channel_type' : channel_type, 'ct_mode': fs_aug,
                                                 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
                                                {'sample_type': SampleProcessor.SampleType.FACE_IMAGE, 'warp':False, 'transform':True, 'channel_type' : channel_type, 'ct_mode': fs_aug, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
                                                {'sample_type': SampleProcessor.SampleType.FACE_MASK,  'warp':False, 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
                                                {'sample_type': SampleProcessor.SampleType.FACE_MASK,  'warp':False, 'transform':True, 'channel_type' : SampleProcessor.ChannelType.G, 'face_mask_type' : SampleProcessor.FaceMaskType.FULL_FACE_EYES, 'face_type':self.face_type, 'data_format':nn.data_format, 'resolution': resolution},
                                              ],
                        uniform_yaw_distribution=self.options['uniform_yaw'] or self.pretrain,
                        generators_count=dst_generators_count )
                ])

            if self.options['retraining_samples']:
                self.last_src_samples_loss = []
                self.last_dst_samples_loss = []

            if self.pretrain_just_disabled:
                self.update_sample_for_preview(force_new=True)
@@ -773,6 +937,29 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
        src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)

        if self.options['retraining_samples']:
            bs = self.get_batch_size()

            for i in range(bs):
                self.last_src_samples_loss.append ( (target_src[i], target_srcm[i], target_srcm_em[i], src_loss[i] ) )
                self.last_dst_samples_loss.append ( (target_dst[i], target_dstm[i], target_dstm_em[i], dst_loss[i] ) )

            if len(self.last_src_samples_loss) >= bs*16:
                src_samples_loss = sorted(self.last_src_samples_loss, key=operator.itemgetter(3), reverse=True)
                dst_samples_loss = sorted(self.last_dst_samples_loss, key=operator.itemgetter(3), reverse=True)

                target_src     = np.stack( [ x[0] for x in src_samples_loss[:bs] ] )
                target_srcm    = np.stack( [ x[1] for x in src_samples_loss[:bs] ] )
                target_srcm_em = np.stack( [ x[2] for x in src_samples_loss[:bs] ] )

                target_dst     = np.stack( [ x[0] for x in dst_samples_loss[:bs] ] )
                target_dstm    = np.stack( [ x[1] for x in dst_samples_loss[:bs] ] )
                target_dstm_em = np.stack( [ x[2] for x in dst_samples_loss[:bs] ] )

                src_loss, dst_loss = self.src_dst_train (target_src, target_src, target_srcm, target_srcm_em, target_dst, target_dst, target_dstm, target_dstm_em)
                self.last_src_samples_loss = []
                self.last_dst_samples_loss = []
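        # Reading of the retraining_samples block above: per-sample losses from the last 16 batches are
        # buffered, the highest-loss samples are re-stacked into a single batch and trained on once more,
        # with the targets themselves used in place of the warped inputs. This assumes `operator` is
        # imported at module level for operator.itemgetter.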
        if self.options['true_face_power'] != 0 and not self.pretrain:
            self.D_train (warped_src, warped_dst)
@@ -780,14 +967,14 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
            self.D_src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)

        return ( ('src_loss', np.mean(src_loss) ), ('dst_loss', np.mean(dst_loss) ), )

    #override
    def onGetPreview(self, samples, for_history=False):
        ( (warped_src, target_src, target_srcm, target_srcm_em),
          (warped_dst, target_dst, target_dstm, target_dstm_em) ) = samples

        S, D, SS, SSM, DD, DDM, SD, SDM = [ np.clip( nn.to_data_format(x,"NHWC", self.model_data_format), 0.0, 1.0) for x in ([target_src,target_dst] + self.AE_view (target_src, target_dst) ) ]
        SW, DW = [ np.clip( nn.to_data_format(x,"NHWC", self.model_data_format), 0.0, 1.0) for x in ([warped_src,warped_dst]) ]
        SSM, DDM, SDM, = [ np.repeat (x, (3,), -1) for x in [SSM, DDM, SDM] ]

        target_srcm, target_dstm = [ nn.to_data_format(x,"NHWC", self.model_data_format) for x in ([target_srcm, target_dstm] )]
@@ -802,12 +989,17 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                st.append ( np.concatenate ( ar, axis=1) )
            result += [ ('SAEHD', np.concatenate (st, axis=0 )), ]

            wt = []
            for i in range(n_samples):
                ar = SW[i], SS[i], DW[i], DD[i], SD[i]
                wt.append ( np.concatenate ( ar, axis=1) )
            result += [ ('SAEHD warped', np.concatenate (wt, axis=0 )), ]

            st_m = []
            for i in range(n_samples):
                SD_mask = DDM[i]*SDM[i] if self.face_type < FaceType.HEAD else SDM[i]
                ar = S[i]*target_srcm[i], SS[i]*SSM[i], D[i]*target_dstm[i], DD[i]*DDM[i], SD[i]*SD_mask
                st_m.append ( np.concatenate ( ar, axis=1) )
            result += [ ('SAEHD masked', np.concatenate (st_m, axis=0 )), ]
@@ -832,10 +1024,27 @@ Examples: df, liae, df-d, df-ud, liae-ud, ...
                st.append ( np.concatenate ( ar, axis=1) )
            result += [ ('SAEHD pred', np.concatenate (st, axis=0 )), ]

            wt = []
            for i in range(n_samples):
                ar = SW[i], SS[i]
                wt.append ( np.concatenate ( ar, axis=1) )
            result += [ ('SAEHD warped src-src', np.concatenate (wt, axis=0 )), ]

            wt = []
            for i in range(n_samples):
                ar = DW[i], DD[i]
                wt.append ( np.concatenate ( ar, axis=1) )
            result += [ ('SAEHD warped dst-dst', np.concatenate (wt, axis=0 )), ]

            wt = []
            for i in range(n_samples):
                ar = DW[i], SD[i]
                wt.append ( np.concatenate ( ar, axis=1) )
            result += [ ('SAEHD warped pred', np.concatenate (wt, axis=0 )), ]

            st_m = []
            for i in range(n_samples):
                ar = S[i]*target_srcm[i], SS[i]*SSM[i]
                st_m.append ( np.concatenate ( ar, axis=1) )
            result += [ ('SAEHD masked src-src', np.concatenate (st_m, axis=0 )), ]
@@ -10,3 +10,5 @@ colorama
tensorflow-gpu==2.4.0
pyqt5
tf2onnx==1.9.3
Flask==1.1.1
flask-socketio==4.2.1
@@ -7,7 +7,8 @@ import numpy as np
from core import imagelib
from core.cv2ex import *
from core.imagelib import sd, LinearMotionBlur
from core.imagelib.color_transfer import random_lab_rotation
from facelib import FaceType, LandmarksProcessor
@@ -26,15 +27,17 @@ class SampleProcessor(object):
        BGR                = 1  #BGR
        G                  = 2  #Grayscale
        GGG                = 3  #3xGrayscale
        LAB_RAND_TRANSFORM = 4  # LAB random transform

    class FaceMaskType(IntEnum):
        NONE           = 0
        FULL_FACE      = 1  # mask all hull as grayscale
        EYES           = 2  # mask eyes hull as grayscale
        FULL_FACE_EYES = 3  # full face hull with eyes and mouth regions value-coded

    class Options(object):
        def __init__(self, random_flip = True, rotation_range=[-2,2], scale_range=[-0.05, 0.05], tx_range=[-0.05, 0.05], ty_range=[-0.05, 0.05] ):
            self.random_flip = random_flip
            self.rotation_range = rotation_range
            self.scale_range = scale_range
@@ -71,13 +74,17 @@ class SampleProcessor(object):
                def get_eyes_mask():
                    eyes_mask = LandmarksProcessor.get_image_eye_mask (sample_bgr.shape, sample_landmarks)
                    # shift eye mask values into the 1-2 range
                    clip = np.clip(eyes_mask, 0, 1)
                    clip[clip > 0.1] += 1
                    return clip

                def get_mouth_mask():
                    mouth_mask = LandmarksProcessor.get_image_mouth_mask (sample_bgr.shape, sample_landmarks)
                    # shift mouth mask values into the 2-3 range
                    clip = np.clip(mouth_mask, 0, 1)
                    clip[clip > 0.1] += 2
                    return clip

                is_face_sample = sample_landmarks is not None
@@ -93,6 +100,10 @@ class SampleProcessor(object):
                warp = opts.get('warp', False)
                transform = opts.get('transform', False)
                random_hsv_shift_amount = opts.get('random_hsv_shift_amount', 0)
                random_downsample = opts.get('random_downsample', False)
                random_noise = opts.get('random_noise', False)
                random_blur = opts.get('random_blur', False)
                random_jpeg = opts.get('random_jpeg', False)
                normalize_tanh = opts.get('normalize_tanh', False)
                ct_mode = opts.get('ct_mode', None)
                data_format = opts.get('data_format', 'NHWC')
@@ -139,10 +150,16 @@ class SampleProcessor(object):
                        img = get_full_face_mask()
                    elif face_mask_type == SPFMT.EYES:
                        img = get_eyes_mask()
                    elif face_mask_type == SPFMT.FULL_FACE_EYES:
                        # sets both the eyes and mouth mask parts on top of the full face hull
                        img = get_full_face_mask()
                        mask = img.copy()
                        mask[mask != 0.0] = 1.0

                        eye_mask = get_eyes_mask() * mask
                        img = np.where(eye_mask > 1, eye_mask, img)

                        mouth_mask = get_mouth_mask() * mask
                        img = np.where(mouth_mask > 2, mouth_mask, img)
                    else:
                        img = np.zeros ( sample_bgr.shape[0:2]+(1,), dtype=np.float32)
@@ -150,9 +167,6 @@ class SampleProcessor(object):
                        raise NotImplementedError()

                    mat = LandmarksProcessor.get_transform_mat (sample_landmarks, warp_resolution, face_type)
                    img = cv2.warpAffine( img, mat, (warp_resolution, warp_resolution), flags=cv2.INTER_LINEAR )
                else:
                    if face_type != sample_face_type:
                        mat = LandmarksProcessor.get_transform_mat (sample_landmarks, resolution, face_type)
@@ -163,11 +177,6 @@ class SampleProcessor(object):
                    img = imagelib.warp_by_params (warp_params, img, warp, transform, can_flip=True, border_replicate=border_replicate, cv2_inter=cv2.INTER_LINEAR)

                    if len(img.shape) == 2:
                        img = img[...,None]
@@ -187,11 +196,68 @@ class SampleProcessor(object):
                        img = cv2.resize( img, (resolution, resolution), interpolation=cv2.INTER_CUBIC )

                    # Apply random color transfer
                    if (ct_mode is not None and ct_sample is not None) or ct_mode == 'fs-aug':
                        if ct_mode == 'fs-aug':
                            img = imagelib.color_augmentation(img, sample_rnd_seed)
                        else:
                            if ct_sample_bgr is None:
                                ct_sample_bgr = ct_sample.load_bgr()
                            img = imagelib.color_transfer (ct_mode, img, cv2.resize( ct_sample_bgr, (resolution,resolution), interpolation=cv2.INTER_LINEAR ) )

                    randomization_order = ['blur', 'noise', 'jpeg', 'down']
                    np.random.shuffle(randomization_order)
                    for random_distortion in randomization_order:
                        # Apply random blur
                        if random_distortion == 'blur' and random_blur:
                            blur_type = np.random.choice(['motion', 'gaussian'])

                            if blur_type == 'motion':
                                blur_k = np.random.randint(10, 20)
                                blur_angle = 360 * np.random.random()
                                img = LinearMotionBlur(img, blur_k, blur_angle)

                            elif blur_type == 'gaussian':
                                blur_sigma = 5 * np.random.random() + 3

                                if blur_sigma < 5.0:
                                    kernel_size = 2.9 * blur_sigma  # 97% of weight
                                else:
                                    kernel_size = 2.6 * blur_sigma  # 95% of weight

                                kernel_size = int(kernel_size)
                                kernel_size = kernel_size + 1 if kernel_size % 2 == 0 else kernel_size
                                img = cv2.GaussianBlur(img, (kernel_size, kernel_size), blur_sigma)

                        # Apply random noise
                        if random_distortion == 'noise' and random_noise:
                            noise_type = np.random.choice(['gaussian', 'laplace', 'poisson'])
                            noise_scale = (20 * np.random.random() + 20)

                            if noise_type == 'gaussian':
                                noise = np.random.normal(scale=noise_scale, size=img.shape)
                                img += noise / 255.0
                            elif noise_type == 'laplace':
                                noise = np.random.laplace(scale=noise_scale, size=img.shape)
                                img += noise / 255.0
                            elif noise_type == 'poisson':
                                noise_lam = (15 * np.random.random() + 15)
                                noise = np.random.poisson(lam=noise_lam, size=img.shape)
                                img += noise / 255.0

                        # Apply random jpeg compression
                        if random_distortion == 'jpeg' and random_jpeg:
                            img = np.clip(img*255, 0, 255).astype(np.uint8)

                            jpeg_compression_level = np.random.randint(50, 85)
                            encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), jpeg_compression_level]
                            _, enc_img = cv2.imencode('.jpg', img, encode_param)
                            img = cv2.imdecode(enc_img, cv2.IMREAD_UNCHANGED).astype(np.float32) / 255.0

                        # Apply random downsampling
                        if random_distortion == 'down' and random_downsample:
                            down_res = np.random.randint(int(0.125*resolution), int(0.25*resolution))
                            img = cv2.resize(img, (down_res, down_res), interpolation=cv2.INTER_CUBIC)
                            img = cv2.resize(img, (resolution, resolution), interpolation=cv2.INTER_CUBIC)
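                    # Reading of the loop above: the four degradations are applied in a freshly shuffled
                    # order for every sample, so no fixed blur -> noise -> jpeg -> down pipeline is learned;
                    # each degradation still fires only when its corresponding option is enabled.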
                    if random_hsv_shift_amount != 0:
                        a = random_hsv_shift_amount
                        h_amount = max(1, int(360*a*0.5))
@@ -202,12 +268,13 @@ class SampleProcessor(object):
                        img = np.clip( cv2.cvtColor(cv2.merge([img_h, img_s, img_v]), cv2.COLOR_HSV2BGR) , 0, 1 )

                    img = imagelib.warp_by_params (warp_params, img, warp, transform, can_flip=True, border_replicate=border_replicate)
                    img = np.clip(img.astype(np.float32), 0, 1)

                    # Transform from BGR to desired channel_type
                    if channel_type == SPCT.BGR:
                        out_sample = img
                    elif channel_type == SPCT.LAB_RAND_TRANSFORM:
                        out_sample = random_lab_rotation(img, sample_rnd_seed)
                    elif channel_type == SPCT.G:
                        out_sample = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)[...,None]
                    elif channel_type == SPCT.GGG:
@@ -255,4 +322,3 @@ class SampleProcessor(object):
                outputs += [outputs_sample]

        return outputs