Mirror of https://github.com/iperov/DeepFaceLab.git (synced 2025-08-21 05:53:24 -07:00)

Merge pull request #131 from faceshiftlabs/docs/changelog

docs: update changelog

Commit 47dc0f518a: 4 changed files with 64 additions and 5 deletions

**CHANGELOG.md** (14 changed lines)

@@ -6,9 +6,16 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]

### In Progress

- [MS-SSIM loss training option](https://github.com/faceshiftlabs/DeepFaceLab/tree/feature/ms-ssim-loss-2)
- [Freezeable layers (encoder/decoder/etc.)](https://github.com/faceshiftlabs/DeepFaceLab/tree/feature/freezable-weights)
- [GAN stability improvements](https://github.com/faceshiftlabs/DeepFaceLab/tree/feature/gan-updates)
## [1.4.0] - 2020-03-24

### Added

- [MS-SSIM loss training option](doc/features/ms-ssim)
- GAN version option (v2 - late 2020 or v3 - current GAN)
- [GAN label smoothing and label noise options](doc/features/gan-options)

### Fixed

- Background Power now uses the entire image, not just the area outside of the mask, for comparison. This should help with rough areas directly next to the mask.
## [1.3.0] - 2020-03-20

### Added

@@ -53,7 +60,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

- Reset stale master branch to [seranus/DeepFaceLab](https://github.com/seranus/DeepFaceLab), 21 commits ahead of [iperov/DeepFaceLab](https://github.com/iperov/DeepFaceLab) ([compare](https://github.com/iperov/DeepFaceLab/compare/4818183...seranus:3f5ae05))
[Unreleased]: https://github.com/olivierlacan/keep-a-changelog/compare/v1.4.0...HEAD
[1.4.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.3.0...v1.4.0
[1.3.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.2.1...v1.3.0
[1.2.1]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.2.0...v1.2.1
[1.2.0]: https://github.com/faceshiftlabs/DeepFaceLab/compare/v1.1.5...v1.2.0
**doc/features/gan-options/README.md** (new file, 50 lines)

@@ -0,0 +1,50 @@

# GAN Options
Allows you to use one-sided label smoothing and noisy labels when training the discriminator.

- [ONE-SIDED LABEL SMOOTHING](#one-sided-label-smoothing)
- [USAGE](#usage)
- [Noisy labels](#noisy-labels)

## ONE-SIDED LABEL SMOOTHING


> Deep networks may suffer from overconfidence. For example, it uses very few features to classify an object. To
> mitigate the problem, deep learning uses regulation and dropout to avoid overconfidence.
>
> In GAN, if the discriminator depends on a small set of features to detect real images, the generator may just produce
> these features only to exploit the discriminator. The optimization may turn too greedy and produces no long term
> benefit. In GAN, overconfidence hurts badly. To avoid the problem, we penalize the discriminator when the prediction
> for any real images go beyond 0.9 (D(real image)>0.9). This is done by setting our target label value to be 0.9
> instead of 1.0.

- [GAN — Ways to improve GAN performance](https://towardsdatascience.com/gan-ways-to-improve-gan-performance-acf37f9f59b)
By setting the label smoothing value to any value > 0, the target label value used with the discriminator will be:

```
target label value = 1 - (label smoothing value)
```

### USAGE

```
[0.1] GAN label smoothing ( 0 - 0.5 ?:help ) : 0.1
```
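For illustration, the target computation can be sketched in a few lines of numpy. This is a minimal sketch only; the function names are assumptions for the example, not DeepFaceLab's actual code:

```python
# Sketch of one-sided label smoothing for discriminator targets.
# Illustrative only; these names are not DeepFaceLab's actual API.
import numpy as np

def real_targets(batch_size: int, label_smoothing: float) -> np.ndarray:
    # Real samples are trained toward 1 - smoothing (e.g. 0.9 for 0.1).
    return np.full(batch_size, 1.0 - label_smoothing, dtype=np.float32)

def fake_targets(batch_size: int) -> np.ndarray:
    # "One-sided" means only the real targets are smoothed;
    # fake targets are left at exactly 0.
    return np.zeros(batch_size, dtype=np.float32)
```

With the example value of 0.1 shown above, real samples get a target of 0.9 while fake samples keep a target of 0.0.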

## Noisy labels

> make the labels the noisy for the discriminator: occasionally flip the labels when training the discriminator

- [How to Train a GAN? Tips and tricks to make GANs work](https://github.com/soumith/ganhacks/blob/master/README.md#6-use-soft-and-noisy-labels)
By setting the noisy labels value to any value > 0, the target labels used with the discriminator will be flipped ("fake" => "real" / "real" => "fake") with probability p (where p is the noisy labels value).

E.g., if the value is 0.05, then ~5% of the labels will be flipped when training the discriminator.

### USAGE

```
[0.05] GAN noisy labels ( 0 - 0.5 ?:help ) : 0.05
```
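A minimal sketch of that flipping step (a hypothetical helper for illustration, not the actual DeepFaceLab implementation):

```python
# Sketch of noisy labels: flip each discriminator target ("real" <-> "fake")
# independently with probability p. Illustrative only.
import numpy as np

def flip_labels(labels: np.ndarray, p: float, rng: np.random.Generator) -> np.ndarray:
    flip_mask = rng.random(labels.shape) < p  # True with probability ~p per label
    return np.where(flip_mask, 1.0 - labels, labels)

rng = np.random.default_rng(0)
real = np.ones(1000, dtype=np.float32)  # a batch labeled entirely "real"
noisy = flip_labels(real, 0.05, rng)    # roughly 5% become "fake" (0.0)
```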

(New binary image file; not shown. Size: 62 KiB.)
@@ -3,11 +3,12 @@
Allows you to train using the MS-SSIM (multiscale structural similarity index measure) as the main loss metric,
a perceptually more accurate measure of image quality than MSE (mean squared error).

As an added benefit, you may see a decrease in ms/iteration (when using the same batch size) with Multiscale loss
enabled. You may also be able to train with a larger batch size with it enabled.

- [DESCRIPTION](#description)
- [USAGE](#usage)


## DESCRIPTION

[SSIM](https://en.wikipedia.org/wiki/Structural_similarity) is a metric for comparing the perceptual quality of an image:
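For intuition, a toy single-scale, whole-image version of the SSIM comparison can be sketched as follows. The global averaging is a simplification for the sketch (the real metric, and this feature, use a windowed, multiscale computation); the constants are the conventional SSIM stabilizers:

```python
# Toy global SSIM between two images with values in [0, 1]; illustration only,
# not the windowed multiscale implementation the feature actually uses.
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    c1 = (0.01 * data_range) ** 2  # conventional stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()          # luminance terms
    vx, vy = x.var(), y.var()            # contrast terms
    cov = ((x - mx) * (y - my)).mean()   # structure term
    return float((2 * mx * my + c1) * (2 * cov + c2)
                 / ((mx**2 + my**2 + c1) * (vx + vy + c2)))

img = np.random.default_rng(0).random((32, 32))
print(ssim_global(img, img))  # identical images score 1.0
```

Unlike MSE, the score rewards matching local structure and contrast rather than penalizing raw per-pixel differences.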