DeepFaceLab/models
Auroir c4e68ef539 Formatted Model Summary (#348)
* Formatted Model Summary

Aligns the model summary output using f-string formatting. The logic of the base class is unchanged; only the lines appended to `model_summary_text` differ. The output width is calculated from the keys and values, so the summary scales to a clean layout for any model/platform.
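
A minimal sketch of the width-scaling idea (illustrative only, not the exact `ModelBase.py` code; the `summary_text` helper and its arguments are hypothetical):
```python
def summary_text(options, title=" Model Summary "):
    # Pad every key to the widest key so the ": " separators line up.
    key_w = max(len(k) for k in options)
    rows = [f"{k:>{key_w}}: {v}" for k, v in options.items()]
    # Frame width follows the widest row, so any model/platform gets a clean box.
    content_w = max(len(r) for r in rows) + 2
    total_w = content_w + 4                      # room for the "== " / " ==" borders
    out = [f"{title:=^{total_w}}"]               # title centered in an '=' fill
    out += [f"== {r:<{content_w - 2}} ==" for r in rows]
    out.append("=" * total_w)
    return "\n".join(out)

print(summary_text({"batch_size": 4, "resolution": 128, "archi": "liae"}))
```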

GPU VRAM has been added to the output. VRAM can be detected incorrectly in broken environments, and GPUs with different memory sizes can report the same name, so showing it here adds clarity for the user and for issue tickets.

Concatenation changed from "\r\n" to "\n". CRLF line endings for Windows are handled transparently, so using "\r\n" here caused extra blank lines in the summary txt file.
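
A small illustration of the CRLF point, assuming the summary file is written in text mode (the filename below is hypothetical):
```python
# In text mode Python already translates "\n" into the platform line ending on
# write, so an explicit "\r\n" becomes "\r\r\n" on Windows and shows up as an
# extra blank line in the summary txt file. Plain "\n" is enough.
with open("model_summary.txt", "w") as f:
    f.write("\n".join(["== Model Summary ==", "== batch_size: 4 =="]) + "\n")
```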

**Examples:**
Using CUDA + SAE-LIAE
```
============= Model Summary ==============
==                                      ==
==         Model name: SAE              ==
==                                      ==
==  Current iteration: 16               ==
==                                      ==
==----------- Model Options ------------==
==                                      ==
==         batch_size: 4                ==
==        sort_by_yaw: False            ==
==        random_flip: True             ==
==         resolution: 128              ==
==          face_type: f                ==
==         learn_mask: True             ==
==     optimizer_mode: 1                ==
==              archi: liae             ==
==            ae_dims: 256              ==
==          e_ch_dims: 42               ==
==          d_ch_dims: 21               ==
== multiscale_decoder: False            ==
==         ca_weights: False            ==
==         pixel_loss: False            ==
==   face_style_power: 0.0              ==
==     bg_style_power: 0.0              ==
==    apply_random_ct: False            ==
==           clipgrad: False            ==
==                                      ==
==------------- Running On -------------==
==                                      ==
==       Device index: 0                ==
==               Name: GeForce GTX 1080 ==
==               VRAM: 8.00GB           ==
==                                      ==
==========================================
```
Colab
```
========== Model Summary ==========
==                               ==
==         Model name: SAE       ==
==                               ==
==  Current iteration: 39822     ==
==                               ==
==-------- Model Options --------==
==                               ==
==         batch_size: 24        ==
==        sort_by_yaw: True      ==
==        random_flip: False     ==
==         resolution: 128       ==
==          face_type: f         ==
==         learn_mask: True      ==
==     optimizer_mode: 2         ==
==              archi: liae      ==
==            ae_dims: 222       ==
==          e_ch_dims: 34        ==
==          d_ch_dims: 16        ==
== multiscale_decoder: True      ==
==         ca_weights: True      ==
==         pixel_loss: False     ==
==   face_style_power: 2.0       ==
==     bg_style_power: 1.5       ==
==    apply_random_ct: False     ==
==           clipgrad: True      ==
==                               ==
==--------- Running On ----------==
==                               ==
==       Device index: 0         ==
==               Name: Tesla K80 ==
==               VRAM: 11.00GB   ==
==                               ==
===================================
```
Using OpenCL + H128
```
=========================== Model Summary ===========================
==                                                                 ==
==        Model name: H128                                         ==
==                                                                 ==
== Current iteration: 0                                            ==
==                                                                 ==
==------------------------- Model Options -------------------------==
==                                                                 ==
==        batch_size: 4                                            ==
==       sort_by_yaw: False                                        ==
==       random_flip: True                                         ==
==        lighter_ae: False                                        ==
==        pixel_loss: False                                        ==
==                                                                 ==
==-------------------------- Running On ---------------------------==
==                                                                 ==
==      Device index: 0                                            ==
==              Name: Advanced Micro Devices, Inc. gfx900 (OpenCL) ==
==              VRAM: 7.98GB                                       ==
==                                                                 ==
=====================================================================
```
Using CPU (output trimmed)
```
==------- Running On --------==
==                           ==
==       Using device: CPU   ==
==                           ==
===============================
```
multi_gpu support is retained (output trimmed)
```
==------------- Running On -------------==
==                                      ==
==    Using multi_gpu: True             ==
==                                      ==
==       Device index: 1                ==
==               Name: GeForce GTX 1080 ==
==               VRAM: 8.00GB           ==
==       Device index: 2                ==
==               Name: GeForce GTX 1080 ==
==               VRAM: 8.00GB           ==
==                                      ==
==========================================
```

Low VRAM warning (output trimmed)
```
==------------- Running On -------------==
==                                      ==
==       Device index: 0                ==
==               Name: GeForce GTX 1050 ==
==               VRAM: 2.00GB           ==
==                                      ==
==========================================
/!\
/!\ WARNING:
/!\ You are using a GPU with 2GB or less VRAM. This may significantly reduce the quality of your result!
/!\ If training does not start, close all programs and try again.
/!\ Also you can disable Windows Aero Desktop to increase available VRAM.
/!\
```

* Fix indent
2019-08-16 18:35:27 +04:00
Model_DEV_FANSEG Trainer: added option for all models 2019-06-20 10:42:55 +04:00
Model_DEV_POSEEST Trainer: added option for all models 2019-06-20 10:42:55 +04:00
Model_DF Trainer: added option for all models 2019-06-20 10:42:55 +04:00
Model_H64 Trainer: added option for all models 2019-06-20 10:42:55 +04:00
Model_H128 Trainer: added option for all models 2019-06-20 10:42:55 +04:00
Model_LIAEF128 Trainer: added option for all models 2019-06-20 10:42:55 +04:00
Model_RecycleGAN fixed error "Failed to get convolution algorithm" on some systems 2019-08-11 11:17:22 +04:00
Model_SAE fixed error "Failed to get convolution algorithm" on some systems 2019-08-11 11:17:22 +04:00
__init__.py removing trailing spaces 2019-03-19 23:53:27 +04:00
archived_models.zip update == 04.20.2019 == (#242) 2019-04-20 08:23:08 +04:00
ModelBase.py Formatted Model Summary (#348) 2019-08-16 18:35:27 +04:00