Mirror of https://github.com/iperov/DeepFaceLive (synced 2025-08-19 21:13:21 -07:00)

Commit 57ac29b84d: update readme
Parent: d0d29eedd6
4 changed files with 7 additions and 22 deletions
@ -5,7 +5,7 @@

</td></tr>
@ -49,7 +49,7 @@ Here is an <a href="https://www.tiktok.com/@arnoldschwarzneggar/video/6995538782

## Minimum system requirements

-NVIDIA GTX 750 or higher (AMD is not supported <a href="doc/faq/faq.md#when-will-amd-cards-be-supported">yet</a>)
+any DirectX12-compatible videocard or NVIDIA GTX 750+

Modern CPU with AVX instructions
@ -60,7 +60,6 @@ Windows 10

</td></tr>
<tr><td colspan=2 align="center">

## Setup tutorial

<tr><td colspan=2 align="center">
@ -80,8 +79,10 @@ Windows 10

</td></tr>
<tr><td align="right"> <a href="https://mega.nz/folder/m10iELBK#Y0H6BflF9C4k_clYofC7yA">Windows 10 x64 (mega.nz)</a>
-</td><td align="center">Contains new and prev releases.</td></tr>
+</td><td align="center">
+NVIDIA build : NVIDIA cards only
+
+DirectX12 build : NVIDIA, AMD, Intel cards.
+</td></tr>
<tr><td colspan=2 align="center">
@ -9,18 +9,11 @@ All CPU-intensive tasks are done by native libraries compiled for Python, such a

<img src="architecture.png"></img>

-It consists of backend modules that work in separate processes. The modules work like a conveyor belt. CameraSource(FileSource) generates the frame and sends it to the next module for processing. The final output module outputs the stream to the screen with the desired delay. Backend modules manage the abstract controls that are implemented in the UI. Thus, the Model-View-Controller pattern is implemented. To reduce latency, some custom interprocess communication elements are implemented.
+It consists of backend modules that work in separate processes. The modules work like a conveyor belt: CameraSource (or FileSource) generates a frame and sends it to the next module for processing, so the final FPS equals the FPS of the slowest module. The final output module displays the stream on the screen with the desired delay, which helps to synchronize the sound. Backend modules manage the abstract controls that are implemented in the UI; thus, the Model-View-Controller pattern is implemented. To reduce latency, some custom interprocess communication elements are implemented directly in Python.

-## What are the current problems for implementation for AMD ?
-
-* Very slow inference in DirectML build of onnxruntime.
-
-* no alternative for CuPy. Without CuPy FaceMerger will only work on the CPU, which is only applicable for frames less than 720p.
-
## What are the current problems for implementation for Linux ?

-No problems. Technically, you only need to write an installer, and check the work of all the modules. You may have to make some adjustments somewhere. I do not use linux, so I do not have time to support development on it.
+No problems. Technically, you only need to write an installer and check the work of all the modules. DeepFaceLive supports the onnxruntime-gpu and onnxruntime-directml packages. You may have to make some adjustments somewhere. I do not use Linux, so I do not have time to support development on it.

## How many people were involved in the development?
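The conveyor-belt arrangement described in the updated paragraph above can be illustrated with a minimal, hypothetical sketch. The stage names echo the modules mentioned in the docs (CameraSource, StreamOutput); the FaceSwapper stage, the frame rates, and the use of plain multiprocessing queues are illustrative assumptions, not DeepFaceLive's actual interprocess machinery.

```python
# Minimal sketch of a conveyor-belt pipeline (not DeepFaceLive's real backend).
# Each stage runs in its own process and pulls frames from the previous stage,
# so the end-to-end FPS is bounded by the slowest stage.
import multiprocessing as mp
import time


def camera_source(out_q):
    """Stand-in for CameraSource/FileSource: emits numbered 'frames'."""
    for frame_id in range(100):
        out_q.put(frame_id)       # hand the frame to the next module
        time.sleep(1 / 30)        # pretend the camera delivers 30 FPS


def face_swapper(in_q, out_q):
    """Hypothetical slow middle stage: the whole pipeline drops to its rate."""
    while True:
        frame = in_q.get()
        time.sleep(1 / 15)        # simulate 15 FPS of inference work
        out_q.put(frame)


def stream_output(in_q):
    """Stand-in for the output module: delays the stream before display."""
    time.sleep(0.5)               # crude stand-in for the A/V sync delay
    while True:
        print("display frame", in_q.get())


if __name__ == "__main__":
    q1 = mp.Queue(maxsize=4)      # small buffers keep latency low
    q2 = mp.Queue(maxsize=4)
    stages = [
        mp.Process(target=camera_source, args=(q1,), daemon=True),
        mp.Process(target=face_swapper, args=(q1, q2), daemon=True),
        mp.Process(target=stream_output, args=(q2,), daemon=True),
    ]
    for p in stages:
        p.start()
    time.sleep(5)                 # let the toy pipeline run for a few seconds
```

With these toy numbers the display loop settles at roughly 15 FPS, which is what the paragraph means by the final FPS being equal to the FPS of the slowest module.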
BIN doc/logo_directx.png (new file, binary not shown; 25 KiB)
@ -17,8 +17,6 @@ It depends on how big the face in the frame, as well as the resolution of the mo

Play with different program settings. Any module put on the CPU will consume a lot of CPU time, which is not enough to run a game, for example. If the motherboard allows, you can install additional video cards and distribute the load on them.

## Will DeepFake detectors be able to detect a fake in my streams?

depends on final quality of your picture. Flickering face, abruptly clipping face mask, irregular color will increase chance to detect the fake.
@ -34,13 +32,6 @@ No you don't. There are public face models that can swap any face without traini

</td></tr>
<tr><td colspan=2 align="center">

-## When will AMD cards be supported?
-
-Depends on <a href="https://github.com/microsoft/onnxruntime">microsoft/onnxruntime</a> library developers. AMD cards are supported through DirectML execution provider, which is currently raw and slow.
-
-</td></tr>
-<tr><td colspan=2 align="center">
-
## I want to have more control when changing faces in a video. Will the new functionality be implemented?

No. DeepFaceLive is designed for face swapping in streams. The ability to change faces in the videos - only for test purposes.
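For context on the FAQ entry removed above: the AMD path it referred to is onnxruntime's DirectML execution provider, and the updated docs now mention the onnxruntime-gpu and onnxruntime-directml packages. Below is a minimal, hedged sketch of how an execution provider is typically selected with the onnxruntime Python API; the model path is a placeholder, and this is not DeepFaceLive's own device-selection code.

```python
# Sketch of execution-provider selection with onnxruntime (illustrative only).
# The onnxruntime-gpu wheel exposes CUDAExecutionProvider; onnxruntime-directml
# exposes DmlExecutionProvider for DirectX12-capable cards (NVIDIA, AMD, Intel).
import onnxruntime as ort

available = ort.get_available_providers()

# Prefer CUDA, then DirectML, and always fall back to the CPU.
preferred = [p for p in ("CUDAExecutionProvider", "DmlExecutionProvider")
             if p in available]
providers = preferred + ["CPUExecutionProvider"]

# "model.onnx" is a placeholder path, not a file shipped with the project.
session = ort.InferenceSession("model.onnx", providers=providers)
print("running on:", session.get_providers()[0])
```

At the time of this commit the docs described the DirectML path as raw and slow compared with CUDA, which is why AMD support was still limited.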