mirror of
https://github.com/iperov/DeepFaceLab.git
synced 2025-07-06 21:12:07 -07:00
Sorry, no more linux/mac/docker support. If someone wants to support it, create a linux/mac fork and I will reference it in the readme.
This commit is contained in:
parent 5c0c79e528
commit 64537204bb
7 changed files with 0 additions and 345 deletions

DockerCPU.md (132 lines removed)

@@ -1,132 +0,0 @@
# For Mac Users

If you only have a **MacBook**, DeepFaceLab's **GPU** mode will not work, but it can still run in **CPU** mode. The steps below will help you build the **DRE** (DeepFaceLab Runtime Environment) more easily.
### 1. Open a new terminal and Clone DeepFaceLab with git
```
$ git clone git@github.com:iperov/DeepFaceLab.git
```
### 2. Change the directory to DeepFaceLab
```
$ cd DeepFaceLab
```
### 3. Install Docker
[Docker Desktop for Mac](https://hub.docker.com/editions/community/docker-ce-desktop-mac)
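This check is not part of the original guide, but before building you may want to confirm Docker Desktop is installed and the daemon is running:

```
$ docker --version
$ docker info
```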
### 4. Build Docker Image For DeepFaceLab
```
$ docker build -t deepfacelab-cpu -f Dockerfile.cpu .
```
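Optionally (not in the original steps), you can confirm the image was built:

```
$ docker images deepfacelab-cpu
```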
### 5. Mount DeepFaceLab volume and Run it
```
$ docker run -p 8888:8888 --hostname deepfacelab-cpu --name deepfacelab-cpu -v $PWD:/notebooks deepfacelab-cpu
```

PS: Because your current directory is `DeepFaceLab`, `-v $PWD:/notebooks` mounts the `DeepFaceLab` directory to `/notebooks` inside the **Docker** container.
And then you will see the log below:
```
The Jupyter Notebook is running at:
http://(deepfacelab-cpu or 127.0.0.1):8888/?token=your token
```
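If the token has already scrolled away, one way (not in the original guide) to see the startup log again from the host is:

```
$ docker logs deepfacelab-cpu
```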
### 6. Open a new terminal to run DeepFaceLab in /notebooks
```
$ docker exec -it deepfacelab-cpu bash
$ ls -A
```

### 7. Use Jupyter in the deepfacelab-cpu bash
```
$ jupyter notebook list
```

or just open `http://127.0.0.1:8888/?token=your_token` in your browser.

PS: You can run Python through Jupyter, but here we simply run our code in bash; it is simpler and clearer. At this point the **DRE** (DeepFaceLab Runtime Environment) is almost built.
### 8. Stop or Kill Docker Container
```
$ docker stop deepfacelab-cpu
$ docker kill deepfacelab-cpu
```
### 9. Start Docker Container
```
# start the docker container
$ docker start deepfacelab-cpu
# open a bash shell to run DeepFaceLab
$ docker exec -it deepfacelab-cpu bash
```

PS: Steps 8 and 9 simply show how to stop and restart the **DRE**.
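Not in the original guide, but `docker ps` is a quick way to check whether the container is running or stopped before choosing between step 8 and step 9:

```
$ docker ps -a --filter name=deepfacelab-cpu
```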
### 10. Enjoy it

```
# make sure your current directory is `/notebooks`
$ pwd
# make sure all the `DeepFaceLab` code is in the current path `/notebooks`
$ ls -a
# make the script executable
$ chmod +x cpu.sh
# run `DeepFaceLab`
$ ./cpu.sh
```
### Details with `DeepFaceLab`
#### 1. Concepts



In our case, **Cage**'s face is the **SRC Face** and **Trump**'s face is the **DST Face**; finally we get the **Result** below.



So before you run `./cpu.sh`, you should be aware of this.
#### 2. Use MTCNN (mt) to extract faces

Do not use the DLIB extractor in CPU mode.
#### 3. Best practice for SORT
1) First, go through the unsorted aligned images and delete any groups that you can.

2) Then sort by `hist` (see the sketch after this list).
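For reference, the `hist` sort that `cpu.sh` runs boils down to a call like this (a sketch assuming the default `workspace` layout inside the container):

```
$ python main.py sort --input-dir workspace/data_src/aligned --by hist
```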
#### 4. Use `H64 model` to train and convert
Only the H64 model is reasonable to train on a home CPU. You can choose another model such as **H128 (3GB+)** or **DF (5GB+)**, but that depends entirely on your CPU performance.
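For reference, the `train` option in `cpu.sh` ends up running roughly the following for H64 in CPU mode (a sketch assuming the default `workspace` layout):

```
$ python main.py train --training-data-src-dir workspace/data_src/aligned \
    --training-data-dst-dir workspace/data_dst/aligned \
    --model-dir workspace/model --model H64 --cpu-only
```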
#### 5. Execute the script and work through the options one by one
```
root@deepfacelab-cpu:/notebooks# ./cpu.sh
1) clear workspace                     7) data_dst sort by hist
2) extract PNG from video data_src     8) train
3) data_src extract faces              9) convert
4) data_src sort                      10) converted to mp4
5) extract PNG from video data_dst    11) quit
6) data_dst extract faces
Please enter your choice:
```
#### 6. Put all videos in `workspace` directory
```
.
├── data_dst
├── data_src
├── dst.mp4
├── model
└── src.mp4

3 directories, 2 files
```
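In practice (not spelled out in the original text) this means copying your clips into `workspace` under the names shown above; the directories themselves are created by the `clear workspace` option of `cpu.sh`. For example, with hypothetical paths:

```
$ cp /path/to/source_clip.mp4 workspace/src.mp4
$ cp /path/to/target_clip.mp4 workspace/dst.mp4
```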

Dockerfile.cpu (17 lines removed)

@@ -1,17 +0,0 @@
FROM tensorflow/tensorflow:latest-py3

RUN apt-get update -qq -y \
    && apt-get install -y libsm6 libxrender1 libxext-dev python3-tk \
    && apt-get install -y ffmpeg \
    && apt-get install -y wget \
    && apt-get install -y vim \
    && apt-get install -y git \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

COPY requirements-cpu-docker.txt /opt/
RUN pip3 install cmake
RUN pip3 --no-cache-dir install -r /opt/requirements-cpu-docker.txt && rm /opt/requirements-cpu-docker.txt

WORKDIR "/notebooks"
CMD ["/run_jupyter.sh", "--allow-root"]

LINUX.md (38 lines removed)

@@ -1,38 +0,0 @@
## **GNU/Linux installation instructions**
**!!! FFmpeg and the NVIDIA driver must already be installed !!!**

First of all, I strongly recommend installing Anaconda; it makes working with DeepFaceLab much more convenient.

Official instructions: https://docs.anaconda.com/anaconda/install/linux

Afterwards, you can create an environment with the packages DeepFaceLab needs:
```
conda create -y -n deepfacelab python==3.6.6 cudatoolkit==9.0 cudnn
```

Then activate the environment:

```
source activate deepfacelab
```

And install the remaining packages:

```
python -m pip install \
    pathlib==1.0.1 \
    scandir==1.6 \
    h5py==2.7.1 \
    Keras==2.1.6 \
    opencv-python==3.4.0.12 \
    tensorflow-gpu==1.8.0 \
    scikit-image \
    dlib==19.10.0 \
    tqdm \
    git+https://www.github.com/keras-team/keras-contrib.git
```
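Not part of the original instructions, but a quick way to verify that TensorFlow was installed and can see the GPU is:

```
python -c "import tensorflow as tf; print(tf.__version__, tf.test.is_gpu_available())"
```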
Now clone the repository and run... Good luck ;-)
```
git clone https://github.com/iperov/DeepFaceLab
cd DeepFaceLab && chmod +x main.sh && ./main.sh
```

**NOTE !!! Before launching DeepFaceLab, make sure you have already executed `source activate deepfacelab` !!!**
P.S. English is not my native language, so please be kind to my mistakes.

README.md (3 lines removed)

@@ -176,9 +176,6 @@ Video tutorial: https://www.youtube.com/watch?v=K98nTNjXkq8
Windows 10 consumes % of VRAM even if the card is unused for video output.

### For Mac Users

Check out [DockerCPU.md](DockerCPU.md) for more detailed instructions.

### **Problem of the year**:

The algorithm for overlaying the neural face onto the video face is located in ConverterMasked.py.

cpu.sh (71 lines removed)

@@ -1,71 +0,0 @@
#!/bin/bash
INTERNAL_DIR=`pwd`
WORKSPACE=$INTERNAL_DIR/workspace
PYTHON=`which python`

# Menu entries for bash's select loop; each entry maps to a case branch below.
PS3="Please enter your choice: "
options=("clear workspace" "extract PNG from video data_src" "data_src extract faces" "data_src sort" "extract PNG from video data_dst" "data_dst extract faces" "data_dst sort by hist" "train" "convert" "converted to mp4" "quit")
select opt in "${options[@]}"
do
    case $opt in
        "clear workspace" )
            # Recreate an empty workspace layout.
            echo -n "Clean up workspace? [Y/n] "; read workspace_ans
            if [ "$workspace_ans" == "Y" ] || [ "$workspace_ans" == "y" ]; then
                rm -rf $WORKSPACE
                mkdir -p $WORKSPACE/data_src/aligned
                mkdir -p $WORKSPACE/data_dst/aligned
                mkdir -p $WORKSPACE/model
                echo "Workspace has been successfully cleaned!"
            fi
            ;;
        "extract PNG from video data_src" )
            # Split the source video into numbered PNG frames at the chosen FPS.
            echo -n "File name: "; read filename
            echo -n "FPS: "; read fps
            if [ -z "$fps" ]; then fps="25"; fi
            ffmpeg -i $WORKSPACE/$filename -r $fps $WORKSPACE/data_src/%04d.png -loglevel error
            ;;
        "data_src extract faces" )
            echo -n "Detector? [mt | manual] "; read detector
            $PYTHON $INTERNAL_DIR/main.py extract --input-dir $WORKSPACE/data_src --output-dir $WORKSPACE/data_src/aligned --detector $detector --debug --cpu-only
            ;;
        "data_src sort" )
            echo -n "Sort by? [blur | brightness | face-yaw | hue | hist | hist-blur | hist-dissim] "; read sort_method
            $PYTHON $INTERNAL_DIR/main.py sort --input-dir $WORKSPACE/data_src/aligned --by $sort_method
            ;;
        "extract PNG from video data_dst" )
            echo -n "File name: "; read filename
            echo -n "FPS: "; read fps
            if [ -z "$fps" ]; then fps="25"; fi
            ffmpeg -i $WORKSPACE/$filename -r $fps $WORKSPACE/data_dst/%04d.png -loglevel error
            ;;
        "data_dst extract faces" )
            echo -n "Detector? [mt | manual] "; read detector
            $PYTHON $INTERNAL_DIR/main.py extract --input-dir $WORKSPACE/data_dst --output-dir $WORKSPACE/data_dst/aligned --detector $detector --debug --cpu-only
            ;;
        "data_dst sort by hist" )
            $PYTHON $INTERNAL_DIR/main.py sort --input-dir $WORKSPACE/data_dst/aligned --by hist
            ;;
        "train" )
            # All training in this script runs in CPU-only mode.
            echo -n "Model? [ H64 (2GB+) | H128 (3GB+) | DF (5GB+) | LIAEF128 (5GB+) | LIAEF128YAW (5GB+) | MIAEF128 (5GB+) | AVATAR (4GB+) ] "; read model
            echo -n "Show Preview? [Y/n] "; read preview
            if [ "$preview" == "Y" ] || [ "$preview" == "y" ]; then preview="--preview"; else preview=""; fi
            $PYTHON $INTERNAL_DIR/main.py train --training-data-src-dir $WORKSPACE/data_src/aligned --training-data-dst-dir $WORKSPACE/data_dst/aligned --model-dir $WORKSPACE/model --model $model --cpu-only $preview
            ;;
        "convert" )
            echo -n "Model? [ H64 (2GB+) | H128 (3GB+) | DF (5GB+) | LIAEF128 (5GB+) | LIAEF128YAW (5GB+) | MIAEF128 (5GB+) | AVATAR(4GB+) ] "; read model
            $PYTHON $INTERNAL_DIR/main.py convert --input-dir $WORKSPACE/data_dst --output-dir $WORKSPACE/data_dst/merged --aligned-dir $WORKSPACE/data_dst/aligned --model-dir $WORKSPACE/model --model $model --ask-for-params --cpu-only
            ;;
        "converted to mp4" )
            echo -n "File name of destination video: "; read filename
            echo -n "FPS: "; read fps
            if [ -z "$fps" ]; then fps="25"; fi
            ffmpeg -y -i $WORKSPACE/$filename -r $fps -i "$WORKSPACE/data_dst/merged/%04d.png" -map 0:a? -map 1:v -r $fps -c:v libx264 -b:v 8M -pix_fmt yuv420p -c:a aac -strict -2 -b:a 192k -ar 48000 "$WORKSPACE/result.mp4" -loglevel error
            ;;
        "quit" )
            break
            ;;
        *)
            echo "Invalid choice!"
            ;;
    esac
done

main.sh (75 lines removed)

@@ -1,75 +0,0 @@
#!/bin/bash
INTERNAL_DIR=`pwd`
WORKSPACE=$INTERNAL_DIR/workspace
PYTHON=`which python`

PS3="Please enter your choice: "
options=("clear workspace" "extract PNG from video data_src" "data_src extract faces" "data_src sort" "extract PNG from video data_dst" "data_dst extract faces" "data_dst sort by hist" "train" "convert" "converted to mp4" "quit")
select opt in "${options[@]}"
do
    case $opt in
        "clear workspace" )
            echo -n "Clean up workspace? [Y/n] "; read workspace_ans
            if [ "$workspace_ans" == "Y" ] || [ "$workspace_ans" == "y" ]; then
                rm -rf $WORKSPACE
                mkdir -p $WORKSPACE/data_src/aligned
                mkdir -p $WORKSPACE/data_dst/aligned
                mkdir -p $WORKSPACE/model
                echo "Workspace has been successfully cleaned!"
            fi
            ;;
        "extract PNG from video data_src" )
            echo -n "File name: "; read filename
            echo -n "FPS: "; read fps
            if [ -z "$fps" ]; then fps="25"; fi
            ffmpeg -i $WORKSPACE/$filename -r $fps $WORKSPACE/data_src/%04d.png -loglevel error
            ;;
        "data_src extract faces" )
            echo -n "Detector? [dlib | mt | manual] "; read detector
            echo -n "Multi-GPU? [Y/n] "; read gpu_ans
            if [ "$gpu_ans" == "Y" ] || [ "$gpu_ans" == "y" ]; then gpu_ans="--multi-gpu"; else gpu_ans=""; fi
            $PYTHON $INTERNAL_DIR/main.py extract --input-dir $WORKSPACE/data_src --output-dir $WORKSPACE/data_src/aligned --detector $detector $gpu_ans --debug
            ;;
        "data_src sort" )
            echo -n "Sort by? [blur | brightness | face-yaw | hue | hist | hist-blur | hist-dissim] "; read sort_method
            $PYTHON $INTERNAL_DIR/main.py sort --input-dir $WORKSPACE/data_src/aligned --by $sort_method
            ;;
        "extract PNG from video data_dst" )
            echo -n "File name: "; read filename
            echo -n "FPS: "; read fps
            if [ -z "$fps" ]; then fps="25"; fi
            ffmpeg -i $WORKSPACE/$filename -r $fps $WORKSPACE/data_dst/%04d.png -loglevel error
            ;;
        "data_dst extract faces" )
            echo -n "Detector? [dlib | mt | manual] "; read detector
            echo -n "Multi-GPU? [Y/n] "; read gpu_ans
            if [ "$gpu_ans" == "Y" ] || [ "$gpu_ans" == "y" ]; then gpu_ans="--multi-gpu"; else gpu_ans=""; fi
            $PYTHON $INTERNAL_DIR/main.py extract --input-dir $WORKSPACE/data_dst --output-dir $WORKSPACE/data_dst/aligned --detector $detector $gpu_ans --debug
            ;;
        "data_dst sort by hist" )
            $PYTHON $INTERNAL_DIR/main.py sort --input-dir $WORKSPACE/data_dst/aligned --by hist
            ;;
        "train" )
            echo -n "Model? [ H64 (2GB+) | H128 (3GB+) | DF (5GB+) | LIAEF128 (5GB+) | LIAEF128YAW (5GB+) | MIAEF128 (5GB+) | AVATAR (4GB+) ] "; read model
            echo -n "Multi-GPU? [Y/n] "; read gpu_ans
            if [ "$gpu_ans" == "Y" ] || [ "$gpu_ans" == "y" ]; then gpu_ans="--multi-gpu"; else gpu_ans=""; fi
            $PYTHON $INTERNAL_DIR/main.py train --training-data-src-dir $WORKSPACE/data_src/aligned --training-data-dst-dir $WORKSPACE/data_dst/aligned --model-dir $WORKSPACE/model --model $model $gpu_ans
            ;;
        "convert" )
            echo -n "Model? [ H64 (2GB+) | H128 (3GB+) | DF (5GB+) | LIAEF128 (5GB+) | LIAEF128YAW (5GB+) | MIAEF128 (5GB+) | AVATAR(4GB+) ] "; read model
            $PYTHON $INTERNAL_DIR/main.py convert --input-dir $WORKSPACE/data_dst --output-dir $WORKSPACE/data_dst/merged --aligned-dir $WORKSPACE/data_dst/aligned --model-dir $WORKSPACE/model --model $model --ask-for-params
            ;;
        "converted to mp4" )
            echo -n "File name of destination video: "; read filename
            echo -n "FPS: "; read fps
            if [ -z "$fps" ]; then fps="25"; fi
            ffmpeg -y -i $WORKSPACE/$filename -r $fps -i "$WORKSPACE/data_dst/merged/%04d.png" -map 0:a? -map 1:v -r $fps -c:v libx264 -b:v 8M -pix_fmt yuv420p -c:a aac -b:a 192k -ar 48000 "$WORKSPACE/result.mp4" -loglevel error
            ;;
        "quit" )
            break
            ;;
        *)
            echo "Invalid choice!"
            ;;
    esac
done

requirements-cpu-docker.txt (9 lines removed)

@@ -1,9 +0,0 @@
pathlib==1.0.1
scandir==1.6
h5py==2.7.1
Keras==2.2.4
opencv-python==3.4.0.12
scikit-image
dlib==19.10.0
tqdm
git+https://www.github.com/keras-team/keras-contrib.git
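These are the packages the Dockerfile installs; outside Docker they could be installed the same way, for example:

```
pip3 install -r requirements-cpu-docker.txt
```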