Source code for "Deep Sketch-guided Cartoon Video Inbetweening" by Xiaoyu Li, Bo Zhang, Jing Liao, and Pedro V. Sander, IEEE Transactions on Visualization and Computer Graphics, 2021.
- Linux or Windows
- Python 3
- CPU or NVIDIA GPU + CUDA cuDNN
You can download the pre-trained models here.
Run the following commands to evaluate the frame synthesis model and the full model:
python eval_synthesis.py
python eval_full.py
The frame synthesis model takes img_0, img_1, and ske_t as inputs and synthesizes the in-between frame img_t. The full model takes img_0, img_1, and ske_t as inputs and interpolates five frames between img_0 and img_1.
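For intuition about what the evaluation scripts consume, the sketch below loads two key frames and a target sketch and stacks them into one input array. The file names, resolution, [-1, 1] normalization, and channel stacking are illustrative assumptions; the actual preprocessing lives in eval_synthesis.py and eval_full.py.

```python
# Hypothetical input preparation; file names, resolution, and the
# [-1, 1] normalization are assumptions, not the repo's actual pipeline.
import numpy as np
from PIL import Image

def load_normalized(path, size=(256, 256)):
    """Load an image, resize it, and scale pixel values to [-1, 1]."""
    img = Image.open(path).convert("RGB").resize(size)
    return np.asarray(img, dtype=np.float32) / 127.5 - 1.0

img_0 = load_normalized("frame/clip_0001/0.png")   # first key frame
img_1 = load_normalized("frame/clip_0001/6.png")   # second key frame
ske_t = load_normalized("sketch/clip_0001/3.png")  # sketch of the target frame

# Concatenate along the channel axis into a single (H, W, 9) input.
inputs = np.concatenate([img_0, img_1, ske_t], axis=-1)
```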
A dataset is a directory with the following structure:
dataset
├── frame
│   └── ${clip_id}
│       └── ${image_id}.png
├── sketch
│   └── ${clip_id}
│       └── ${image_id}.png
└── dismap
    └── ${clip_id}
        └── ${image_id}.npy
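For reference, here is a minimal sketch that walks this layout and pairs each frame with its sketch and distance map. It assumes the three subdirectories share the same clip ids and image ids:

```python
# Minimal traversal of the dataset layout above; assumes frame/, sketch/,
# and dismap/ share the same clip ids and image ids.
from pathlib import Path

dataset = Path("dataset")
for clip_dir in sorted((dataset / "frame").iterdir()):
    clip_id = clip_dir.name
    for frame_path in sorted(clip_dir.glob("*.png")):
        image_id = frame_path.stem
        sketch_path = dataset / "sketch" / clip_id / f"{image_id}.png"
        dismap_path = dataset / "dismap" / clip_id / f"{image_id}.npy"
        assert sketch_path.exists() and dismap_path.exists(), frame_path
```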
The sketch images can be generated by the script "sketch.py" and the distance maps by "dismap.py". Due to copyright issues with the movie Spirited Away, we cannot release our training dataset, but you can generate your own dataset if you are interested.
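For intuition about the distance maps, one common construction is the per-pixel Euclidean distance to the nearest sketch stroke. The snippet below is a plausible sketch of that idea using SciPy; the stroke threshold and the exact transform used by dismap.py are assumptions.

```python
# Plausible distance-map computation; the stroke threshold and the use of
# a Euclidean distance transform are assumptions about what dismap.py does.
import numpy as np
from PIL import Image
from scipy.ndimage import distance_transform_edt

def sketch_to_dismap(sketch_path, out_path, threshold=128):
    """Distance from each pixel to the nearest dark (stroke) pixel."""
    sketch = np.asarray(Image.open(sketch_path).convert("L"))
    strokes = sketch < threshold                 # dark pixels = strokes
    dismap = distance_transform_edt(~strokes)    # 0 on strokes, grows with distance
    np.save(out_path, dismap.astype(np.float32))

sketch_to_dismap("sketch/clip_0001/0.png", "dismap/clip_0001/0.npy")
```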
Run the following commands to train the frame synthesis model and the full model:
python train_synthesis.py
python train_full.py
Before training the full model, you must train the frame synthesis model and use its parameters to initialize the full model.
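A minimal sketch of that warm start, assuming a PyTorch implementation; the checkpoint path and the expectation that parameter names match between the two models are assumptions:

```python
# Hypothetical warm start; assumes a PyTorch implementation where the
# full model reuses the synthesis network's parameter names.
import torch

def init_from_synthesis(full_model: torch.nn.Module, ckpt_path: str) -> None:
    """Copy matching parameters from a synthesis checkpoint into full_model."""
    synthesis_state = torch.load(ckpt_path, map_location="cpu")
    own_state = full_model.state_dict()
    # Keep only entries whose names and shapes match, then load non-strictly
    # so parameters unique to the full model keep their fresh initialization.
    matched = {k: v for k, v in synthesis_state.items()
               if k in own_state and v.shape == own_state[k].shape}
    full_model.load_state_dict(matched, strict=False)
```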
If you find our work useful, please consider citing:
@article{li2021deep,
  title     = {Deep Sketch-guided Cartoon Video Inbetweening},
  author    = {Li, Xiaoyu and Zhang, Bo and Liao, Jing and Sander, Pedro V.},
  journal   = {IEEE Transactions on Visualization and Computer Graphics},
  year      = {2021},
  publisher = {IEEE}
}