Music Generation with Locally Connected Convolutional Neural Networks.
Developed by: Zhihao Ouyang, Yihang Yin, Kun Yan.
A demo of this project is available at Demo Link.
Required Python3 Packages:
keras
python-rtmidi
pretty-midi
progressbar
We've written these into a script; for convenience, just run this in your shell:
$ sh init.sh
Run the following commands to launch the monophony experiment:
$ cd monophony
$ python3 monophony.py <model_id>
Replace <model_id> to choose the model for comparison:
0 for 'conv1_model_a'
1 for 'conv1_model_b'
2 for 'conv1_model_c'
3 for 'conv1_model_naive'
4 for 'conv1_model_naive_big'
5 for 'resnet_model_naive'
6 for 'resNet_model_local'
7 for 'LSTM_model'
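The README does not show how monophony.py maps the integer argument to a model. A minimal sketch of how such a dispatch might look (the MODEL_NAMES table and select_model helper are illustrative assumptions, not the repository's actual code):

```python
import sys

# Hypothetical lookup table mirroring the model_id list above;
# the real monophony.py may organize this differently.
MODEL_NAMES = [
    "conv1_model_a",
    "conv1_model_b",
    "conv1_model_c",
    "conv1_model_naive",
    "conv1_model_naive_big",
    "resnet_model_naive",
    "resNet_model_local",
    "LSTM_model",
]

def select_model(model_id):
    """Return the model name for a given integer model_id."""
    if not 0 <= model_id < len(MODEL_NAMES):
        raise ValueError(f"model_id must be in 0..{len(MODEL_NAMES) - 1}")
    return MODEL_NAMES[model_id]

if __name__ == "__main__":
    # e.g. `python3 monophony.py 3` would select 'conv1_model_naive'
    print(select_model(int(sys.argv[1])))
```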
Download the training data from this link, then extract it into ./polyphonic/datasets.
Run the following commands to launch the polyphony experiment:
$ cd polyphony
$ python3 polyphony.py <model_id>
Replace <model_id> to choose the model for comparison:
0 for 'conv1_model_a'
1 for 'conv1_model_b'
2 for 'conv1_model_c'
3 for 'conv1_model_naive'
4 for 'conv1_model_naive_big'
5 for 'resnet_model_naive'
6 for 'resNet_model_local'
7 for 'LSTM_model'
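The datasets are MIDI files, which the models above consume as numeric arrays rather than raw MIDI. A minimal sketch of one common encoding, one-hot vectors over the standard 128-pitch MIDI range, is shown below; this helper is illustrative only and is not the repository's actual preprocessing code:

```python
import numpy as np

NUM_PITCHES = 128  # standard MIDI pitch range 0..127

def one_hot_sequence(pitches):
    """Encode a list of MIDI pitch numbers as a (time, pitch) one-hot matrix."""
    roll = np.zeros((len(pitches), NUM_PITCHES), dtype=np.float32)
    for t, p in enumerate(pitches):
        roll[t, p] = 1.0
    return roll

# Example: a C major arpeggio (C4, E4, G4)
roll = one_hot_sequence([60, 64, 67])
print(roll.shape)  # (3, 128)
```

Matrices of this shape are what convolutional or LSTM layers would slide over along the time axis.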