Tracking issue for the results of different network architectures #194
@marco-c I think it would be a good idea if we kept a list of the current architectures here in a to-do list format, and updated it whenever a new architecture is added. This could help us keep track. |
Yes, this was exactly my idea :) When you finish one, tell me here and I'll update the list. |
:D Oh, this is a nice idea! One place to show all the benchmarks! |
It looks like network.py contains implementations for 'inception', 'vgglike', 'vgg16', 'vgg19', 'simnet', and 'simnetlike' architectures. Are there other architectures that still need to be implemented? |
@marco-c I think we need to start thinking about a benchmark. When we start training these networks, we will need to benchmark them against something (like human accuracy on CIFAR challenges). What do you think? Also, we haven't added ResNet to our networks yet. |
I was working on a pretrained VGG16 model and got a validation accuracy of 80%. I was not able to save it to a file though (because of a bug which will be fixed by #201).
|
Should we create a directory where the models will be saved? And should we change Line 84 in 80fd975
to use a name like user_best_VGG16_model or something like that, so that we get a link between the train_info file and the model?
|
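One way to get that link (a sketch only; `model_path_for`, the `models/` directory, and the naming scheme are all hypothetical) is to derive the model file name from the train_info file name, so the two always match:

```python
import os

def model_path_for(train_info_path, network_name, models_dir="models"):
    # Hypothetical naming scheme: reuse the train_info base name so the
    # saved model and its train_info file stay linked.
    base = os.path.splitext(os.path.basename(train_info_path))[0]
    return os.path.join(models_dir, "%s_%s.h5" % (network_name, base))

print(model_path_for("Shanes-MacBook-Pro.local_13_01_2018_06_19.txt", "vgg16"))
```

With this scheme, finding the train_info file for a given model (or vice versa) is just a matter of stripping the prefix and swapping the extension.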
As @Shashi456 said, ResNet. There are also other architectures that we might add, but I would focus on getting at least something basic working and then we can try to improve on it.
The benchmark could be #195.
80% is impressive for a first try! But it might be due to class imbalance; we should take that into account.
I'm thinking of creating another repository where we store the models and setting it as a submodule of this repository (like data and tools).
Yes, linking the train_info file and the model should be done, not sure about the name though. |
So should we use a confusion matrix to account for the class imbalance? Or should we make the training dataset itself balanced (something similar to
We can simply name the model the same as the train_info file, if that sounds good? Also, I wanted to know whether there is any particular reason we have implemented VGG16 and the others as functions instead of using the predefined ones available in |
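On the balancing idea, a minimal plain-Python sketch (the helper name and the 'y'/'n' labels are illustrative) that oversamples the minority class by duplicating examples until the classes are the same size:

```python
import random

def oversample_minority(samples, labels, seed=0):
    # Duplicate minority-class examples (sampled with replacement)
    # until every class has as many examples as the largest one.
    rng = random.Random(seed)
    by_class = {}
    for s, l in zip(samples, labels):
        by_class.setdefault(l, []).append(s)
    target = max(len(v) for v in by_class.values())
    out_samples, out_labels = [], []
    for l, items in by_class.items():
        extra = [rng.choice(items) for _ in range(target - len(items))]
        for s in items + extra:
            out_samples.append(s)
            out_labels.append(l)
    return out_samples, out_labels

x, y = oversample_minority([1, 2, 3, 4, 5], ["y", "y", "y", "y", "n"])
print(y.count("y"), y.count("n"))  # 4 4
```

As noted below, this only works when we can afford to reuse (or regenerate) examples; for a fixed, small dataset a confusion matrix is the safer diagnostic.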
Update: VGG16 pretrained with ImageNet. I have attached the text file generated during training. |
@sagarvijaygupta @marco-c I think we definitely need to handle the class imbalance before taking these accuracy values at face value. With a class imbalance that is too high, we would reach a certain accuracy even if all the predictions were 'y'. |
I think we should use a confusion matrix. Making the training dataset balanced is feasible for pretrain.py, because there we have a practically unlimited supply of training examples, but for train.py we only have a limited dataset. |
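To illustrate why accuracy alone is misleading here: with a 90/10 split, a classifier that always predicts 'y' scores 90% accuracy, but the confusion matrix exposes it immediately (plain-Python sketch with made-up numbers):

```python
def confusion_matrix(y_true, y_pred, labels=("y", "n")):
    # rows = true label, columns = predicted label
    idx = {l: i for i, l in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1
    return m

# 90 'y' and 10 'n' in the ground truth; the predictor always says 'y'.
y_true = ["y"] * 90 + ["n"] * 10
y_pred = ["y"] * 100
m = confusion_matrix(y_true, y_pred)
accuracy = sum(m[i][i] for i in range(2)) / 100
print(m)         # [[90, 0], [10, 0]]
print(accuracy)  # 0.9, despite never detecting a single 'n'
```

The second row of the matrix (all 10 'n' examples misclassified) is what the raw accuracy number hides.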
Sounds good to me! |
If we can reuse them, we definitely should. The first network I wrote was the "vgg-like" one, so clearly it wasn't available in Keras. Then when we added more I forgot there were already some available in Keras. |
Indeed, this is probably what's happening with the 90% accuracy. |
@marco-c Should I create a separate PR for each model available in https://keras.io/applications/? And for using pretrained models, should we pass an argparse option like |
Yes, but this is not a high priority. It doesn't matter for now if we keep our own implementation or if we reuse the already existing ones.
Sounds good to me! |
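For reference, reusing the predefined ones could look roughly like this (a sketch assuming a TensorFlow/Keras install with the `tensorflow.keras` spelling; the two-class head, input shape, and layer freezing are illustrative choices, not settings from this repo):

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

def build_pretrained_vgg16(num_classes=2, input_shape=(224, 224, 3),
                           weights="imagenet"):
    # Reuse the Keras-provided architecture, dropping its 1000-class
    # ImageNet head (include_top=False) and adding our own classifier.
    base = VGG16(weights=weights, include_top=False,
                 input_shape=input_shape)
    for layer in base.layers:
        layer.trainable = False  # freeze the pretrained feature extractor
    x = Flatten()(base.output)
    out = Dense(num_classes, activation="softmax")(x)
    return Model(inputs=base.input, outputs=out)
```

This is also the path that makes loading pretrained weights trivial, since `weights="imagenet"` handles the download automatically.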
@marco-c using pre-trained weights might be simpler if we directly use keras models. |
@marco-c I totally forgot to remove the |
Just a heads up, I am going to start testing the VGG 19 (from scratch) architecture. I will open a PR for this too. |
Network - ResNet50 Confusion Matrix:
|
@sagarvijaygupta is this for |
@marco-c This is for |
Network - vgg16 Confusion Matrix:
|
Network - vgg19 Confusion Matrix:
|
I have been having a difficult time using Colab over the past few days. Most times that I run my notebook, the process is killed. I have been trying to figure out why, and stumbled upon this post: I am also on the west coast of Canada, where the author of the post is located.
I managed to get one good run late last night, where I ran the training for over 80 epochs. However, the output wasn't saved anywhere that I could find. Note that I am running the notebook that exists on my forked repo on GitHub. @sagarvijaygupta where is your output being saved?
I am trying to run the training again right now with my Google Drive mounted in Colab, but I haven't had a successful run over the past few hours, due to the issue linked above and the fact that there are no GPU backends available. @marco-c is there another cloud-based GPU service that you would recommend?
That being said, when running train.py via the notebook on Colab, the best val_accuracy achieved was around 85.7%, after over 50 epochs. However, when I run train.py locally on my machine (with no GPU), I get a val_accuracy of 95.2% after 4 epochs. I am trying to figure out why this is, but wanted to post the info in case the reason is obvious to someone. |
@sdv4 First of all, for the problem of nearly 500 MB of GPU memory being shown as in use, I did find a simple solution which works for me. Whenever you execute |
@sagarvijaygupta Regarding the accuracy issue, here is the text file after only one epoch, where val_accuracy is at 90.6%: Shanes-MacBook-Pro.local_13_01_2018_06_19.txt The numbers of training, test, and validation samples are the same as in the last txt you shared. Also, thanks for the Colab tips. Good to know it isn't just here that there's a problem. |
@sdv4 your classification type is different. That might be a reason for it. |
@sagarvijaygupta yes, you were right. The numbers are more in line with what was expected once I corrected the classification type: Network - vgg19 258a95a88d5c_01_09_2018_06_21.txt Confusion matrix: [[136 39] |
@sdv4 there is no such thing as "correcting" the classification type. Your results were for a different classification type, and they were correct for that type. I guess we want results for both! |
@marco-c do you think there's a neater way to record these observations? The issue will get pretty verbose after a while, and it will become harder to track the benchmarks. |
I think I'll just remove the comments at some point, and put the summary of the results in the first comment. |
Heads up, I am going to start testing the VGG16 and VGG-like architectures (from scratch variant). |
Network - VGG16 6c685b649c2b_07_55_2018_06_22.txt Confusion matrix: |
I've added usernames close to the networks people are testing, so we know who's testing what. |
We will use this issue to track the results of the different network architectures and training methodologies.
Add a comment in this issue when you are working on one of them (I'll write "in progress" to mark it), when you're finished with one of them (so I can mark it as done), or when you think of a new one to add.