Choose more intelligent hidden and output activation functions #27

Open
xanderdunn opened this issue Aug 26, 2015 · 0 comments
@xanderdunn (Owner) commented:

Choose a hidden activation function and an output activation function that make sense for this problem. The output activation should map onto the interesting Q-value domain. Look at the min and max Q values from tabular vs. tabular play and compare them to neural vs. neural; do the same for tabular vs. random and neural vs. random. It may also be worth comparing the full distributions of Q values, as in the sketch below.
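
A minimal sketch of that comparison, assuming the Q values from each pairing can be collected into flat arrays (the `q_*` arrays below are hypothetical placeholders, not project code), plus one candidate output activation whose range is rescaled to the observed Q domain:

```python
import numpy as np

def summarize_q(label, q):
    """Print range and spread statistics for a batch of Q values."""
    q = np.asarray(q, dtype=float)
    print(f"{label}: min={q.min():.3f} max={q.max():.3f} "
          f"mean={q.mean():.3f} std={q.std():.3f} "
          f"p5={np.percentile(q, 5):.3f} p95={np.percentile(q, 95):.3f}")
    return q.min(), q.max()

def scaled_tanh(x, q_min, q_max):
    """Candidate output activation: tanh rescaled to cover [q_min, q_max]."""
    return q_min + (q_max - q_min) * (np.tanh(x) + 1.0) / 2.0

# Placeholder arrays standing in for Q values logged during play; replace
# these with the values actually collected from each pairing.
rng = np.random.default_rng(0)
q_tabular_vs_tabular = rng.uniform(-1.0, 1.0, size=10_000)
q_neural_vs_neural = rng.normal(0.0, 0.4, size=10_000)

lo, hi = summarize_q("tabular vs. tabular", q_tabular_vs_tabular)
summarize_q("neural vs. neural", q_neural_vs_neural)

# If the interesting Q domain turns out to be roughly [lo, hi], the output
# layer could use scaled_tanh with those bounds instead of a plain linear unit.
print(scaled_tanh(np.array([-2.0, 0.0, 2.0]), lo, hi))
```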

Reading:
- Efficient BackProp (the hidden activation it recommends is sketched after this list)
- Wikipedia on activation functions
- Matlab activation functions
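
For the hidden units, one option from that reading is the scaled tanh recommended in Efficient BackProp, f(x) = 1.7159 · tanh(2x/3), chosen so that f(±1) ≈ ±1. A small sketch (whether it actually beats a plain tanh or logistic here is exactly what this issue should measure):

```python
import numpy as np

def lecun_tanh(x):
    """Scaled tanh from Efficient BackProp: f(x) = 1.7159 * tanh(2x/3)."""
    return 1.7159 * np.tanh(2.0 * x / 3.0)

def lecun_tanh_grad(x):
    """Derivative for the backward pass: 1.7159 * (2/3) * sech^2(2x/3)."""
    return 1.7159 * (2.0 / 3.0) / np.cosh(2.0 * x / 3.0) ** 2

# f(±1) ≈ ±1, which keeps hidden outputs near unit variance when the
# network inputs are normalized.
print(lecun_tanh(np.array([-1.0, 0.0, 1.0])))
```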

xanderdunn self-assigned this on Aug 26, 2015
xanderdunn added this to the 1.0 milestone on Aug 26, 2015