Currently the nengo_dl converter's `scale_firing_rates` will gladly change the effective response of any nonlinearity except the linear/ReLU activations. There is this warning (from `nengo_dl/converter.py`, lines 568 to 573 at e9b359a):
```python
warnings.warn(
    f"Firing rate scaling being applied to activation type "
    f"that does not support amplitude "
    f"({type(activation).__name__}); "
    f"this will change the output"
)
```
But this is only emitted for neuron types that don't support `amplitude`. I'd expect a similar (or generalized) warning to be emitted if the neuron's activation function is going to be skewed by the rescaling, as this also changes the output of the neuron. Otherwise the model may perform worse, since the trained weights would be with respect to the wrong nonlinearities, which could be confusing for someone who misses this subtlety about `scale_firing_rates` in the converter's docstring.
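To make the skew concrete, here's a minimal sketch (plain NumPy, not nengo-dl itself) of why scaling the input by `s` and the output by `1/s` is exact for ReLU but changes the response curve of a saturating nonlinearity like LIF. The `relu_rate` and `lif_rate` helpers are hypothetical stand-ins for the actual neuron rate functions; the LIF formula is the standard rate approximation.

```python
import numpy as np

def relu_rate(j):
    # linear/rectified response: rate is proportional to input current
    return np.maximum(j, 0.0)

def lif_rate(j, tau_rc=0.02, tau_ref=0.002):
    # standard LIF steady-state rate approximation; zero below threshold J = 1
    out = np.zeros_like(j, dtype=float)
    above = j > 1
    out[above] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (j[above] - 1.0)))
    return out

scale = 5.0
j = np.linspace(0.5, 3.0, 7)

# ReLU: scaling input by `scale` and output by 1/scale is an exact identity,
# so the effective response is unchanged
assert np.allclose(relu_rate(scale * j) / scale, relu_rate(j))

# LIF: the same transformation produces a different response curve,
# so weights trained against the scaled curve see the "wrong" nonlinearity
skew = np.max(np.abs(lif_rate(scale * j) / scale - lif_rate(j)))
print(skew)  # large, nonzero difference
```

The ReLU identity `f(s * x) / s == f(x)` holds for any positively homogeneous function, which is exactly why `scale_firing_rates` is lossless only for linear/ReLU activations.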
Related to #206.