Hello,
I want to deeply thank you for your work on this implementation.
I can make it work, but there's something I can't get my head around: the difference between the model's input and the output of DataGenarator.
DataGenarator outputs a list of H×W×3 ndarrays, right?
The model's input expects an H×W×3 shape, so how can this work? Does the model infer that there are N images and feed them through sequentially? But in that case, how can the Attention layer work?
I thought that to input a sequence of images you had to use Keras's TimeDistributed wrapper.
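To illustrate the distinction I mean (the sizes below are made up, and this is just a NumPy sketch of the shapes, not the actual generator code): a Keras input shape of (H, W, 3) describes one sample, and the framework always adds a leading batch axis, which is different from a per-sample time axis that would call for TimeDistributed.

```python
import numpy as np

H, W = 64, 64  # hypothetical image size, just for illustration

# Suppose the generator yields a list of N arrays, each (H, W, 3)
images = [np.zeros((H, W, 3)) for _ in range(8)]

# Input(shape=(H, W, 3)) describes ONE sample; fit()/predict()
# receive an extra leading batch axis, so stacking the list gives
# the (N, H, W, 3) tensor a model would actually consume.
batch = np.stack(images)
print(batch.shape)  # (8, 64, 64, 3)

# TimeDistributed is only needed when each SAMPLE is itself a
# sequence of images: (T, H, W, 3) per sample, hence
# (N, T, H, W, 3) per batch.
sequence_batch = np.stack([np.stack(images[:4]) for _ in range(2)])
print(sequence_batch.shape)  # (2, 4, 64, 64, 3)
```

So my question is essentially whether the list from DataGenarator is being treated as the batch axis (first case) or as a time axis (second case).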
Kind regards.