First of all, thank you for the great effort you have put into creating the keras_cv_attention_models repository.
I am writing to ask about a specific component of the BEiT model's architecture, namely the ChannelAffine layer used inside the attention_mlp_block. I would like to understand whether this layer is part of the original BEiT architecture or an addition introduced in your implementation.
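For reference, my mental model of a channel-wise affine layer is a learnable per-channel scale (and optional bias) applied along the last axis, similar to the layer-scale residual scaling in the original BEiT code. The sketch below is only my own minimal illustration of that idea, not your implementation; the class name, argument names, and initializer values are assumptions:

```python
import tensorflow as tf
from tensorflow import keras


class ChannelAffineSketch(keras.layers.Layer):
    """Per-channel affine transform: out = x * weight (+ bias).

    Hypothetical illustration only; the real ChannelAffine layer in
    keras_cv_attention_models may differ in arguments and defaults.
    """

    def __init__(self, use_bias=True, weight_init_value=1.0, **kwargs):
        super().__init__(**kwargs)
        self.use_bias = use_bias
        self.weight_init_value = weight_init_value

    def build(self, input_shape):
        channels = input_shape[-1]
        # One learnable scale per channel, broadcast over all other axes.
        self.ww = self.add_weight(
            name="weight",
            shape=(channels,),
            initializer=keras.initializers.Constant(self.weight_init_value),
            trainable=True,
        )
        if self.use_bias:
            self.bb = self.add_weight(
                name="bias", shape=(channels,), initializer="zeros", trainable=True
            )
        super().build(input_shape)

    def call(self, inputs):
        out = inputs * self.ww
        return out + self.bb if self.use_bias else out
```

If that matches your ChannelAffine, my question reduces to whether it corresponds to something like the gamma_1 / gamma_2 layer-scale parameters of the original BEiT code, or is an extra layer of your own.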
Also, according to the BEiT paper, the architecture includes a "masked image modeling head", but I could not find it in your implementation. Could you please elaborate on how your BEiT implementation handles this?
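To be concrete about what I am looking for: the paper describes the MIM head as a softmax classifier over the dVAE visual-token vocabulary (8192 tokens), applied to the final hidden states at masked patch positions. A minimal sketch of that idea, with function and variable names of my own choosing:

```python
import tensorflow as tf
from tensorflow import keras


def mim_head(hidden_states: tf.Tensor, vocab_size: int = 8192) -> tf.Tensor:
    """Sketch of the masked-image-modeling head as described in the BEiT paper.

    hidden_states: [batch, num_patches, hidden_dim] encoder outputs.
    Returns logits over the visual-token vocabulary for every patch position;
    the pre-training loss would be computed only at masked positions.
    """
    return keras.layers.Dense(vocab_size, name="mim_classifier")(hidden_states)
```

Is this head intentionally omitted because the repository ships fine-tuned classification weights only, or is it implemented somewhere I missed?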
Note that an email was also sent to you in this regard from [email protected].
Thanks,