
Implement enable_gqa in aten::scaled_dot_product_attention #1802

Open
titaiwangms opened this issue Aug 13, 2024 · 0 comments
Labels
contribution welcome (We welcome code contributions for this)
topic: torch_lib (Related to the torch/aten function lib in development)

Comments

@titaiwangms
Contributor

We should follow https://pytorch.org/docs/main/generated/torch.nn.functional.scaled_dot_product_attention.html to implement `enable_gqa`.
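For context, the PyTorch docs describe `enable_gqa` as allowing the key/value tensors to have fewer heads than the query tensor, with each KV head shared by a group of query heads. Below is a minimal NumPy sketch of that semantics (repeating each KV head to match the query head count before standard attention); it is only an illustration of the expected behavior, not the torchlib/ONNX implementation this issue asks for:

```python
import numpy as np

def scaled_dot_product_attention(q, k, v, enable_gqa=False):
    # q: (batch, num_q_heads, seq_q, head_dim)
    # k, v: (batch, num_kv_heads, seq_kv, head_dim)
    if enable_gqa:
        # GQA: num_q_heads must be a multiple of num_kv_heads.
        # Repeat each KV head so head counts line up with q.
        rep = q.shape[1] // k.shape[1]
        k = np.repeat(k, rep, axis=1)
        v = np.repeat(v, rep, axis=1)
    scale = 1.0 / np.sqrt(q.shape[-1])
    scores = (q @ k.transpose(0, 1, 3, 2)) * scale
    # Numerically stable softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Example: 8 query heads sharing 2 KV heads (groups of 4).
q = np.random.default_rng(0).standard_normal((1, 8, 4, 16))
k = np.random.default_rng(1).standard_normal((1, 2, 4, 16))
v = np.random.default_rng(2).standard_normal((1, 2, 4, 16))
out = scaled_dot_product_attention(q, k, v, enable_gqa=True)
print(out.shape)  # (1, 8, 4, 16)
```

An ONNX-side implementation would presumably express the same head expansion with `Expand`/`Reshape` (or `Tile`) on the key/value tensors before the existing attention computation.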

@titaiwangms self-assigned this Aug 13, 2024
@titaiwangms added the topic: torch_lib label Aug 13, 2024
titaiwangms added a commit that referenced this issue Aug 13, 2024
Fix #1799 

Add an extra argument: `enable_gqa` to unblock the export.
The real implementation:
#1802
@justinchuby added the contribution welcome label Aug 16, 2024
@titaiwangms removed their assignment Oct 17, 2024