Add aten_convolution_backward function #1707
base: main
Conversation
# if stride[0] != 1:  # dilation
#     dz_height = z_height * stride[0] - stride[0] + 1
#     dz_width = z_width * stride[1] - stride[1] + 1
#     pos = _help(z_height, dz_width, stride)
#     pos = []
#     for j in range(z_height):
#         for i in range(0, dz_width, stride[1]):
#             pos.append(i + j * dz_width * stride[0])
#
#     index_tensor = op.Constant(value_ints=pos)
#     index_tensor = op.Reshape(index_tensor, z_shape)
#     # this should not work because kernel_shape is an attribute
#     dz = op.MaxUnpool(
#         grad_output,
#         index_tensor,
#         kernel_shape=[dz_height - z_height + 1, dz_width - z_width + 1],
#     )
#
#     # Computing padding size
Check notice (Code scanning / CodeQL): Commented-out code (Note).
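For reference, a minimal numpy sketch (illustrative only, not code from this PR) of what the commented-out block computes: the pos indices scatter each element of grad_output into a stride-dilated grid, leaving zeros in between, which is what the MaxUnpool call would have done.

import numpy as np

def dilate_grad(grad: np.ndarray, stride: tuple[int, int]) -> np.ndarray:
    """Hypothetical helper: scatter grad into a stride-dilated grid."""
    z_height, z_width = grad.shape
    dz_height = z_height * stride[0] - stride[0] + 1
    dz_width = z_width * stride[1] - stride[1] + 1
    out = np.zeros((dz_height, dz_width), dtype=grad.dtype)
    # Equivalent to the flat indices in the commented-out loop:
    # pos = i + j * dz_width * stride[0] for each output element (j, i).
    out[:: stride[0], :: stride[1]] = grad
    return out

g = np.arange(1, 5, dtype=np.float32).reshape(2, 2)
print(dilate_grad(g, (2, 2)))
# [[1. 0. 2.]
#  [0. 0. 0.]
#  [3. 0. 4.]]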
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #1707      +/-   ##
==========================================
- Coverage   75.24%   75.23%   -0.01%
==========================================
  Files         242      242
  Lines       25861    25923      +62
  Branches     4660     4671      +11
==========================================
+ Hits        19458    19504      +46
- Misses       5517     5528      +11
- Partials      886      891       +5

☔ View full report in Codecov by Sentry.
Test Results: 26 files −1, 26 suites −1, 2h 27m 9s ⏱️ −56m 19s. For more details on these failures and errors, see this check. Results for commit 1eb33c3. ± Comparison against base commit c57e9e7. This pull request skips 1 test.

♻️ This comment has been updated with latest results.
Is it possible to add a unit test?
def train_loop(
    model: Any,
    *args,
    loss_fn: Any | None = None,
    optimizer: Any | None = None,
    dump_onnx_models: bool = False,
    dump_prefix: str = "dump_train_loop",
    dump_clean_first: bool = True,
) -> tuple[Any, tuple[Any, ...]] | tuple[Any, tuple[Any, ...], list[str]]:
Check notice (Code scanning / CodeQL): Returning tuples with varying lengths (Note): tuple of size 2, tuple of size 3.
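For context, a hypothetical call site (the argument values and the names of the unpacked results are guesses based only on the signature above): the return arity depends on dump_onnx_models, which is what CodeQL flags here.

# Default: a 2-tuple; the names below are assumptions from the signature.
results, grads = train_loop(model, x, y)

# With dumping enabled: a 3-tuple whose last element lists the dumped ONNX files.
results, grads, onnx_files = train_loop(model, x, y, dump_onnx_models=True)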
Added.
class TestBackward(unittest.TestCase):
    @unittest.skipIf(sys.platform == "win32", reason="not supported yet on Windows")
    @unittest.skipIf(not has_transformers(), reason="transformers is missing")
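A hypothetical sketch of how the test body might continue inside TestBackward (the model, shapes, and use of torch are illustrative assumptions, not code from this PR):

    def test_convolution_backward_runs(self):
        # Tiny conv model exercised through the train_loop helper shown earlier.
        model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, kernel_size=3))
        x = torch.randn(2, 3, 16, 16)
        results = train_loop(model, x, dump_onnx_models=True)
        # With dump_onnx_models=True, the last element lists the dumped files.
        onnx_files = results[-1]
        self.assertGreater(len(onnx_files), 0)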
import onnxscript.tools.transformers_models
import onnxscript.tools.transformers_models.llama
I wonder why ruff doesn't warn about the unused imports.
Roadmap:
We need col2im and im2col to finish this job, but ONNX only provides col2im, NOT im2col.

A. Compute dW
We need to transpose X to [1,0,2,3] and dZ to [1,0,2,3], then apply a plain op.Conv to them to get dW, which must then be transposed back to [1,0,2,3]; see the sketch below.
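A minimal sketch of step A, assuming stride 1 and no padding, using PyTorch only to verify the identity (an assumption for illustration; the PR itself builds this from ONNX ops):

import torch

N, C_in, C_out, H, W_, K = 2, 3, 4, 8, 8, 3
x = torch.randn(N, C_in, H, W_, requires_grad=True)
w = torch.randn(C_out, C_in, K, K, requires_grad=True)
z = torch.nn.functional.conv2d(x, w)          # stride=1, no padding
dz = torch.randn_like(z)
z.backward(dz)                                # reference gradients via autograd

# dW via the transpose trick: swap the batch and channel axes on both inputs.
x_t = x.detach().permute(1, 0, 2, 3)          # [C_in, N, H, W]
dz_t = dz.permute(1, 0, 2, 3)                 # [C_out, N, H_out, W_out]
dw = torch.nn.functional.conv2d(x_t, dz_t)    # [C_in, C_out, K, K]
dw = dw.permute(1, 0, 2, 3)                   # transpose back to [C_out, C_in, K, K]
print(torch.allclose(dw, w.grad, atol=1e-4))  # True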
B. Compute dX
It is similar but more complicated; see the sketch below.
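One standard formulation (my assumption of the intended direction, not spelled out in this thread): for stride 1 and no padding, dX is the full correlation of dZ with W, which PyTorch exposes as conv_transpose2d. Reusing x, w, and dz from the dW sketch above:

# For stride=1 and no padding, dX == conv_transpose2d(dZ, W).
dx = torch.nn.functional.conv_transpose2d(dz, w.detach())  # [N, C_in, H, W]
print(torch.allclose(dx, x.grad, atol=1e-4))               # True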
To Do list: