Add Op (_upsample_bilinear2d_aa, _upsample_bicubic2d_aa) | feat(torchlib) #1259

Open
wants to merge 5 commits into main
Conversation

@xiaowuhu (Contributor) commented Jan 26, 2024

It seems that the antialias method differs between ONNX and PyTorch, so we compare only the output shape instead of the values.

Below is the difference between ONNX and PyTorch:

# ONNX (torchlib)
import numpy as np

# aten__upsample_bicubic2d_aa is the torchlib function added in this PR
# (import omitted here).
self = np.array([[[[2, 1, 1, 1],
                   [1, 1, 1, 1],
                   [1, 1, 1, 1],
                   [1, 1, 1, 1]]]]).astype(np.float32)
print(self.shape)
output_size = np.array([1, 1]).astype(np.int64)
align_corners = True
r = aten__upsample_bicubic2d_aa(self, output_size, align_corners)
print(r)

ONNX output: [[[[1.390625]]]]

# PyTorch
import torch as t

# output_size is passed as a plain list of ints, as the aten op expects.
r = t.ops.aten._upsample_bicubic2d_aa(t.tensor(self), output_size.tolist(), align_corners)
print(r)

PyTorch output: tensor([[[[2.2656]]]])

I also tried some other parameter combinations, but none of them match the PyTorch output.
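In the test this amounts to a shape-only assertion. A minimal sketch of that check, using the numbers from the repro above (illustrative only; this is not the actual torchlib test harness):

# Illustrative shape-only comparison; not the actual torchlib test code.
import numpy as np
import torch

data = np.array([[[[2, 1, 1, 1],
                   [1, 1, 1, 1],
                   [1, 1, 1, 1],
                   [1, 1, 1, 1]]]], dtype=np.float32)

torch_out = torch.ops.aten._upsample_bicubic2d_aa(torch.tensor(data), [1, 1], True)
onnx_out = np.array([[[[1.390625]]]], dtype=np.float32)  # value from the ONNX run above

# The values differ (1.390625 vs ~2.2656), but the shapes agree.
assert onnx_out.shape == tuple(torch_out.shape)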

@xiaowuhu xiaowuhu changed the title AddOp(_upsample_bilinear2d_aa, _upsample_bicubic2d_aa) | feat(TorchLib) Add Op (_upsample_bilinear2d_aa, _upsample_bicubic2d_aa) | feat(torchlib) Jan 26, 2024

codecov bot commented Jan 26, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (ce3eb4a) 78.68% compared to head (778b799) 78.85%.
Report is 8 commits behind head on main.

❗ Current head 778b799 differs from pull request most recent head cf4f4af. Consider uploading reports for the commit cf4f4af to get more accurate results

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1259      +/-   ##
==========================================
+ Coverage   78.68%   78.85%   +0.17%     
==========================================
  Files         119      119              
  Lines       15762    15700      -62     
  Branches     2486     2481       -5     
==========================================
- Hits        12403    12381      -22     
+ Misses       2950     2911      -39     
+ Partials      409      408       -1     

☔ View full report in Codecov by Sentry.


github-actions bot commented Jan 26, 2024

Test Results

24 files ±0   24 suites ±0   1h 41m 53s ⏱️ +11m 9s
11 405 tests +6   8 439 ✅ +4   2 952 💤 ±0   14 ❌ +2
274 768 runs +16 882   63 102 ✅ +4 302   211 460 💤 +12 578   206 ❌ +2

For more details on these failures, see this check.

Results for commit cf4f4af. ± Comparison against base commit 457e52e.

This pull request removes 29 and adds 35 tests. Note that renamed tests count towards both.
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_327_aten_upsample_bilinear2d_vec
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_328_aten_upsample_bicubic2d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_329_aten_upsample_bicubic2d_vec
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_330_aten_upsample_linear1d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_331_aten_upsample_nearest1d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_332_aten_upsample_nearest2d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_333_aten_upsample_nearest3d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_334_aten_upsample_trilinear3d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_335_aten_ones_like
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_336_aten_roll
…
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_327_aten__upsample_bilinear2d_aa
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_328_aten_upsample_bilinear2d_vec
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_329_aten_upsample_bicubic2d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_330_aten_upsample_bicubic2d_vec
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_331_aten__upsample_bicubic2d_aa
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_332_aten_upsample_linear1d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_333_aten_upsample_nearest1d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_334_aten_upsample_nearest2d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_335_aten_upsample_nearest3d
onnxscript.tests.function_libs.torch_lib.ops_test.TestFunctionValidity ‑ test_function_has_op_schema_336_aten_upsample_trilinear3d
…

♻️ This comment has been updated with latest results.

@justinchuby (Collaborator)

It would be better if we match values because the values should be deterministic. Do we know how PyTorch does it?

@xiaowuhu (Contributor, Author)

> It would be better if we match values because the values should be deterministic. Do we know how PyTorch does it?

Please see the description of this PR; I added a comparison between ONNX and PyTorch.

@justinchuby (Collaborator)

Would it be helpful to consult the PyTorch implementation? I suspect we need additional processing to implement antialiasing.

@justinchuby (Collaborator)

From our discussion: understanding the PyTorch implementation proved to be harder than anticipated (https://github.com/pytorch/pytorch/blob/bcf35c6ae62bb6560befa3550e37a8283944e5f4/aten/src/ATen/native/cpu/UpSampleKernel.cpp#L2009). We will seek additional help for this.
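For context, the common antialiasing approach in image resizers (and, at a high level, in PyTorch's UpSampleKernel.cpp) is to widen the interpolation filter's support by the downscale factor, so each output pixel becomes a weighted average over proportionally more input pixels. Below is a rough 1-D sketch of that idea under the align_corners=False convention; it is not PyTorch's exact kernel and is not expected to reproduce its values.

# Rough sketch of antialiased 1-D linear downsampling; NOT PyTorch's exact kernel.
import numpy as np

def triangle_filter(x):
    # Linear (bilinear in 2-D) interpolation kernel.
    return max(0.0, 1.0 - abs(x))

def downsample_1d_antialias(src, out_size):
    in_size = len(src)
    scale = in_size / out_size          # > 1 when downscaling
    support = max(scale, 1.0)           # antialias: widen the filter support by `scale`
    out = np.empty(out_size, dtype=np.float64)
    for i in range(out_size):
        center = (i + 0.5) * scale      # pixel-center (align_corners=False) convention
        lo = max(int(np.floor(center - support)), 0)
        hi = min(int(np.ceil(center + support)), in_size - 1)
        # Stretch the kernel by the support and normalize the weights.
        weights = np.array([triangle_filter((j + 0.5 - center) / support)
                            for j in range(lo, hi + 1)])
        out[i] = np.dot(weights, src[lo:hi + 1]) / weights.sum()
    return out

# Downscaling [2, 1, 1, 1] to a single pixel produces a weighted average of all inputs.
print(downsample_1d_antialias(np.array([2.0, 1.0, 1.0, 1.0]), 1))

Without antialiasing, `support` would stay at 1.0 regardless of the scale, so only the nearest input pixels would contribute to each output pixel.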
