[WebNN] Remove workarounds for TFLite backend #23406

Open · Honry wants to merge 1 commit into main

Conversation

@Honry (Contributor) commented Jan 17, 2025

The WebNN CPU device type may now target different backends, such as CoreML. The legacy workarounds that special-cased the TFLite backend should be removed, and the affected ops allowed to fail as-is, since these are implementation issues.

Additionally, the WebNN EP should follow WebNN API conformance: we assume all WebNN ops are supported, so the per-device-type op support status in webnn-operators.md is removed as well.

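For illustration only, here is a minimal, self-contained sketch of the pattern being removed. The names (WebnnDeviceType, IsOpSupported*, "SomeOp") are hypothetical stand-ins and do not mirror the actual WebNN EP code; the point is the shift from backend-specific rejection to spec-conformance assumptions:

```cpp
#include <iostream>
#include <string>

// Hypothetical, simplified model of a per-device-type workaround check.
enum class WebnnDeviceType { CPU, GPU, NPU };

// Before: the op builder special-cased the "cpu" device type because the
// TFLite backend historically lacked support for some ops.
bool IsOpSupportedWithWorkaround(const std::string& op_type,
                                 WebnnDeviceType device_type) {
  if (op_type == "SomeOp" && device_type == WebnnDeviceType::CPU) {
    return false;  // workaround: silently reject the op on the TFLite backend
  }
  return true;
}

// After: assume the WebNN implementation is spec-conformant and claim support;
// a non-conforming backend fails on its own instead of being masked by the EP.
bool IsOpSupported(const std::string& op_type, WebnnDeviceType device_type) {
  (void)op_type;
  (void)device_type;
  return true;
}

int main() {
  std::cout << std::boolalpha
            << IsOpSupportedWithWorkaround("SomeOp", WebnnDeviceType::CPU) << '\n'  // false
            << IsOpSupported("SomeOp", WebnnDeviceType::CPU) << '\n';               // true
}
```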
@Honry (Contributor, Author) commented Jan 17, 2025

@fdwr, @guschmue, PTAL, thanks!

cc/ @fujunwei

@fdwr (Contributor) left a comment


👍 YAY.

@fdwr (Contributor) commented Jan 18, 2025

/azp run ONNX Runtime Web CI Pipeline,Windows GPU CI Pipeline,Linux Android Emulator QNN CI Pipeline

@fdwr (Contributor) commented Jan 18, 2025

/azp run Linux CPU CI Pipeline,Linux CPU Minimal Build E2E CI Pipeline,Linux GPU CI Pipeline,Linux GPU TensorRT CI Pipeline,Linux OpenVINO CI Pipeline,Linux QNN CI Pipeline,MacOS CI Pipeline,Windows ARM64 QNN CI Pipeline,Windows CPU CI Pipeline

@fdwr (Contributor) commented Jan 18, 2025

/azp run Windows GPU CUDA CI Pipeline,Windows GPU DML CI Pipeline,Windows GPU Doc Gen CI Pipeline,Win_TRT_Minimal_CUDA_Test_CI

@fdwr (Contributor) commented Jan 18, 2025

/azp run Windows GPU TensorRT CI Pipeline,onnxruntime-binary-size-checks-ci-pipeline,orttraining-linux-ci-pipeline,orttraining-linux-gpu-ci-pipeline,orttraining-ortmodule-distributed,Windows x64 QNN CI Pipeline,Big Models

Azure Pipelines successfully started running 2 pipeline(s).

Azure Pipelines successfully started running 4 pipeline(s).

1 similar comment

Azure Pipelines successfully started running 4 pipeline(s).

Azure Pipelines successfully started running 9 pipeline(s).
