Describe the issue
Hello, I am interested in using v1.20.0 on OpenVINO hardware, as the new version claims to have optimized first-inference latency. It seems that v1.20.0 has been released for onnxruntime-gpu but not yet for onnxruntime-openvino.
Also, is there any more information on how much the first-inference latency has been improved?
Thanks!
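Not part of the original report, but for anyone following along: a minimal sketch of how one might check locally which ONNX Runtime wheels (and versions) are installed, to confirm whether the onnxruntime-openvino 1.20.0 package has landed in a given environment. The package names are the published PyPI distribution names; everything else here is illustrative.

```python
# Sketch: report the installed versions of the main ONNX Runtime wheels,
# using only the standard library (importlib.metadata).
from importlib.metadata import version, PackageNotFoundError

report = {}
for pkg in ("onnxruntime", "onnxruntime-gpu", "onnxruntime-openvino"):
    try:
        report[pkg] = version(pkg)  # installed version string, e.g. "1.20.0"
    except PackageNotFoundError:
        report[pkg] = "not installed"

for pkg, ver in report.items():
    print(f"{pkg}: {ver}")
```

Once the OpenVINO wheel is available, the execution provider would be selected as usual via `onnxruntime.InferenceSession(model_path, providers=["OpenVINOExecutionProvider"])`.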
To reproduce
N/A
Urgency
No response
Platform
Linux
OS Version
N/A
ONNX Runtime Installation
Released Package
ONNX Runtime Version or Commit ID
N/A
ONNX Runtime API
Python
Architecture
X64
Execution Provider
OpenVINO
Execution Provider Library Version
No response