Model with ScatterND layer gives a different result every time with the same input #23396
Comments
I tried building from the latest main branch. The result changes slightly on Windows as well:
If I set the thread number to 1, the result becomes stable:
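For reference, a minimal sketch of limiting ONNX Runtime to a single thread through `SessionOptions` in the Python API (the model file name is a placeholder):

```python
import onnxruntime as ort

# Force single-threaded execution so ScatterND's add-reduction
# accumulates in one fixed order and the output becomes repeatable.
so = ort.SessionOptions()
so.intra_op_num_threads = 1
so.inter_op_num_threads = 1

sess = ort.InferenceSession("testing_model.onnx", sess_options=so)
```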
I think the cause is the order of additions differing across runs under multi-threading. Each float32 + float32 addition is "approximated" by another float32 value, so a + b + c might not be the same as c + b + a. If you need a more "stable" result, consider running with a single thread.
Hi @tianleiwu
I think this approximation (float32 + float32) should not deviate that much from the actual output. Can you explain this behavior a little more?
Consider the following sequence of numbers: 1.0e+10, 1.0, -1.0e+10.
Addition Order 1: (1.0e+10 + 1.0) + (-1.0e+10) = 1.0e+10 + (-1.0e+10) = 0.0, because adding 1.0 to 1.0e+10 does not change the float32 value.
Addition Order 2: (1.0e+10 + (-1.0e+10)) + 1.0 = 0.0 + 1.0 = 1.0.
The precision of float32 is about 7 significant decimal digits. Normally, a difference on the order of 1e-6 is acceptable variance.
@yuslepukhin, @liqunfu, since you have reviewed or updated the CPU implementation of ScatterND, could you also take a look at this issue and see whether a deterministic implementation is feasible (for example, given the same index, accumulate in a fixed order)?
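To make the ordering effect concrete, here is a small numpy illustration of the example above (plain float32 arithmetic, not onnxruntime code):

```python
import numpy as np

a = np.float32(1.0e+10)
b = np.float32(1.0)
c = np.float32(-1.0e+10)

# Order 1: the 1.0 is absorbed when added to 1.0e+10, so the result is 0.0.
order1 = (a + b) + c
# Order 2: the large values cancel first, so the 1.0 survives.
order2 = (a + c) + b

print(order1, order2)  # 0.0 1.0
```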
Describe the issue
The model has a ScatterND layer with add reduction, using all zeros for the data input and random values for the updates. I have also fixed the seed so that every run gets the same input from np.random.
I am getting a different result on every run with the same input.
To reproduce
The model is attached; run the code below to get the output. Run it multiple times to see the difference between runs.
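The original snippet is not included in this copy; the following is a minimal sketch of the reproduction described above (the model file name, input names, and tensor shapes are assumptions and may differ from the attached model):

```python
import numpy as np
import onnxruntime as ort

np.random.seed(0)  # fixed seed, so every run feeds identical inputs

# All-zero data, random indices/updates, as described in the issue.
# Names and shapes below are illustrative assumptions.
data = np.zeros((64, 64), dtype=np.float32)
indices = np.random.randint(0, 64, size=(2048, 2)).astype(np.int64)
updates = np.random.rand(2048).astype(np.float32)

sess = ort.InferenceSession("testing_model.onnx")
feeds = {"data": data, "indices": indices, "updates": updates}

out1 = sess.run(None, feeds)[0]
out2 = sess.run(None, feeds)[0]

# With multi-threaded add-reduction, repeated runs can differ slightly.
print("max abs difference between runs:", np.abs(out1 - out2).max())
```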
Urgency
Yes, I am stuck at this point. I need the actual ONNX Runtime output as a reference for my own ScatterND implementation.
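As a stopgap while the onnxruntime behavior is discussed, a deterministic numpy reference for ScatterND with reduction="add" could look like the sketch below; it accumulates updates in a single fixed order, so it always produces bit-identical output for the same input (an illustration of the ONNX operator semantics, not the onnxruntime implementation):

```python
import numpy as np

def scatter_nd_add_reference(data, indices, updates):
    """Reference ScatterND with reduction='add', accumulating in index order."""
    output = np.array(data, copy=True)
    k = indices.shape[-1]
    flat_indices = indices.reshape(-1, k)
    # The trailing dimensions of updates match data.shape[k:].
    flat_updates = updates.reshape((-1,) + data.shape[k:])
    for idx, upd in zip(flat_indices, flat_updates):
        output[tuple(idx)] += upd
    return output
```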
Platform
Linux
OS Version
Ubuntu 22.04.3 LTS
ONNX Runtime Installation
Built from Source
ONNX Runtime Version or Commit ID
1.15.0
ONNX Runtime API
Python
Architecture
X86
Execution Provider
Default CPU
Execution Provider Library Version
No response
Model File
testing_model.zip
Is this a quantized model?
No