Original Issue: tensorflow/tensorflow#76672
Original Author: @bas-aarts
On an Apple M1 Pro with macOS Sonoma 14.1.2 (23B92), TensorFlow version 2.16.1 (still an issue with TF 2.17.0).
Sources + assets: Gru_quantization_bug.zip
The zip contains a TF saved model and a convert.py script that converts it to a TFLite model, but the conversion crashes. Removing either line 11 or line 12 from the script makes the conversion pass.
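For context, a minimal sketch of what a convert.py like the one in the zip typically looks like is below. The actual script and saved model are in Gru_quantization_bug.zip; the tiny in-memory GRU model and the exact pair of converter settings shown here are assumptions standing in for the real ones, since the two implicated lines usually set quantization and op fallback:

```python
import tensorflow as tf

# Stand-in for the saved model in the zip: a tiny Keras GRU model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10, 4)),
    tf.keras.layers.GRU(8),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)

# The kind of lines the report says must both be present to trigger the
# crash (assumed; the real script may differ): enable default quantization
# and allow falling back to select TF ops for unsupported kernels.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]

tflite_model = converter.convert()
```

Dropping either the `optimizations` line or the `supported_ops` line corresponds to the workaround described above.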
Hi @bas-aarts
You can convert a GRU model with https://github.com/google-ai-edge/ai-edge-torch. Does that work for you?
Hello, is there a way to convert a Keras LSTM model to a quantized Unidirectional Sequence LSTM operator without crashing?