After model conversion, the pdmodel file is missing: export_model does not produce inference.pdiparams.info or inference.pdmodel.
Only inference.yml, inference.json, and inference.pdiparams are generated.
Code used to export: python tools/export_model.py -c configs/det/ch_PP-OCRv4_det_teacher.yml -o Global.pretrained_model="output/ch_PP-OCRv4/best_model/model" Global.save_inference_dir="output/det_db_inference/"
W1114 08:30:52.289454 41080 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 8.9, Driver API Version: 12.7, Runtime API Version: 11.8
W1114 08:30:52.289454 41080 gpu_resources.cc:164] device: 0, cuDNN Version: 8.9.
[2024/11/14 08:30:52] ppocr INFO: load pretrain successful from output/ch_PP-OCRv4/best_model/model
[2024/11/14 08:30:53] ppocr INFO: inference model is saved to output/det_db_inference/inference
[2024/11/14 08:30:53] ppocr INFO: Export inference config file to output/det_db_inference/inference.yml
Export File:
Windows 11, Python 3.10
python tools/export_model.py -c configs/det/ch_PP-OCRv4_det_teacher.yml -o Global.pretrained_model="output/ch_PP-OCRv4/best_model/model" Global.save_inference_dir="output/det_db_inference/"
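To confirm which export format was actually produced, it can help to list the artifacts in the save directory. A minimal sketch (the directory path matches the command above; the `present` helper is hypothetical, not part of PaddleOCR):

```python
import os

# Output directory from the export command above (adjust as needed).
save_dir = "output/det_db_inference"

# Files written by Paddle 3.0 beta 2 (new JSON program format) versus
# the legacy pdmodel format that some downstream tools still expect.
new_format = ["inference.json", "inference.pdiparams", "inference.yml"]
old_format = ["inference.pdmodel", "inference.pdiparams", "inference.pdiparams.info"]

def present(names, base):
    """Return the subset of names that exists under base."""
    return [n for n in names if os.path.exists(os.path.join(base, n))]

print("new-format files:", present(new_format, save_dir))
print("old-format files:", present(old_format, save_dir))
```

If only the new-format files show up, the exporter ran under Paddle 3.0 beta 2 or later.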
Paddle 3.0 beta 2 exports the new JSON format. If you need pdmodel files, switch to Paddle 3.0 beta 1.
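The version cutoff described above can be sketched as a small check (the `exports_json_format` helper is hypothetical, assuming beta version strings like "3.0.0b1"/"3.0.0b2"; to switch versions, reinstall Paddle at 3.0.0b1 via pip, picking the wheel that matches your CUDA build):

```python
# Hypothetical helper: which export format a given Paddle 3.0 beta writes.

def exports_json_format(version: str) -> bool:
    # Paddle 3.0 beta 2 made inference.json the default export format;
    # beta 1 and earlier still write inference.pdmodel.
    base, sep, beta = version.partition("b")
    if not (sep and base.startswith("3.0") and beta.isdigit()):
        return False
    return int(beta) >= 2

print(exports_json_format("3.0.0b2"))  # True  -> inference.json
print(exports_json_format("3.0.0b1"))  # False -> inference.pdmodel
```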
What does that mean? How do I convert? I'm using export_model.py.
What is Paddle 3.0 beta? @GreatV