
Flux can only generate 256x256 to 1344x1344 with TensorRT? diffusers can generate 2048x2048! #4315

Open
bleedingfight opened this issue Jan 7, 2025 · 0 comments

Description

I tried using the Flux model to generate a 2048x2048 image, but found that the code limits the maximum output to 1344. However, I can generate 2048x2048 images with the diffusers library. So I changed the 1344 limit in the code to 2048, but when I ran the demo again, the following error occurred:

Building TensorRT engine for onnx/vae.opt/model.onnx: engine/vae.trt10.7.0.plan
[E] IBuilder::buildSerializedNetwork: Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[/decoder/mid_block/attentions.0/group_norm/Constant_1_output_0 + ONNXTRT_unsqueezeTensor_21.../decoder/mid_block/resnets.1/nonlinearity/Mul]}.)
Traceback (most recent call last):
  File "/home/username/TensorRT/demo/Diffusion/demo_txt2img_flux.py", line 196, in <module>
    demo.load_engines(
  File "/home/username/TensorRT/demo/Diffusion/diffusion_pipeline.py", line 630, in load_engines
    self._build_engine(obj, engine, model_config, opt_batch_size, opt_image_height, opt_image_width, optimization_level, static_batch, static_shape, enable_all_tactics, timing_cache)
  File "/home/username/TensorRT/demo/Diffusion/diffusion_pipeline.py", line 493, in _build_engine
    engine.build(model_config['onnx_opt_path'],
  File "/home/username/TensorRT/demo/Diffusion/utilities.py", line 315, in build
    engine = engine_from_network(
  File "<string>", line 3, in engine_from_network
  File "/opt/conda/envs/flux-tensorrt/lib/python3.10/site-packages/polygraphy/backend/base/loader.py", line 40, in __call__
    return self.call_impl(*args, **kwargs)
  File "/opt/conda/envs/flux-tensorrt/lib/python3.10/site-packages/polygraphy/util/util.py", line 710, in wrapped
    return func(*args, **kwargs)
  File "/opt/conda/envs/flux-tensorrt/lib/python3.10/site-packages/polygraphy/backend/trt/loader.py", line 624, in call_impl
    return engine_from_bytes(super().call_impl, runtime=self._runtime)
  File "<string>", line 3, in engine_from_bytes
  File "/opt/conda/envs/flux-tensorrt/lib/python3.10/site-packages/polygraphy/backend/base/loader.py", line 40, in __call__
    return self.call_impl(*args, **kwargs)
  File "/opt/conda/envs/flux-tensorrt/lib/python3.10/site-packages/polygraphy/util/util.py", line 710, in wrapped
    return func(*args, **kwargs)
  File "/opt/conda/envs/flux-tensorrt/lib/python3.10/site-packages/polygraphy/backend/trt/loader.py", line 653, in call_impl
    buffer, _ = util.invoke_if_callable(self._serialized_engine)
  File "/opt/conda/envs/flux-tensorrt/lib/python3.10/site-packages/polygraphy/util/util.py", line 678, in invoke_if_callable
    ret = func(*args, **kwargs)
  File "/opt/conda/envs/flux-tensorrt/lib/python3.10/site-packages/polygraphy/util/util.py", line 710, in wrapped
    return func(*args, **kwargs)
  File "/opt/conda/envs/flux-tensorrt/lib/python3.10/site-packages/polygraphy/backend/trt/loader.py", line 557, in call_impl
    G_LOGGER.critical("Invalid Engine. Please ensure the engine was built correctly")
  File "/opt/conda/envs/flux-tensorrt/lib/python3.10/site-packages/polygraphy/logger/logger.py", line 605, in critical
    raise ExceptionType(message) from None
polygraphy.exception.exception.PolygraphyException: Invalid Engine. Please ensure the engine was built correctly
/opt/conda/envs/flux-tensorrt/lib/python3.10/tempfile.py:860: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmp_5cee1ik'>
  _warnings.warn(warn_message, ResourceWarning)
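For context, the resolution check being modified presumably looks something like the sketch below (the names, structure, and divisibility rule are illustrative, not the actual code in diffusion_pipeline.py). Raising the upper bound only lifts the Python-side validation; the TensorRT VAE engine must still build successfully for the larger shape, which is where the error above occurs:

```python
# Illustrative sketch of a resolution range check like the one in the
# TensorRT demo (names and exact rules are hypothetical, not the real code).
MIN_IMAGE_DIM = 256
MAX_IMAGE_DIM = 1344


def validate_dims(height: int, width: int, max_dim: int = MAX_IMAGE_DIM) -> None:
    """Reject output sizes outside [MIN_IMAGE_DIM, max_dim] or not divisible by 8."""
    for name, value in (("height", height), ("width", width)):
        if value % 8 != 0:
            raise ValueError(f"{name} must be a multiple of 8, got {value}")
        if not MIN_IMAGE_DIM <= value <= max_dim:
            raise ValueError(
                f"{name} must be in [{MIN_IMAGE_DIM}, {max_dim}], got {value}"
            )


validate_dims(1024, 1024)                # within the stock limit
validate_dims(2048, 2048, max_dim=2048)  # only passes after raising the limit
```

In other words, editing the constant changes what the demo accepts, not what the engine builder can actually compile.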

Environment

TensorRT Version: 10.7

GPU Type: A800

NVIDIA Driver Version: 550.54.15

CUDA Version: 11.4

Operating System:

  • Python Version (if applicable): 3.10.12
  • Container

Steps To Reproduce

  1. Change the 1344 limit in the code to 2048.
  2. Run demo_txt2img_flux.py (without --hf-token=$HF_TOKEN; just download Flux and generate the converted model):
pytorch_model/
└── flux.1-dev
    └── TXT2IMG
        ├── flowmatcheulerdiscretescheduler
        │   └── scheduler
        │       └── scheduler_config.json
        ├── text_encoder
        │   ├── config.json
        │   └── model.safetensors
        ├── text_encoder_2
        │   ├── config.json
        │   ├── model-00001-of-00003.safetensors
        │   ├── model-00002-of-00003.safetensors
        │   ├── model-00003-of-00003.safetensors
        │   └── model.safetensors.index.json
        ├── tokenizer
        │   ├── merges.txt
        │   ├── special_tokens_map.json
        │   ├── tokenizer_config.json
        │   └── vocab.json
        ├── tokenizer_2
        │   ├── special_tokens_map.json
        │   ├── spiece.model
        │   ├── tokenizer.json
        │   └── tokenizer_config.json
        ├── transformer
        │   ├── config.json
        │   ├── diffusion_pytorch_model-00001-of-00003.safetensors
        │   ├── diffusion_pytorch_model-00002-of-00003.safetensors
        │   ├── diffusion_pytorch_model-00003-of-00003.safetensors
        │   └── diffusion_pytorch_model.safetensors.index.json
        └── vae
            ├── config.json
            └── diffusion_pytorch_model.safetensors
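The failing engine is the VAE decoder, whose latent input grows quadratically with output resolution. Assuming the usual 8x spatial compression of the Flux autoencoder (an assumption, not stated in this issue), the shapes the engine must support at each resolution work out to:

```python
# Rough arithmetic for the VAE decoder latent grid, assuming the usual
# 8x spatial compression factor of the Flux autoencoder (assumption).
VAE_SCALE_FACTOR = 8


def latent_hw(height: int, width: int) -> tuple[int, int]:
    """Latent grid the VAE decoder must handle for a given output size."""
    return height // VAE_SCALE_FACTOR, width // VAE_SCALE_FACTOR


for side in (1344, 2048):
    h, w = latent_hw(side, side)
    print(f"{side}x{side} output -> {h}x{w} latent ({h * w} positions)")
```

Going from 1344x1344 to 2048x2048 more than doubles the latent positions the VAE engine has to handle, which may explain why TensorRT fails to find an implementation for the fused group-norm node at the larger shape.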