
Which folder should I put mlx flux dev in to use it with ComfyUI? #67

Open
stef06 opened this issue Sep 28, 2024 · 5 comments

Comments


stef06 commented Sep 28, 2024

Which folder should I put mlx flux dev in to use it with ComfyUI?
How do I install it locally?


raysers commented Oct 5, 2024

If your use case involves LoRA or ControlNet, you'll still need to learn how to use the terminal and follow the official tutorial; in that case you can skip the rest of this reply.

However, if you only need basic text2img functionality, you might want to try this:

https://github.com/raysers/Mflux-ComfyUI

The model will be automatically downloaded to the models/mflux folder under the ComfyUI directory. Currently, only the 4-bit quantized version is used, but you can also download it manually and place it in this directory. The model identifiers on Hugging Face are madroid/flux.1-schnell-mflux-4bit and madroid/flux.1-dev-mflux-4bit.
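For reference, a manual download could look like the sketch below. It assumes ComfyUI lives at ~/ComfyUI and that the huggingface_hub CLI is installed; the exact subfolder name the plugin expects is an assumption, so check the Mflux-ComfyUI README before relying on it.

```shell
# Assumption: ComfyUI is installed at ~/ComfyUI -- adjust the path to your setup.
# Requires the Hugging Face CLI: pip install -U "huggingface_hub[cli]"
mkdir -p ~/ComfyUI/models/mflux

# Fetch the 4-bit schnell model (swap in madroid/flux.1-dev-mflux-4bit for dev):
huggingface-cli download madroid/flux.1-schnell-mflux-4bit \
  --local-dir ~/ComfyUI/models/mflux/flux.1-schnell-mflux-4bit
```

After the download finishes, the plugin should pick the files up from models/mflux instead of fetching them on first run.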

Before official support for ComfyUI is available, this can serve as a temporary solution. You can also find it directly in the ComfyUI Manager by searching for "mflux".


stef06 commented Oct 5, 2024 via email


raysers commented Oct 5, 2024

Of course, happy to help. You can try installing the Mflux-ComfyUI plugin in your ComfyUI. For installation instructions, please visit https://github.com/raysers/Mflux-ComfyUI. Alternatively, if you already have ComfyUI Manager installed, you can quickly install the plugin by searching for "Mflux-ComfyUI" in the node manager of ComfyUI Manager.
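If you prefer the manual route over ComfyUI Manager, a typical custom-node install is a git clone into the custom_nodes folder. This is a sketch assuming ComfyUI is installed at ~/ComfyUI; see the plugin's README for the authoritative steps.

```shell
# Assumption: ComfyUI is installed at ~/ComfyUI -- adjust to your setup.
cd ~/ComfyUI/custom_nodes
git clone https://github.com/raysers/Mflux-ComfyUI.git
# Restart ComfyUI afterwards so the new nodes are registered.
```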

After installing the plugin and restarting ComfyUI, you can right-click to create a new node and find the Mflux section. Locate the "Quick MFlux Generation" node and create it. Connect the output image of the "Quick MFlux Generation" node to ComfyUI's Save Image node, and you can then test your speed. The first run will take some extra time to download the model from Hugging Face; as mentioned above, the model path is your_ComfyUI/models/mflux.

Note that I'm just a beginner developer. So far I've only tested this plugin on my M1 Pro with 16 GB of RAM, running macOS Ventura 13.6, Python 3.11.9, and Torch 2.3.1. The plugin runs well in my environment.


stef06 commented Oct 6, 2024 via email


raysers commented Oct 6, 2024

I think you're using dev with 20 steps or more. I'm sorry I can't help you further: my Ventura install doesn't support PyTorch's bf16 precision, so I've never used ComfyUI's regular Flux or GGUF workflows and therefore can't provide a comparison of generation times. As I mentioned before, Mflux is my only successful attempt. There may be other implementations, like the Draw Things app you mentioned, or you could try the recently released ComfyUI version of DiffusionKit. These are all great options for Mac users.

Currently, I still only intend to use Mflux. Although my old 16 GB machine can only run Schnell, being able to generate images in two steps that aren't inferior to my past 20-step SDXL results already satisfies me.
