
πŸ€– The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed inference



LocalAI


πŸ’‘ Get help - ❓FAQ πŸ’­Discussions πŸ’¬ Discord πŸ“– Documentation website

πŸ’» Quickstart πŸ–ΌοΈ Models πŸš€ Roadmap πŸ₯½ Demo 🌍 Explorer πŸ›« Examples


LocalAI is the free, Open Source OpenAI alternative. LocalAI acts as a drop-in replacement REST API compatible with the OpenAI (and Elevenlabs, Anthropic, ...) API specifications for local AI inferencing. It allows you to run LLMs, generate images, audio (and more) locally or on-prem with consumer-grade hardware, supporting multiple model families. It does not require a GPU. It is created and maintained by Ettore Di Giacinto.


Run the installer script:

curl https://localai.io/install.sh | sh

Or run with docker:

docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
# Alternative images:
# - if you have an Nvidia GPU:
# docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12
# - without preconfigured models
# docker run -ti --name local-ai -p 8080:8080 localai/localai:latest
# - without preconfigured models for Nvidia GPUs
# docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12 
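For longer-running setups, the same container can be expressed as a Compose service. A minimal sketch (the service name and file layout are illustrative, not from the official docs):

```yaml
# docker-compose.yml — minimal sketch for the CPU all-in-one image
services:
  local-ai:
    image: localai/localai:latest-aio-cpu
    ports:
      - "8080:8080"
```

Start it with `docker compose up -d`; the API is then reachable on `http://localhost:8080`.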

To load models:

# From the model gallery (see available models with `local-ai models list`, in the WebUI from the model tab, or visiting https://models.localai.io)
local-ai run llama-3.2-1b-instruct:q4_k_m
# Start LocalAI with the phi-2 model directly from huggingface
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
# Install and run a model from the Ollama OCI registry
local-ai run ollama://gemma:2b
# Run a model from a configuration file
local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
# Install and run a model from a standard OCI registry (e.g., Docker Hub)
local-ai run oci://localai/phi-2:latest
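Once a model is loaded, any OpenAI-compatible client can talk to LocalAI. A minimal sketch using only the Python standard library, assuming a LocalAI instance on `localhost:8080` with the `llama-3.2-1b-instruct:q4_k_m` model from the commands above (the actual request is left commented out so the snippet is safe to read without a running server):

```python
import json
import urllib.request

# Build an OpenAI-style chat completion request for LocalAI's
# /v1/chat/completions endpoint.
payload = {
    "model": "llama-3.2-1b-instruct:q4_k_m",
    "messages": [{"role": "user", "content": "How are you?"}],
    "temperature": 0.7,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# With a LocalAI server running, send the request and print the reply:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI specification, the official `openai` client libraries also work by pointing their base URL at the LocalAI host.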

πŸ’» Getting started

πŸ“° Latest project news

  • Oct 2024: examples moved to LocalAI-examples
  • Aug 2024: πŸ†• FLUX-1, P2P Explorer
  • July 2024: πŸ”₯πŸ”₯ πŸ†• P2P Dashboard, LocalAI Federated mode and AI Swarms: #2723
  • June 2024: πŸ†• You can now browse the model gallery without LocalAI! Check out https://models.localai.io
  • June 2024: Support for models from OCI registries: #2628
  • May 2024: πŸ”₯πŸ”₯ Decentralized P2P llama.cpp: #2343 (peer2peer llama.cpp!) πŸ‘‰ Docs https://localai.io/features/distribute/
  • May 2024: πŸ”₯πŸ”₯ Openvoice: #2334
  • May 2024: πŸ†• Function calls without grammars and mixed mode: #2328
  • May 2024: πŸ”₯πŸ”₯ Distributed inferencing: #2324
  • May 2024: Chat, TTS, and Image generation in the WebUI: #2222
  • April 2024: Reranker API: #2121

Roadmap items: List of issues

πŸ”₯πŸ”₯ Hot topics (looking for help):

  • Multimodal with vLLM and Video understanding: #3729
  • Realtime API #3714
  • πŸ”₯πŸ”₯ Distributed, P2P Global community pools: #3113
  • WebUI improvements: #2156
  • Backends v2: #1126
  • Improving UX v2: #1373
  • Assistant API: #1273
  • Moderation endpoint: #999
  • Vulkan: #1647
  • Anthropic API: #1808

If you want to help and contribute, issues up for grabs: https://github.com/mudler/LocalAI/issues?q=is%3Aissue+is%3Aopen+label%3A%22up+for+grabs%22

πŸš€ Features

πŸ’» Usage

Check out the Getting started section in our documentation.

πŸ”— Community and integrations

Build and deploy custom containers:

WebUIs:

Model galleries

Other:

πŸ”— Resources

πŸ“– πŸŽ₯ Media, Blogs, Social

Citation

If you utilize this repository or its data in a downstream project, please consider citing it with:

@misc{localai,
  author = {Ettore Di Giacinto},
  title = {LocalAI: The free, Open source OpenAI alternative},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/go-skynet/LocalAI}},
}

❀️ Sponsors

Do you find LocalAI useful?

Support the project by becoming a backer or sponsor. Your logo will show up here with a link to your website.

A huge thank you to our generous sponsors who support this project by covering CI expenses, and to everyone on our Sponsor list:


🌟 Star history

LocalAI Star history Chart

πŸ“– License

LocalAI is a community-driven project created by Ettore Di Giacinto.

MIT - Author Ettore Di Giacinto [email protected]

πŸ™‡ Acknowledgements

LocalAI couldn't have been built without the help of great software already available from the community. Thank you!

πŸ€— Contributors

This is a community project, a special thanks to our contributors! πŸ€—
