Pinned

  1. vllm
  2. llm-compressor
Repositories

Showing 10 of 14 repositories
  • vllm Public

    A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal usage sketch follows this list)

    Python · 38,809 stars · Apache-2.0 · 5,812 forks · 1,297 open issues (12 need help) · 442 open pull requests · Updated Feb 21, 2025
  • aibrix Public

    Cost-efficient and pluggable infrastructure components for GenAI inference

    Jupyter Notebook · 28 stars · Apache-2.0 · 2 forks · 100 open issues (11 need help) · 7 open pull requests · Updated Feb 21, 2025
  • vllm-spyre Public

    Community-maintained hardware plugin for vLLM on Spyre

    Python · 2 stars · Apache-2.0 · 0 forks · 0 open issues · 0 open pull requests · Updated Feb 21, 2025
  • vllm-project.github.io Public

    HTML · 4 stars · 10 forks · 0 open issues · 1 open pull request · Updated Feb 21, 2025
  • llm-compressor Public

    Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (a quantization sketch follows this list)

    Python · 995 stars · Apache-2.0 · 84 forks · 17 open issues · 41 open pull requests · Updated Feb 21, 2025
  • vllm-ascend Public

    Community-maintained hardware plugin for vLLM on Ascend

    Python · 160 stars · Apache-2.0 · 30 forks · 20 open issues (1 needs help) · 11 open pull requests · Updated Feb 21, 2025
  • buildkite-ci Public

    HCL · 8 stars · 18 forks · 0 open issues · 3 open pull requests · Updated Feb 21, 2025
  • production-stack Public

    Scale from a single vLLM instance to a distributed vLLM deployment without changing any application code (a client-side sketch follows this list)

    Python · 490 stars · Apache-2.0 · 60 forks · 22 open issues (2 need help) · 6 open pull requests · Updated Feb 20, 2025
  • flash-attention Public (forked from Dao-AILab/flash-attention)

    Fast and memory-efficient exact attention

    Python · 44 stars · BSD-3-Clause · 1,488 forks · 0 open issues · 9 open pull requests · Updated Feb 17, 2025
  • vllm-project.github.io-static Public

    HTML · 7 stars · MIT · 7 forks · 0 open issues · 1 open pull request · Updated Feb 7, 2025
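
vllm usage sketch (referenced above): a minimal offline-inference example using vLLM's documented Python API; the model checkpoint here is an arbitrary small example, not a recommendation.

```python
# Minimal offline inference with vLLM's Python API.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

# Any Hugging Face-compatible checkpoint works; opt-125m is just small.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, output.outputs[0].text)
```

The same engine also exposes an OpenAI-compatible HTTP API through the `vllm serve` CLI.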
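llm-compressor quantization sketch (referenced above): a rough outline of the project's one-shot flow, modeled on its published examples. The import paths, modifier name, checkpoint, and dataset used here are assumptions that have shifted between releases; verify against the version you install.

```python
# Sketch of llm-compressor's one-shot quantization flow (import paths and
# names may differ by release; treat as an outline, not the canonical API).
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

# Quantize Linear layers to 4-bit weights (W4A16), leaving the LM head intact.
recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"])

oneshot(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example checkpoint
    dataset="open_platypus",                     # example calibration data
    recipe=recipe,
    output_dir="TinyLlama-1.1B-W4A16",
    max_seq_length=2048,
    num_calibration_samples=512,
)
```

The saved directory can then be loaded directly by vLLM, e.g. `LLM(model="TinyLlama-1.1B-W4A16")`.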
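production-stack client sketch (referenced above): the "no application changes" claim holds because the stack fronts the vLLM replicas with an OpenAI-compatible router, so existing OpenAI-client code only repoints its base URL. The in-cluster service address below is a hypothetical example.

```python
# Existing OpenAI-client code keeps working against a production-stack
# deployment; only base_url changes. The URL is a hypothetical example.
from openai import OpenAI

client = OpenAI(
    base_url="http://vllm-router-service:80/v1",  # hypothetical router address
    api_key="EMPTY",  # vLLM endpoints commonly accept a placeholder key
)

response = client.chat.completions.create(
    model="facebook/opt-125m",  # whichever model the deployment serves
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```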