
Intel® Technology Enabling for OpenShift*

Overview

The project delivers a comprehensive full-stack solution for the Intel® Enterprise AI Foundation on the OpenShift platform, applicable across data center, cloud, and edge environments. It utilizes innovative General Operators technology to provision AI accelerators, including the Intel Gaudi Processor, Flex and Max GPUs, and Xeon CPU accelerators such as QAT, SGX, and DSA. Additionally, the project introduces solutions for integrating Gaudi Software or OneAPI-based AI software into OpenShift AI. Key AI workload integrations, such as LLM inferencing, fine-tuning, and post-training for enterprise AI, are under development. The plans also include the GPU network provisioning and full-stack integration with OpenShift.

Figure 1: Intel Technology Enabling for OpenShift Architecture

Releases and Supported Platforms

Getting started

See the reference BIOS configuration required for each feature.

Provisioning RHOCP cluster

Use one of these two options to provision an RHOCP cluster:

In this project, we provisioned RHOCP 4.16 on a bare-metal multi-node cluster. For details about the supported RHOCP infrastructure, see the Supported Platforms page.

Provisioning Intel hardware features on RHOCP

If you are familiar with the provisioning steps below, you can use the One-Click solution as a reference to provision the accelerators automatically.

Follow Setting up HabanaAI Operator to provision the Intel Gaudi AI accelerator.
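Operators like this are typically installed from OperatorHub, which creates an OLM Subscription behind the scenes. A minimal sketch of such a Subscription, assuming the operator is published in the certified-operators catalog (the package name, channel, and catalog source here are illustrative — confirm the actual values in the OperatorHub console):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: habana-ai-operator        # illustrative package name
  namespace: openshift-operators
spec:
  channel: stable                  # illustrative channel
  name: habana-ai-operator
  source: certified-operators      # illustrative catalog source
  sourceNamespace: openshift-marketplace
```

Applying this manifest with `oc apply -f` lets OLM install and keep the operator updated within the subscribed channel.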

Follow the steps below to provision the hardware features:

  1. Setting up Node Feature Discovery
  2. Setting up Machine Configuration
  3. Setting up Out of Tree Drivers
  4. Setting up Device Plugins
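Step 1 above, for example, is driven by a NodeFeatureDiscovery custom resource once the NFD Operator is installed. A minimal sketch (the instance name and namespace below are the NFD Operator defaults; the project's own manifests add Intel-specific worker configuration on top of this):

```yaml
apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-instance        # default instance name used by the NFD Operator
  namespace: openshift-nfd
spec: {}
  # An empty spec falls back to operator defaults; this project's
  # manifests configure feature rules that label nodes with the
  # Intel hardware they expose.
```

Applied with `oc apply -f`, this causes NFD to label cluster nodes so the later steps can target only the nodes with the relevant Intel hardware.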

Verifying hardware feature provisioning

Use the instructions in the link to verify hardware feature provisioning.
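As a sketch of what such a check can look like: after provisioning, the device plugin should advertise an extended resource on each equipped node (for the Intel GPU device plugin this is `gpu.intel.com/i915`). The node names and sample output below are illustrative; on a live cluster the file would come from the `oc` command shown in the comment.

```shell
# On a live cluster, generate nodes.txt with (note the escaped dots):
#   oc get nodes -o custom-columns=NAME:.metadata.name,GPU:".status.allocatable.gpu\.intel\.com/i915" > nodes.txt
# Illustrative sample output:
cat > nodes.txt <<'EOF'
NAME      GPU
worker-0  2
worker-1  <none>
EOF

# Keep only nodes that actually advertise the GPU resource:
awk 'NR > 1 && $2 != "<none>" { print $1 }' nodes.txt
```

A node listed here with a non-zero count is ready to schedule pods that request the resource in their `resources.limits`.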

Upgrade (To be added)

Reference end-to-end solution

The reference end-to-end solution is based on the Intel hardware feature provisioning provided by this project.

Intel AI Inferencing Solution with OpenVINO and RHOAI

Reference workloads

These reference workloads are built on the end-to-end solution and the Intel hardware feature provisioning in this project.

Advanced Guide

This section discusses architecture and other technical details that go beyond getting started.

Release Notes

Check the link for the Release Notes.

Support

If you encounter any issues or have questions regarding Intel Technology Enabling for OpenShift, we recommend seeking support through the following channels:

Commercial support from Red Hat

This project relies on features developed and released with the latest RHOCP release. Commercial RHOCP support is outlined in the Red Hat OpenShift Container Platform Life Cycle Policy, and Intel collaborates with Red Hat to address specific requirements from our users.

Open-Source Community Support

Intel Technology Enabling for OpenShift is run as an open-source project on GitHub. Use the project's GitHub issues as the primary support interface to submit feature requests and report issues to the community. Please provide detailed information about your issue and steps to reproduce it, if possible.

Contribute

See CONTRIBUTING for more information.

Security

To report a potential security vulnerability, refer to the security.md file.

License

Distributed under the open source license. See LICENSE for more information.

Code of Conduct

Intel has adopted the Contributor Covenant as the Code of Conduct for all of its open-source projects. See the CODE_OF_CONDUCT file.
