Xorbits Inference (Xinference) is a powerful and versatile library designed to serve language,
speech recognition, and multimodal models. With Xorbits Inference, you can effortlessly deploy
and serve your own models, or state-of-the-art built-in models, using just a single command. Whether you are a
researcher, developer, or data scientist, Xorbits Inference empowers you to unleash the full
potential of cutting-edge AI models.
🌟 Model Serving Made Easy: Simplify the process of serving large language, speech
recognition, and multimodal models. You can set up and deploy your models
for experimentation and production with a single command.
⚡️ State-of-the-Art Models: Experiment with cutting-edge built-in models using a single
command. Inference provides access to state-of-the-art open-source models!
🖥 Heterogeneous Hardware Utilization: Make the most of your hardware resources with
ggml. Xorbits Inference intelligently utilizes heterogeneous
hardware, including GPUs and CPUs, to accelerate your model inference tasks.
⚙️ Flexible API and Interfaces: Offer multiple interfaces for interacting
with your models, including an OpenAI-compatible RESTful API (with a Function Calling API), RPC, CLI,
and WebUI for seamless model management and interaction.
🌐 Distributed Deployment: Excel in distributed deployment scenarios,
allowing the seamless distribution of model inference across multiple devices or machines.
🔌 Built-in Integration with Third-Party Libraries: Xorbits Inference seamlessly integrates
with popular third-party libraries including LangChain, LlamaIndex, Dify, and Chatbox.
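Because the RESTful API is OpenAI-compatible, a chat request with function calling can be expressed as a standard OpenAI-style payload. Below is a minimal sketch using only the Python standard library; the model UID `my-model`, the `get_weather` function, and its schema are illustrative assumptions (9997 is Xinference's default port):

```python
import json

# Assumed model UID -- replace with the UID of a model you have launched.
MODEL_UID = "my-model"

# An OpenAI-style chat completion request carrying a tool (function) definition.
payload = {
    "model": MODEL_UID,
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical function for illustration
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

body = json.dumps(payload)
# With a local server running, POST this body to
# http://127.0.0.1:9997/v1/chat/completions (e.g. via requests or curl).
```

Since the payload follows the OpenAI schema, existing OpenAI client code can usually be pointed at a local Xinference endpoint with no changes beyond the base URL.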
Feature | Xinference | FastChat | OpenLLM | RayLLM |
---|---|---|---|---|
OpenAI-Compatible RESTful API | ✅ | ✅ | ✅ | ✅ |
vLLM Integrations | ✅ | ✅ | ✅ | ✅ |
More Inference Engines (GGML, TensorRT) | ✅ | ❌ | ✅ | ✅ |
More Platforms (CPU, Metal) | ✅ | ✅ | ❌ | ❌ |
Multi-node Cluster Deployment | ✅ | ❌ | ❌ | ✅ |
Image Models (Text-to-Image) | ✅ | ✅ | ❌ | ❌ |
Text Embedding Models | ✅ | ❌ | ❌ | ❌ |
Multimodal Models | ✅ | ❌ | ❌ | ❌ |
Audio Models | ✅ | ❌ | ❌ | ❌ |
More OpenAI Functionalities (Function Calling) | ✅ | ❌ | ❌ | ❌ |
Please give us a star before you begin, and you'll receive instant notifications for every new release on GitHub!
The lightest way to experience Xinference is to try our Jupyter notebook on Google Colab.
Nvidia GPU users can start the Xinference server using the Xinference Docker image. Before running the command below, ensure that both Docker and CUDA are set up on your system.
docker run --name xinference -d -p 9997:9997 -e XINFERENCE_HOME=/data -v </on/your/host>:/data --gpus all xprobe/xinference:latest xinference-local -H 0.0.0.0
Install Xinference by using pip as follows. (For more options, see Installation page.)
pip install "xinference[all]"
To start a local instance of Xinference, run the following command:
$ xinference-local
Once Xinference is running, you can try it in several ways: via the web UI, via cURL,
via the command line, or via the Xinference Python client. Check out our docs for the guide.
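Whichever interface you use, responses come back in the OpenAI chat-completions shape, so extracting the reply is the same everywhere. A minimal sketch (the response below is an illustrative example of the shape, not real model output):

```python
import json

# Illustrative response in the OpenAI chat-completions shape (not real output).
raw = json.dumps({
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }
    ],
})

# Parse the JSON body and pull out the assistant's reply.
response = json.loads(raw)
reply = response["choices"][0]["message"]["content"]
print(reply)  # -> Hello!
```

The same `choices[0].message.content` path applies whether the body came from cURL, the web UI's network calls, or a client library.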
Platform | Purpose |
---|---|
Github Issues | Reporting bugs and filing feature requests. |
Slack | Collaborating with other Xorbits users. |
Twitter | Staying up-to-date on new features. |