Working from home
Pinned repositories
- aibrix (Go, forked from vllm-project/aibrix): Cost-efficient and pluggable infrastructure components for GenAI inference.
- kserve (Go, forked from kserve/kserve): Standardized distributed generative and predictive AI inference platform for scalable, multi-framework deployment on Kubernetes.
- vllm (Python, forked from vllm-project/vllm): A high-throughput and memory-efficient inference and serving engine for LLMs.
- kubeai (Go, forked from kubeai-project/kubeai): AI inference operator for Kubernetes. The easiest way to serve ML models in production. Supports VLMs, LLMs, embeddings, and speech-to-text.