Install Locally
This guide will help you set up and install the Semantic Router on your system. The router runs entirely on CPU and does not require a GPU for inference.
System Requirements
Note: No GPU required - the router runs efficiently on CPU using optimized BERT models.
Software Dependencies
- Go: Version 1.24.1 or higher (matches the module requirements)
- Rust: Version 1.90.0 or higher (for Candle bindings)
- Python: Version 3.8 or higher (for model downloads)
- HuggingFace CLI: For model downloads (pip install huggingface_hub)
Local Installation
1. Clone the Repository
git clone https://github.com/vllm-project/semantic-router.git
cd semantic-router
2. Install Dependencies
Install Go (if not already installed)
# Check if Go is installed
go version
# If not installed, download from https://golang.org/dl/
# Or use package manager:
# macOS: brew install go
# Ubuntu: sudo apt install golang-go
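If your package manager does not offer Go 1.24.1 or newer, you can install the official tarball directly. This is a sketch of the standard tarball procedure for Linux amd64; adjust the archive name for your OS and architecture:
# Install Go 1.24.1 from the official tarball (Linux amd64 shown)
curl -LO https://go.dev/dl/go1.24.1.linux-amd64.tar.gz
sudo rm -rf /usr/local/go
sudo tar -C /usr/local -xzf go1.24.1.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
go version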
Install Rust (if not already installed)
# Check if Rust is installed
rustc --version
# If not installed:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source ~/.cargo/env
Install Python (if not already installed)
# Check if Python is installed
python --version
# If not installed:
# macOS: brew install python
# Ubuntu: sudo apt install python3 python3-pip (Python 3.8+ required)
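Optionally, keep the Python tooling isolated in a virtual environment before installing anything with pip. This is a suggestion rather than a project requirement:
# Optional: create and activate a virtual environment for the model-download tooling
python3 -m venv .venv
source .venv/bin/activate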
Install HuggingFace CLI
pip install huggingface_hub
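To confirm the package installed correctly, you can print its version. Checking the Python package directly is the safest test, since the exact CLI entry point can vary between huggingface_hub releases:
# Verify huggingface_hub is installed and importable
python -c "import huggingface_hub; print(huggingface_hub.__version__)"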
3. Build the Project
# Build everything (Rust + Go)
make build
This command will:
- Build the Rust candle-binding library
- Build the Go router binary
- Place the executable in bin/router
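Once the build finishes, a quick way to confirm the artifacts are in place (paths follow the Makefile output described above):
# Confirm the router binary was produced
ls -lh bin/router
file bin/router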
4. Download Pre-trained Models
# Download all required models (about 1.5GB total)
make download-models
This downloads the CPU-optimized BERT models for:
- Category classification
- PII detection
- Jailbreak detection
Tip: make test invokes make download-models automatically, so you only need to run this step manually the first time or when refreshing the cache.
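To sanity-check the download, list the local model directory. The models/ path is an assumption about where the Makefile stores the files; check the Makefile if your checkout uses a different location:
# Inspect the downloaded models (directory name is an assumption; see the Makefile if it differs)
ls models/
du -sh models/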
5. Configure Backend Endpoints
Edit config/config.yaml to point to your LLM endpoints:
# Example: Configure your vLLM or Ollama endpoints
vllm_endpoints:
  - name: "your-endpoint"
    address: "127.0.0.1" # MUST be IP address (IPv4 or IPv6)
    port: 11434          # Replace with your port
    models:
      - "your-model-name" # Replace with your model
    weight: 1

model_config:
  "your-model-name":
    pii_policy:
      allow_by_default: false # Deny all PII by default
      pii_types_allowed: ["EMAIL_ADDRESS", "PERSON", "GPE", "PHONE_NUMBER"] # Only allow these specific PII types
    preferred_endpoints: ["your-endpoint"]
⚠️ Important: Address Format Requirements
The address field must contain a valid IP address (IPv4 or IPv6). Domain names are not supported.
✅ Correct formats:
- "127.0.0.1" (IPv4)
- "192.168.1.100" (IPv4)

❌ Incorrect formats:
- "localhost" → use "127.0.0.1" instead
- "your-server.com" → use the server's IP address
- "http://127.0.0.1" → remove the protocol prefix
- "127.0.0.1:8080" → use the separate port field
The default configuration includes example endpoints that you should update for your setup.
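Before starting the router, it can help to confirm the backend endpoint is actually reachable from this machine. The sketch below uses the example address and port from the snippet above (substitute your own values); the /v1/models path assumes an OpenAI-compatible server such as vLLM or Ollama in compatibility mode:
# Check that the configured backend answers on its OpenAI-compatible API
curl -s http://127.0.0.1:11434/v1/models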
Running the Router
1. Start the Services
Open two terminals and run:
Terminal 1: Start Envoy Proxy
make run-envoy
Terminal 2: Start Semantic Router
make run-router
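Both services stay in the foreground. From a third terminal you can confirm Envoy is accepting traffic on the port used in the next step (8801 is the port assumed throughout this guide):
# Any HTTP status code here means the listener is up; a connection error means it is not
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8801/v1/models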
2. Manual Testing
Send a test request through the router:
curl -X POST http://localhost:8801/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "auto",
"messages": [
{"role": "user", "content": "What is the derivative of x^2?"}
]
}'
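The response follows the OpenAI chat-completions schema, so you can pipe it through jq to see which backend model the router actually selected (requires jq; the .model field assumes the standard schema):
# Show only the model that handled the request
curl -s -X POST http://localhost:8801/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "auto", "messages": [{"role": "user", "content": "What is the derivative of x^2?"}]}' \
  | jq -r '.model'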
Next Steps
After successful installation:
- Configuration Guide - Customize your setup and add your own endpoints
- API Documentation - Detailed API reference
Getting Help
- Issues: Report bugs on GitHub Issues
- Documentation: Full documentation at Read the Docs
You now have a working Semantic Router that runs entirely on CPU and intelligently routes requests to specialized models!