archgw
The smart edge and AI gateway for agents. Arch is a high-performance proxy server that handles the low-level work of building agents: applying guardrails, routing prompts to the right agent, and unifying access to LLMs. Natively designed to handle and process prompts, Arch helps you build agents faster.
Top Related Projects
Manages Envoy Proxy as a Standalone or Kubernetes-based Application Gateway
Connect, secure, control, and observe services.
The Cloud Native Application Proxy
🦍 The Cloud-Native Gateway for APIs & AI
Contour is a Kubernetes ingress controller using Envoy proxy.
Quick Overview
ArchGW is an open-source API gateway designed for microservices architectures. It provides a scalable and secure solution for managing API traffic, authentication, and authorization in distributed systems. ArchGW aims to simplify the process of building and maintaining microservices-based applications.
Pros
- Lightweight and efficient, optimized for microservices architectures
- Built-in support for authentication and authorization
- Easily extensible through plugins and custom modules
- Designed with scalability in mind, suitable for high-traffic applications
Cons
- Relatively new project, may lack extensive community support
- Documentation could be more comprehensive
- Limited out-of-the-box integrations compared to some established API gateways
- May require additional configuration for complex deployment scenarios
Code Examples
```go
// Initialize ArchGW
gw := archgw.New()

// Configure a route
gw.AddRoute("/api/users", "http://user-service:8080")

// Add authentication middleware
gw.Use(archgw.JWTAuth(secretKey))

// Start the gateway
gw.Start(":8000")
```
This example demonstrates how to initialize ArchGW, configure a route, add JWT authentication middleware, and start the gateway.
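To exercise the protected route, a client would pass a bearer token in the Authorization header. The request below is illustrative (the token value and port are hypothetical placeholders):

```bash
# Illustrative call to a JWT-protected route through the gateway
curl http://localhost:8000/api/users \
  -H "Authorization: Bearer <your-jwt-token>"
```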
```go
// Custom rate limiting middleware
func customRateLimit(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Implement custom rate limiting logic here
		if exceedsLimit(r) {
			http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

// Add custom middleware to ArchGW
gw.Use(customRateLimit)
```
This example shows how to create and add a custom rate limiting middleware to ArchGW.
```go
// Configure CORS
corsOptions := archgw.CORSOptions{
	AllowedOrigins: []string{"https://example.com"},
	AllowedMethods: []string{"GET", "POST", "PUT", "DELETE"},
	AllowedHeaders: []string{"Content-Type", "Authorization"},
}
gw.UseCORS(corsOptions)
```
This example demonstrates how to configure CORS (Cross-Origin Resource Sharing) settings for ArchGW.
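You can verify the CORS policy with a preflight request from the allowed origin. This is an illustrative check, assuming the gateway is running locally on port 8000:

```bash
# Illustrative CORS preflight request against the gateway
curl -i -X OPTIONS http://localhost:8000/api/users \
  -H "Origin: https://example.com" \
  -H "Access-Control-Request-Method: POST"
```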
Getting Started
To get started with ArchGW, follow these steps:

1. Install ArchGW:

```bash
go get github.com/katanemo/archgw
```

2. Create a new Go file (e.g., main.go) and add the following code:

```go
package main

import "github.com/katanemo/archgw"

func main() {
	gw := archgw.New()
	gw.AddRoute("/api", "http://backend-service:8080")
	gw.Start(":8000")
}
```

3. Run the application:

```bash
go run main.go
```
This will start ArchGW on port 8000, routing requests from /api to your backend service.
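As a quick sanity check, you can send a request through the gateway (illustrative; adjust the path to an endpoint your backend actually serves):

```bash
# Requests to /api are forwarded to http://backend-service:8080
curl -i http://localhost:8000/api
```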
Competitor Comparisons
Manages Envoy Proxy as a Standalone or Kubernetes-based Application Gateway
Pros of Gateway
- More mature and widely adopted project with a larger community
- Extensive documentation and examples for various use cases
- Built on top of the battle-tested Envoy proxy, providing robust performance and features
Cons of Gateway
- Steeper learning curve due to its complexity and extensive feature set
- Heavier resource footprint compared to lighter alternatives
- Configuration can be verbose and require more setup time
Code Comparison
Gateway configuration example:
```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: envoy
  listeners:
  - name: http
    port: 80
    protocol: HTTP
```
Archgw configuration example:
```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: archgw
  listeners:
  - name: http
    port: 80
    protocol: HTTP
```
Both projects implement the Kubernetes Gateway API, but Gateway offers more advanced features and customization options, while Archgw aims for simplicity and ease of use. Gateway is better suited for complex, high-traffic environments, whereas Archgw may be more appropriate for smaller-scale deployments or teams looking for a lightweight solution.
Connect, secure, control, and observe services.
Pros of Istio
- Mature and widely adopted service mesh solution with extensive features
- Strong community support and regular updates
- Comprehensive traffic management and security capabilities
Cons of Istio
- Complex setup and configuration process
- Higher resource overhead compared to lighter alternatives
- Steep learning curve for new users
Code Comparison
Istio configuration example:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews.prod.svc.cluster.local
  http:
  - route:
    - destination:
        host: reviews.prod.svc.cluster.local
        subset: v2
```
ArchGW configuration example:
```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: http-route
spec:
  parentRefs:
  - name: example-gateway
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
```
While both projects aim to improve service networking, Istio offers a more comprehensive service mesh solution with advanced features, whereas ArchGW focuses on providing a simpler, Kubernetes-native API Gateway. Istio's configuration tends to be more complex, while ArchGW aims for a more straightforward approach aligned with Kubernetes Gateway API standards.
The Cloud Native Application Proxy
Pros of Traefik
- More mature and widely adopted project with a larger community
- Extensive feature set including automatic HTTPS, service discovery, and load balancing
- Better documentation and examples for various use cases
Cons of Traefik
- Can be complex to configure for advanced scenarios
- Higher resource usage compared to simpler reverse proxies
- Steeper learning curve for newcomers
Code Comparison
Traefik configuration (YAML):
```yaml
http:
  routers:
    my-router:
      rule: "Host(`example.com`)"
      service: my-service
  services:
    my-service:
      loadBalancer:
        servers:
        - url: "http://backend1:8080"
        - url: "http://backend2:8080"
```
ArchGW configuration (JSON):
```json
{
  "routes": [
    {
      "path": "/",
      "upstream": "http://backend:8080"
    }
  ]
}
```
Summary
Traefik is a more feature-rich and mature reverse proxy solution, offering advanced capabilities like automatic HTTPS and service discovery. However, it can be more complex to set up and may consume more resources. ArchGW, on the other hand, appears to be a simpler solution with a focus on API gateway functionality. The choice between the two depends on specific project requirements and the desired level of complexity.
🦍 The Cloud-Native Gateway for APIs & AI
Pros of Kong
- Mature and widely adopted API gateway with extensive documentation
- Large ecosystem of plugins and integrations
- Supports multiple deployment options (Kubernetes, cloud, on-premises)
Cons of Kong
- Can be complex to set up and configure for smaller projects
- Resource-intensive, may require significant infrastructure
- Steeper learning curve for newcomers
Code Comparison
Kong (Lua):
```lua
local plugin = {
  name = "my-custom-plugin",
  priority = 1000,
  version = "1.0",
}

function plugin:access(conf)
  kong.service.request.set_header("X-Custom-Header", "Hello World")
end

return plugin
```
Archgw (Go):
```go
func (p *Plugin) ProcessRequest(ctx context.Context, req *http.Request) (*http.Request, error) {
	req.Header.Set("X-Custom-Header", "Hello World")
	return req, nil
}
```
Key Differences
- Kong is written in Lua and uses OpenResty, while Archgw is written in Go
- Kong offers a more extensive feature set, but Archgw may be simpler for basic use cases
- Archgw focuses on cloud-native environments, while Kong supports various deployment options
Use Cases
- Kong: Large-scale enterprise applications with complex API management needs
- Archgw: Cloud-native applications requiring a lightweight, easy-to-deploy gateway
Contour is a Kubernetes ingress controller using Envoy proxy.
Pros of Contour
- More mature and widely adopted project with a larger community
- Offers advanced traffic routing and load balancing features
- Supports multiple protocols including HTTP, HTTPS, and gRPC
Cons of Contour
- More complex setup and configuration compared to ArchGW
- Requires more resources to run and maintain
- May be overkill for simpler use cases or smaller deployments
Code Comparison
ArchGW configuration example:
```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: archgw
  listeners:
  - name: http
    port: 80
    protocol: HTTP
```
Contour configuration example:
```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: example-proxy
spec:
  virtualhost:
    fqdn: example.com
  routes:
  - conditions:
    - prefix: /
    services:
    - name: example-service
      port: 80
```
The code examples show that ArchGW uses the standard Gateway API, while Contour uses its custom HTTPProxy resource for configuration. This difference reflects Contour's more advanced features and flexibility, but also its increased complexity compared to ArchGW's simpler approach.
README
Arch is a modular, AI-native edge and AI gateway for agents.
Arch handles the pesky low-level work in building agentic apps, like applying guardrails, clarifying vague user input, routing prompts to the right agent, and unifying access to any LLM. It's a language- and framework-friendly infrastructure layer designed to help you build and ship agentic apps faster.
Quickstart • Demos • Route LLMs • Build agentic apps with Arch • Documentation • Contact
About The Latest Release:
[0.3.15] Preference-aware multi LLM routing for Claude Code 2.0 
Overview
AI demos are easy to hack. But once you move past a prototype, you're stuck building and maintaining low-level plumbing code that slows down real innovation. For example:
- Routing & orchestration. Put routing in code and you've got two choices: maintain it yourself or live with a framework's baked-in logic. Either way, keeping routing consistent means pushing code changes across all your agents, slowing iteration and turning every policy tweak into a refactor instead of a config flip.
- Model integration churn. Frameworks wire LLM integrations directly into code abstractions, making it hard to add or swap models without touching application code, meaning you'll have to do a codewide search/replace every time you want to experiment with a new model or version.
- Observability & governance. Logging, tracing, and guardrails are baked in as tightly coupled features, so bringing in best-of-breed solutions is painful and often requires digging through the guts of a framework.
- Prompt engineering overhead. Input validation, clarifying vague user input, and coercing outputs into the right schema all pile up, turning what should be design work into low-level plumbing work.
- Brittle upgrades. Every change (new model, new guardrail, new trace format) means patching and redeploying application servers. Contrast that with bouncing a central proxy: one upgrade, instantly consistent everywhere.
With Arch, you can move faster by focusing on higher-level objectives in a language and framework agnostic way. Arch was built by the contributors of Envoy Proxy with the belief that:
Prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP requests, including secure handling, intelligent routing, robust observability, and integration with backend (API) systems to improve speed and accuracy for common agentic scenarios, all outside core application logic.
Core Features:
- 🚦 Route to Agents: Engineered with purpose-built LLMs for fast (<100ms) agent routing and hand-off
- 🔗 Route to LLMs: Unify access to LLMs with support for three routing strategies
- ✨ Guardrails: Centrally configure and prevent harmful outcomes and ensure safe user interactions
- ⚡ Tools Use: For common agentic scenarios, let Arch instantly clarify and convert prompts to tools/API calls
- 🕵 Observability: W3C-compatible request tracing and LLM metrics that instantly plug in with popular tools
- 🧱 Built on Envoy: Arch runs alongside app servers as a containerized process, and builds on top of Envoy's proven HTTP management and scalability features to handle ingress and egress traffic related to prompts and LLMs
High-Level Sequence Diagram:

Jump to our docs to learn how you can use Arch to improve the speed, security and personalization of your GenAI apps.
[!IMPORTANT] Today, the function-calling LLM (Arch-Function) designed for agentic and RAG scenarios is hosted free of charge in the US-central region. To offer consistent latencies and throughput, and to manage our expenses, we will soon enable access to the hosted version via developer keys, and give you the option to run that LLM locally. For more details, see issue #258.
Contact
To get in touch with us, please join our Discord server. We will be monitoring it actively and offering support there.
Demos
- Sample App: Weather Forecast Agent - A sample agentic weather forecasting app that highlights the core function-calling capabilities of Arch.
- Sample App: Network Operator Agent - A simple network device switch operator agent that can retrieve device statistics and reboot devices.
- Use Case: Connecting to SaaS APIs - Connect third-party SaaS APIs to your agentic chat experience.
Quickstart
Follow this quickstart guide to use Arch as a router for local or hosted LLMs, including dynamic routing. Later in this section, we will see how you can use Arch to build highly capable agentic applications and provide end-to-end observability.
Prerequisites
Before you begin, ensure you have the following:
- Docker Engine (v24)
- Docker Compose (v2.29)
- Python (v3.12+)
Arch's CLI allows you to manage and interact with the Arch gateway efficiently. To install the CLI, simply run the following command:
[!TIP] We recommend that developers create a new Python virtual environment to isolate dependencies before installing Arch. This ensures that archgw and its dependencies do not interfere with other packages on your system.
```bash
$ python3.12 -m venv venv
$ source venv/bin/activate   # On Windows, use: venv\Scripts\activate
$ pip install archgw==0.3.15
```
Use Arch as an LLM Router
Arch supports three powerful routing strategies for LLMs: model-based routing, alias-based routing, and preference-based routing. Each strategy offers different levels of abstraction and control for managing your LLM infrastructure.
Model-based Routing
Model-based routing allows you to configure specific models with static routing. This is ideal when you need direct control over which models handle specific requests. Arch supports 11+ LLM providers including OpenAI, Anthropic, DeepSeek, Mistral, Groq, and more.
```yaml
version: v0.1.0

listeners:
  egress_traffic:
    address: 0.0.0.0
    port: 12000
    message_format: openai
    timeout: 30s

llm_providers:
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY
    default: true

  - model: anthropic/claude-3-5-sonnet-20241022
    access_key: $ANTHROPIC_API_KEY
```
You can then route to specific models using any OpenAI-compatible client:
```python
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:12000/v1", api_key="test")

# Route to specific model
response = client.chat.completions.create(
    model="anthropic/claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Explain quantum computing"}]
)
```
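Since the gateway exposes an OpenAI-compatible endpoint, the same request can also be issued with plain curl (a sketch, assuming the configuration above is running on port 12000):

```bash
# Send a chat completion through the gateway to a specific model
curl --header 'Content-Type: application/json' \
  --data '{"model": "anthropic/claude-3-5-sonnet-20241022", "messages": [{"role": "user", "content": "Explain quantum computing"}]}' \
  http://127.0.0.1:12000/v1/chat/completions
```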
Alias-based Routing
Alias-based routing lets you create semantic model names that map to underlying providers. This approach decouples your application code from specific model names, making it easy to experiment with different models or handle provider changes.
```yaml
version: v0.1.0

listeners:
  egress_traffic:
    address: 0.0.0.0
    port: 12000
    message_format: openai
    timeout: 30s

llm_providers:
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY

  - model: anthropic/claude-3-5-sonnet-20241022
    access_key: $ANTHROPIC_API_KEY

model_aliases:
  # Model aliases - friendly names that map to actual model names
  fast-model:
    target: gpt-4o-mini
  reasoning-model:
    target: gpt-4o
  creative-model:
    target: claude-3-5-sonnet-20241022
```
Use semantic aliases in your application code:
# Your code uses semantic names instead of provider-specific ones
response = client.chat.completions.create(
model="reasoning-model", # Routes to best available reasoning model
messages=[{"role": "user", "content": "Solve this complex problem..."}]
)
Preference-aligned Routing
Preference-aligned routing provides intelligent, dynamic model selection based on natural language descriptions of tasks and preferences. Instead of hardcoded routing logic, you describe what each model is good at using plain English.
```yaml
version: v0.1.0

listeners:
  egress_traffic:
    address: 0.0.0.0
    port: 12000
    message_format: openai
    timeout: 30s

llm_providers:
  - model: openai/gpt-4o
    access_key: $OPENAI_API_KEY
    routing_preferences:
      - name: complex_reasoning
        description: deep analysis, mathematical problem solving, and logical reasoning
      - name: creative_writing
        description: storytelling, creative content, and artistic writing

  - model: deepseek/deepseek-coder
    access_key: $DEEPSEEK_API_KEY
    routing_preferences:
      - name: code_generation
        description: generating new code, writing functions, and creating scripts
      - name: code_review
        description: analyzing existing code for bugs, improvements, and optimization
```
Arch uses a lightweight 1.5B autoregressive model to intelligently map user prompts to these preferences, automatically selecting the best model for each request. This approach adapts to intent drift, supports multi-turn conversations, and avoids brittle embedding-based classifiers or manual if/else chains. No retraining is required when adding models or updating policies: routing is governed entirely by human-readable rules.
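As an illustrative sketch of how this plays out, consider a coding prompt sent through the gateway configured above. Assuming that passing "model": "none" delegates selection to the router (mirroring the curl examples later in this README), the prompt should be matched to the code_generation preference and routed to deepseek/deepseek-coder:

```bash
# Hypothetical example: the router maps this prompt to the
# code_generation preference and selects deepseek/deepseek-coder
curl --header 'Content-Type: application/json' \
  --data '{"messages": [{"role": "user", "content": "Write a function that merges two sorted lists"}], "model": "none"}' \
  http://127.0.0.1:12000/v1/chat/completions
```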
Learn More: Check our documentation for comprehensive provider setup guides and routing strategies. You can learn more about the design, benchmarks, and methodology behind preference-based routing in our paper.
Build Agentic Apps with Arch
In the following quickstart, we will show you how easy it is to build an AI agent with the Arch gateway. We will build a currency exchange agent in a few simple steps. For this demo, we will use https://api.frankfurter.dev/ to fetch the latest prices for currencies, with USD as the base currency.
Step 1. Create the Arch config file
Create an arch_config.yaml file with the following content:
```yaml
version: v0.1.0

listeners:
  ingress_traffic:
    address: 0.0.0.0
    port: 10000
    message_format: openai
    timeout: 30s

llm_providers:
  - access_key: $OPENAI_API_KEY
    model: openai/gpt-4o

system_prompt: |
  You are a helpful assistant.

prompt_guards:
  input_guards:
    jailbreak:
      on_exception:
        message: Looks like you're curious about my abilities, but I can only provide assistance for currency exchange.

prompt_targets:
  - name: currency_exchange
    description: Get currency exchange rate from USD to other currencies
    parameters:
      - name: currency_symbol
        description: the currency that needs conversion
        required: true
        type: str
        in_path: true
    endpoint:
      name: frankfurther_api
      path: /v1/latest?base=USD&symbols={currency_symbol}
    system_prompt: |
      You are a helpful assistant. Show me the currency symbol you want to convert from USD.

  - name: get_supported_currencies
    description: Get list of supported currencies for conversion
    endpoint:
      name: frankfurther_api
      path: /v1/currencies

endpoints:
  frankfurther_api:
    endpoint: api.frankfurter.dev:443
    protocol: https
```
Step 2. Start the Arch gateway with the currency conversion config
```bash
$ archgw up arch_config.yaml
2024-12-05 16:56:27,979 - cli.main - INFO - Starting archgw cli version: 0.3.15
2024-12-05 16:56:28,485 - cli.utils - INFO - Schema validation successful!
2024-12-05 16:56:28,485 - cli.main - INFO - Starting arch model server and arch gateway
2024-12-05 16:56:51,647 - cli.core - INFO - Container is healthy!
```
Once the gateway is up, you can start interacting with it at port 10000 using the OpenAI chat completions API. Sample queries you can ask include "what is the currency rate for gbp?" or "show me the list of currencies for conversion".
Step 3. Interact with the gateway using curl
Here is a sample curl command you can use to interact:
```bash
$ curl --header 'Content-Type: application/json' \
  --data '{"messages": [{"role": "user","content": "what is exchange rate for gbp"}], "model": "none"}' \
  http://localhost:10000/v1/chat/completions | jq ".choices[0].message.content"
"As of the date provided in your context, December 5, 2024, the exchange rate for GBP (British Pound) from USD (United States Dollar) is 0.78558. This means that 1 USD is equivalent to 0.78558 GBP."
```
And to get the list of supported currencies:
```bash
$ curl --header 'Content-Type: application/json' \
  --data '{"messages": [{"role": "user","content": "show me list of currencies that are supported for conversion"}], "model": "none"}' \
  http://localhost:10000/v1/chat/completions | jq ".choices[0].message.content"
"Here is a list of the currencies that are supported for conversion from USD, along with their symbols:\n\n1. AUD - Australian Dollar\n2. BGN - Bulgarian Lev\n3. BRL - Brazilian Real\n4. CAD - Canadian Dollar\n5. CHF - Swiss Franc\n6. CNY - Chinese Renminbi Yuan\n7. CZK - Czech Koruna\n8. DKK - Danish Krone\n9. EUR - Euro\n10. GBP - British Pound\n11. HKD - Hong Kong Dollar\n12. HUF - Hungarian Forint\n13. IDR - Indonesian Rupiah\n14. ILS - Israeli New Sheqel\n15. INR - Indian Rupee\n16. ISK - Icelandic Króna\n17. JPY - Japanese Yen\n18. KRW - South Korean Won\n19. MXN - Mexican Peso\n20. MYR - Malaysian Ringgit\n21. NOK - Norwegian Krone\n22. NZD - New Zealand Dollar\n23. PHP - Philippine Peso\n24. PLN - Polish Złoty\n25. RON - Romanian Leu\n26. SEK - Swedish Krona\n27. SGD - Singapore Dollar\n28. THB - Thai Baht\n29. TRY - Turkish Lira\n30. USD - United States Dollar\n31. ZAR - South African Rand\n\nIf you want to convert USD to any of these currencies, you can select the one you are interested in."
```
Observability
Arch is designed to support best-in-class observability by supporting open standards. Please read our docs on observability for more details on tracing, metrics, and logs. The screenshot below is from our integration with Signoz (among others).

Debugging
When debugging issues or errors, application logs and access logs provide key information to give you more context on what's going on with the system. The Arch gateway runs at the info log level; the following is typical output you could see during an interaction between a developer and the Arch gateway:
```bash
$ archgw up --service archgw --foreground
...
[2025-03-26 18:32:01.350][26][info] prompt_gateway: on_http_request_body: sending request to model server
[2025-03-26 18:32:01.851][26][info] prompt_gateway: on_http_call_response: model server response received
[2025-03-26 18:32:01.852][26][info] prompt_gateway: on_http_call_response: dispatching api call to developer endpoint: weather_forecast_service, path: /weather, method: POST
[2025-03-26 18:32:01.882][26][info] prompt_gateway: on_http_call_response: developer api call response received: status code: 200
[2025-03-26 18:32:01.882][26][info] prompt_gateway: on_http_call_response: sending request to upstream llm
[2025-03-26 18:32:01.883][26][info] llm_gateway: on_http_request_body: provider: gpt-4o-mini, model requested: None, model selected: gpt-4o-mini
[2025-03-26 18:32:02.818][26][info] llm_gateway: on_http_response_body: time to first token: 1468ms
[2025-03-26 18:32:04.532][26][info] llm_gateway: on_http_response_body: request latency: 3183ms
...
```
The log level can be changed to debug for more detail. To enable debug logs, edit [supervisord.conf](arch/supervisord.conf) and change `--component-log-level wasm:info` to `--component-log-level wasm:debug`. After that, rebuild the Docker image and restart the Arch gateway using the following commands:
```bash
# make sure you are at the root of the repo
$ archgw build

# go to your service that has the arch_config.yaml file and issue the following command:
$ archgw up --service archgw --foreground
```
Contribution
We would love feedback on our Roadmap, and we welcome contributions to Arch! Whether you're fixing bugs, adding new features, improving documentation, or creating tutorials, your help is much appreciated. Please visit our Contribution Guide for more details.