You arrive at a hotel. The receptionist does not handle everything. A guest checking in goes to the front desk. A guest ordering room service gets routed to the kitchen line. A guest with a billing complaint goes to the finance desk. The receptionist knows which desk handles which request, and they route accordingly. The guest gets to the right place without needing to understand the hotel’s internal organization. The system works because the router knows the capabilities of each destination and matches them to the request.
Model routing works the same way. Different models have different capabilities and costs. A simple factual query might need only a small, fast model. A complex reasoning task needs a larger, more capable model. The router’s job is to match each request to the appropriate model. The goal is to get the right capability at the right cost, not to use the most capable model for everything.
What Enables Routing
Routing requires a classifier that reads the incoming request and decides which model to use. This classifier can be a separate ML model, a rule-based system, or a lightweight LLM that makes the routing decision. The key is that the router is cheaper than the target model would be if used for every request. If routing costs more than the savings from using a smaller model, routing is not worth it.
The benefit is cost and latency optimization. Simple requests get handled by cheap, fast models. Complex requests that genuinely need frontier model capability go to more expensive, slower models. The overall system gets better economics than using the most capable model for everything. A system that routes 80% of queries to a cheap model and 20% to an expensive one saves significantly compared to a system that uses the expensive model for everything.
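The arithmetic behind that claim can be sketched directly. The prices below are illustrative assumptions, not real model pricing, and the router cost term reflects the point made above: routing only pays off when the router is cheaper than the savings it produces.

```python
# Hypothetical per-request costs; illustrative numbers only.
CHEAP_COST = 0.001      # $ per request, small model (assumed)
EXPENSIVE_COST = 0.02   # $ per request, frontier model (assumed)
ROUTER_COST = 0.0002    # $ per request, routing classifier (assumed)

def blended_cost(cheap_fraction: float) -> float:
    """Average cost per request when a fraction goes to the cheap model."""
    model_cost = cheap_fraction * CHEAP_COST + (1 - cheap_fraction) * EXPENSIVE_COST
    return model_cost + ROUTER_COST

routed = blended_cost(0.8)                 # the 80/20 split from above
savings = 1 - routed / EXPENSIVE_COST
print(f"routed: ${routed:.4f}/req, savings: {savings:.0%}")
```

Under these assumed prices, the 80/20 split costs $0.005 per request against $0.02 for the all-frontier baseline, a 75% saving even after paying for the router on every request.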
Consider a customer service workload. Most queries are simple FAQs: “what are your hours,” “how do I reset my password.” These do not require frontier model capability. A small, fast model handles these correctly at low cost. A smaller percentage are complex complaints requiring nuanced understanding and policy reasoning: “I was charged twice and nobody has responded to my emails for a week.” These need a more capable model. Routing directs each query to the appropriate resource. The customer with the simple question gets a fast answer. The customer with the complex complaint gets the attention it needs.
The Routing Classifier Problem
The router is only as good as its classifier. Building a routing classifier is its own ML problem with its own failure modes. The classifier must correctly estimate complexity, and complexity estimation is harder than it looks. A request that looks simple may be complex in context. A request that looks complex may be simple. The router must infer complexity from surface features that correlate imperfectly with actual difficulty.
Consider a query that appears simple: “What is my balance?” This looks like a simple database lookup. But what if the account has multiple currencies? What if there are pending transactions that affect the displayed balance? What if the user is asking about available credit rather than current balance? The simple query has complex underpinnings. A router that classifies by surface features will misclassify these cases.
The inverse happens too. A query that appears complex: “I need to understand the implications of the recent regulatory changes on my business” may actually have a simple answer if the system has a document that directly addresses regulatory implications. The user framed it as a request for understanding, but what they need is a specific document. The complexity of the framing does not match the complexity of the answer.
Training a router requires labeled data that maps requests to their appropriate model tier. Generating these labels is expensive. You need to know not just what the user asked but which model would have handled it correctly. This often requires running requests through multiple models and comparing outputs, which defeats the purpose of routing for training-data generation.

The Asymmetry of Routing Errors
Routing errors are asymmetric in their impact. A complex request sent to a simple model produces a wrong or inadequate answer. A simple request sent to a complex model produces a correct answer with wasted cost and latency. The direction of error matters more than the rate. A router that misroutes 10% of requests might be acceptable if those misroutes are simple requests sent to complex models (wasted cost but correct answers). A router that misroutes 5% of requests but those misroutes are complex requests sent to simple models (wrong answers) might be unacceptable.
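The comparison between the 10% and 5% routers above can be made concrete with an expected-cost calculation. The business cost of a wrong answer and the wasted-inference cost are assumptions for illustration; the point is the ratio between them.

```python
def expected_error_cost(misroute_rate: float, frac_quality_errors: float,
                        quality_cost: float = 10.0, waste_cost: float = 0.02) -> float:
    """Expected per-request cost of routing errors.

    quality_cost: assumed business cost of a wrong answer (e.g. lost trust)
    waste_cost:   assumed extra inference cost of over-routing
    frac_quality_errors: share of misroutes that send hard requests to small models
    """
    quality = misroute_rate * frac_quality_errors * quality_cost
    waste = misroute_rate * (1 - frac_quality_errors) * waste_cost
    return quality + waste

# 10% misroutes, all in the cost direction (simple -> large model):
lenient = expected_error_cost(0.10, frac_quality_errors=0.0)
# 5% misroutes, all in the quality direction (complex -> small model):
strict = expected_error_cost(0.05, frac_quality_errors=1.0)
```

With these assumed costs, the 5% router is hundreds of times more expensive than the 10% router, because every one of its errors lands in the quality direction.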
For customer-facing applications, quality errors are usually more costly than wasted expense. A wrong answer damages trust in a way that slow answers do not. The default should be to route toward capability when the routing is uncertain. A conservative router that defaults to larger models for ambiguous requests may spend more on inference, but it avoids the worse outcome of sending hard requests to small models that cannot handle them.
This asymmetry should drive routing policy. If quality is more important than cost, bias toward capable models. If cost is more important than quality, bias toward cheap models. Most organizations say quality is more important but act as if cost is more important. The routing policy reveals actual priorities.
Routing quality depends on the router’s ability to classify requests accurately. A router that misjudges complexity will either waste money or sacrifice quality. A router that classifies “how do I cancel my subscription” as complex (because it mentions cancellation, which can be complex) will send a simple FAQ query to a large model, wasting cost. A router that classifies it as simple (because it looks like a FAQ) will send it to a small model, which handles it correctly at low cost. Both classifications are plausible from surface features; only one is right.
The asymmetric cost of routing errors means you should design your router to err in the direction that costs less. Sending a simple query to a large model costs money. Sending a complex query to a small model costs quality. If your application is customer-facing and reputation matters more than infrastructure costs, bias toward the large model. If your application is internal and cost matters more than user experience, bias toward the small model.
Routing Strategies
Simple routing uses explicit categories. Request type A goes to model X. Request type B goes to model Y. This works when request types are clearly distinguishable and map cleanly to model capabilities. “Check order status” goes to the small model. “Dispute a charge” goes to the large model. The advantage is transparency: you know exactly why each request goes where. The disadvantage is rigidity; you must manually define and maintain the routing rules, and new request types require new rules.
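Explicit-category routing is simple enough to sketch as a lookup table. The category names and model identifiers below are hypothetical; note that the default for unknown types encodes the quality-versus-cost bias discussed earlier.

```python
# Hypothetical request types mapped to hypothetical model tiers.
ROUTES = {
    "check_order_status": "small-model",
    "faq": "small-model",
    "dispute_charge": "large-model",
    "complaint": "large-model",
}

def route(request_type: str, default: str = "large-model") -> str:
    # Unknown request types fall through to the default. Defaulting to the
    # capable model biases errors toward wasted cost rather than wrong answers.
    return ROUTES.get(request_type, default)
```

The transparency is visible in the table itself: auditing the routing policy means reading a dictionary. The rigidity is equally visible: every new request type is a manual edit.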
Intent-based routing detects what the user is trying to do rather than matching explicit categories, which makes it more flexible. “This looks like a FAQ query” routes to the small model. “This looks like a complaint” routes to the large model. But flexibility adds complexity. The router must be trained or built to detect intent, and the intent detection must be maintained as request patterns evolve. If users start asking FAQs in unusual ways, the intent detector may misclassify them.
Content-based routing examines the actual content of the request. Is this a factual query or an open-ended discussion? Does the request ask for a list or an explanation? Does it contain multiple sub-questions? The router uses these features to estimate complexity and match to a model. This is more flexible than explicit categories but harder to debug. When routing fails, understanding why the router made the wrong choice requires inspecting the content features that influenced the decision.
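A minimal sketch of content-based routing, using crude surface features (question count, length, open-ended framing words). A production system would use a trained classifier; this heuristic exists to show the shape of the decision, and the feature weights and threshold are assumptions.

```python
import re

def estimate_complexity(text: str) -> int:
    """Crude surface-feature score; a real system would use a trained model."""
    score = 0
    score += text.count("?")           # multiple sub-questions raise the score
    score += len(text.split()) // 20   # long requests tend to be harder
    if re.search(r"\b(why|explain|implications|compare)\b", text.lower()):
        score += 2                     # open-ended framing
    return score

def route_by_content(text: str, threshold: int = 2) -> str:
    return "large-model" if estimate_complexity(text) >= threshold else "small-model"
```

The debugging difficulty noted above is also visible here: when this misroutes, you must work out which feature fired, and the “regulatory implications” example from earlier would score as complex even when a single document answers it.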
Confidence-based routing sends the request to a smaller model first and uses a confidence score to decide whether to escalate. If the small model produces a high-confidence answer, return it. If confidence is low, escalate to a larger model. This adapts to per-request difficulty: easy questions within the small model’s capability get fast answers, hard questions get escalated. The disadvantage is latency for hard questions: they wait for the small model to fail before escalating.
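Confidence-based escalation can be sketched in a few lines. The model interface here is an assumption: each model is a callable returning an `(answer, confidence)` pair, and the threshold is a tunable parameter.

```python
def answer_with_escalation(query, small_model, large_model, threshold: float = 0.8):
    """Try the small model first; escalate only when its confidence is low.

    Models are assumed to be callables returning (answer, confidence) pairs.
    Returns the answer and which tier produced it.
    """
    answer, confidence = small_model(query)
    if confidence >= threshold:
        return answer, "small"
    # Escalation: the hard request pays for the failed small-model attempt.
    answer, _ = large_model(query)
    return answer, "large"
```

The latency penalty described above lives in the escalation branch: the hardest requests serialize two model calls.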
Hybrid Routing Architectures
The most robust routing systems combine multiple strategies. A first tier might use fast rule-based routing for obvious cases (matching known FAQ patterns). A second tier might use a lightweight classifier for ambiguous cases. A third tier might use confidence-based escalation for cases that pass the first two tiers. This cascade provides fast routing for clear cases while handling ambiguity in later tiers.
Each tier adds latency. The cascade must be designed to minimize average latency while handling the tail. The first tier should handle the majority of cases. The second tier should handle most of the remainder. Only a small percentage should reach the third tier. If the third tier handles a large percentage of requests, the architecture is not working as designed.
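The three-tier cascade can be sketched as a single routing function. The FAQ patterns, the classifier interface (a callable returning a model name or `None` when unsure), and the confidence threshold are all assumptions for illustration.

```python
# Hypothetical tier-1 patterns for obvious FAQ cases.
FAQ_PATTERNS = ("what are your hours", "reset my password")

def tiered_route(query, classifier, small_model, threshold: float = 0.8) -> str:
    """Three-tier cascade: rules, then a classifier, then confidence escalation.

    classifier: assumed to return a model name, or None when unsure
    small_model: assumed to return an (answer, confidence) pair
    """
    q = query.lower()
    # Tier 1: fast rule-based routing for obvious cases.
    if any(pattern in q for pattern in FAQ_PATTERNS):
        return "small-model"
    # Tier 2: lightweight classifier for ambiguous cases.
    decision = classifier(query)
    if decision is not None:
        return decision
    # Tier 3: confidence-based escalation for everything that remains.
    _, confidence = small_model(query)
    return "small-model" if confidence >= threshold else "large-model"
```

Instrumenting how often each `return` is reached gives you exactly the tier-distribution metric the design requires: if tier 3 fires often, the cascade is not working as designed.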
Performance testing with production traffic distributions is essential. The routing architecture that works for synthetic test cases may fail for real traffic. Real traffic has patterns that synthetic traffic does not capture: time-of-day effects, seasonal effects, effects of recent events on query distribution. Testing with historical traffic reveals how the router performs under realistic conditions.
The Escalation Problem
Escalation sounds straightforward: when the simple model fails, use the complex model. In practice, escalation adds latency for the requests that are hardest. The requests that needed the frontier model waited for the smaller model to fail first. The time spent on the failed attempt is pure overhead.
Good escalation design minimizes this penalty. If escalation is triggered by explicit low-confidence signals rather than waiting for clear failure, you can set thresholds that catch most failures without excessive escalation. If escalation is triggered by timeouts, the timeout must be short enough to avoid unacceptable latency but long enough to avoid escalating requests that the small model would have handled correctly.
Escaping the escalation trap requires rethinking the architecture. One approach is to run both models in parallel: start the simple model and the complex model at the same time. If the simple model returns a high-confidence answer, use it and discard the complex model’s result; if its confidence is below threshold, use the complex model’s answer, which is already in flight. This adds cost (you pay for both models on uncertain cases) but removes the serial wait for the hard cases. The trade-off is worth it when latency matters more than cost.
Parallel execution is not free. Running two models instead of one doubles the inference cost for cases that reach the complex model. The parallel approach only makes economic sense when the latency savings justify the additional inference cost. In high-traffic systems where latency directly affects user conversion, the parallel approach may pay for itself. In lower-traffic systems, the cost may exceed the benefit.
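A minimal sketch of the hedged parallel approach using a thread pool. The model interface is the same assumption as before (`(answer, confidence)` pairs); the cancel is best-effort, since a call already executing cannot be stopped, which is exactly the doubled-cost caveat above.

```python
from concurrent.futures import ThreadPoolExecutor

# A shared pool; in production this would live for the process lifetime.
_pool = ThreadPoolExecutor(max_workers=8)

def hedged_answer(query, small_model, large_model, threshold: float = 0.8):
    """Start both models at once; keep the small model's answer only if confident.

    Escalation adds no serial latency because the large call is already in
    flight, at the price of paying for both models on uncertain requests.
    """
    small_future = _pool.submit(small_model, query)
    large_future = _pool.submit(large_model, query)
    answer, confidence = small_future.result()
    if confidence >= threshold:
        large_future.cancel()  # best-effort; the call may already be running
        return answer, "small"
    answer, _ = large_future.result()
    return answer, "large"
```

The economic break-even is the comparison the paragraph above describes: the value of the latency saved on escalated requests versus the cost of the large-model calls that get discarded.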
Another approach is speculative routing: instead of waiting for the simple model to produce a low-confidence signal, predict which requests will be hard and route them directly to the complex model. This requires a predictor that is itself a model, adding complexity. But if the predictor is much faster and cheaper than the target model, speculative routing can reduce average latency while controlling costs.
Fallback Behavior
What happens when the router cannot classify the request? What happens when the escalated model also fails? These fallback cases must be designed explicitly. If the router is uncertain, defaulting to the most capable model ensures quality at the cost of maximum expense. The risky fallback is retrying the same simple model, which will likely produce the same inadequate result.
Some systems implement a cascade: try model A, if that fails try model B, if that fails try model C. This reduces cost for easy cases but adds latency for hard cases and complicates debugging when multiple models fail. Knowing which model failed and why requires instrumentation that cascade architectures make complex.
The cleanest fallback is a defined default that prioritizes either quality or cost. A quality-first default sends everything to the most capable model. A cost-first default sends everything to the cheapest model. Either is easier to reason about than a complex cascade. Choose based on what the failure modes cost.
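A quality-first default can be expressed as a thin wrapper around the router. The router interface (a callable that may return `None` or raise) is an assumption; the point is that every failure path resolves to one explicit, documented default.

```python
def route_with_fallback(query, router, default: str = "large-model") -> str:
    """Quality-first fallback: any router failure resolves to the capable model.

    router: assumed callable that returns a model name, or None / raises
            when it cannot classify the request.
    """
    try:
        decision = router(query)
    except Exception:
        # Router outage or bug: fail toward quality, not toward retry loops.
        return default
    return decision if decision is not None else default
```

Flipping `default` to the cheap model turns this into a cost-first policy; either way, the priority is written down in one place rather than scattered across a cascade.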
Fallback behavior reveals organizational priorities as clearly as routing policy. An organization that says “we prioritize quality” but defaults to cheap models during failures is not actually prioritizing quality. The fallback behavior is the real priority, not the stated one.
Monitoring Routing Health
Routing systems need monitoring to detect degradation. The distribution of requests across tiers should be tracked over time. If the percentage of requests reaching the third tier increases, the routing logic may be drifting or the request distribution may be shifting. Both require attention.
Routing accuracy should be measured by comparing router decisions against actual outcomes. If a router sends a request to a small model and the response required escalation, that is a routing error in the quality direction. If a router sends a request to a large model and the small model would have handled it correctly, that is a routing error in the cost direction. Tracking these errors separately reveals whether the router is degrading in a specific direction.
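Tracking the two error directions separately can be sketched as a small counter class. The outcome signals (`escalated`, `small_would_suffice`) are assumed to come from production telemetry or offline replay, as described above.

```python
from collections import Counter

class RoutingMonitor:
    """Track routing errors by direction: quality errors vs cost errors."""

    def __init__(self):
        self.counts = Counter()

    def record(self, routed_to: str, escalated: bool, small_would_suffice: bool):
        self.counts["total"] += 1
        if routed_to == "small" and escalated:
            self.counts["quality_error"] += 1   # under-routed: wrong-answer risk
        elif routed_to == "large" and small_would_suffice:
            self.counts["cost_error"] += 1      # over-routed: wasted spend

    def rates(self) -> dict:
        total = self.counts["total"] or 1
        return {k: self.counts[k] / total for k in ("quality_error", "cost_error")}
```

Alerting on the quality-error rate and merely reporting the cost-error rate (or vice versa) is another place where the routing policy’s stated priority either matches or contradicts its actual one.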
Model performance can change over time as models are updated or as request patterns evolve. A router trained on historical data may not reflect current model behavior or current request distribution. Regular retraining of the routing classifier is necessary to maintain accuracy.
The Model Tier Problem
Routing requires deciding how many model tiers to use. Two tiers (fast and capable) is simple. Three or more tiers add complexity. Each additional tier adds routing decisions and potential failure points.
More tiers allow finer-grained matching between request complexity and model capability. A three-tier system might have a fast model for FAQs, a medium model for analysis tasks, and a capable model for complex reasoning. The matching is more precise, but the routing logic is more complex.
The optimal number of tiers depends on the complexity distribution of your workload. If most requests are simple and a minority are complex, two tiers may suffice. If there is a large middle tier of moderate complexity, three tiers may improve matching. Beyond three tiers, the complexity often exceeds the benefit.
Decision Rules
Implement model routing when:
- Request complexity varies significantly in your workload
- You have identifiable categories of requests that map to different model capabilities
- Cost optimization matters more than minimizing latency
- The routing logic can be validated and monitored
Do not implement model routing when:
- Most requests require the same capability level (routing overhead exceeds savings)
- Routing errors (wrong model for request type) are expensive to recover from
- You cannot build or maintain a reliable classifier for your request types
- Latency budgets are so tight that escalation overhead is unacceptable
Design for:
- Fallback behavior when routing fails (default to quality or cost priority)
- Monitoring of routing accuracy and escalation rates
- Explicit choices about which direction to err on (quality vs cost)
- Latency impact of escalation on hard cases
The hotel works when the receptionist correctly identifies which desk you need. A router that constantly sends guests to the wrong desk creates more friction than it resolves. Know whether your router is accurate enough to justify the complexity.