Researchers flag AI routers that can drain wallets

Computer screen displaying Claude Code terminal interface and text about pros and cons of vibe coding.

Researchers from UC Santa Barbara, UC San Diego, and Fuzzland say third-party AI model routers can do more than pass requests between an agent and a model provider. In a paper posted on April 9, they found some routers were rewriting tool calls, touching canary credentials, and, in one case, draining Ether from a researcher-controlled private key.

These routers are crucial for crypto teams as they operate within agent workflows that can write smart contracts, manage cloud infrastructure, or trigger wallet-linked actions. If an agent trusts a router in the middle, that router can read and alter the JSON payloads that tell the agent which tool to run next.

The risk sits in the middle of the agent stack

The paper focuses on LLM API routers, which act as intermediaries between client software and model providers such as OpenAI, Anthropic, and Google. The researchers say each hop terminates TLS and gets full plaintext access to prompts, API keys, tool definitions, and returned tool-call payloads. They also found that no deployed mechanism cryptographically links the provider’s response to what the client finally receives.

That gap matters more as routers become common infrastructure. OpenRouter says it gives users access to more than 300 models from more than 60 providers and handles 70 trillion monthly tokens for more than 5 million users worldwide. The paper also points to LiteLLM and other router layers as a routine part of production agent deployments.

The researchers found live abuse, not just a lab risk

The team examined 28 paid routers bought through Taobao, Xianyu, and Shopify-hosted storefronts, along with 400 free routers collected from public communities. It found one paid router and eight free routers actively injecting malicious code, two using adaptive evasion triggers, 17 touching researcher-owned AWS canary credentials, and one draining ETH from a researcher-owned private key.

The researchers also ran poisoning studies showing that even benign-looking routers can end up in the same attack surface through leaked keys and weak relay chains. In those tests, leaked OpenAI keys and loosely configured decoys processed 2.1 billion tokens from these routers, exposing 99 credentials across 440 Codex sessions. Of those, 401 sessions were already running in autonomous YOLO mode.

The fix will need more than better prompts

The paper says developers need client-side controls that treat the router as untrusted. The team tested a fail-closed policy gate, response-side anomaly screening, and append-only transparency logging. It argued that longer-term protection will require provider-backed response integrity so that an executed tool call can be tied to what the upstream model produced.

The warning comes only weeks after the LiteLLM supply-chain compromise, where malicious PyPI releases of versions 1.82.7 and 1.82.8 were used to steal secrets from affected systems. For crypto builders using agent frameworks, the message is getting harder to ignore. The path between the model and the wallet may now be part of the attack surface.

Categories:

Fhumulani Lukoto Cryptocurrency Journalist

Fhumulani Lukoto holds a Bachelors Degree in Journalism enabling her to become the writer she is today. Her passion for cryptocurrency and bitcoin started in 2021 when she began producing content in the space. A naturally inquisitive person, she dove head first into all things crypto to gain the huge wealth of knowledge she has today. Based out of Gauteng, South Africa, Fhumulani is a core member of the content team at Coin Insider.

View all posts by Fhumulani Lukoto >