PocketBrains

Model Card

Published March 27, 2026

PocketPlanning-1

The planning core behind PocketBrains Agentic Model.

Built on Qwen3.5 27B and trained on specialized, personalized data, PocketPlanning-1 is designed for agent products that need stronger task completion and better unit economics at the same time. The cost advantage does not come from one trick. It comes from a planning-first architecture, task-specific training, and routing that only uses expensive frontier reasoning when the workflow actually needs it.

τ²-Bench telecom

0.935

Reported task pass rate for PocketBrains Agentic Model.

Pricing

$0.20 / $1.00

Input / output API pricing built for scaled agent usage.

Positioning

Better than mini

Above GPT-5.4 mini on the reported benchmark, at materially lower cost.

Overview

Why PocketPlanning-1 exists

Most agent stacks still spend too much on the wrong tokens. The expensive part is rarely raw generation alone. The expensive part is bad planning, weak routing, and letting cheap requests escalate into costly ones.

PocketPlanning-1 is the planning layer behind PocketBrains Agentic Model. The goal is not to chase the absolute top of every frontier benchmark regardless of price. The goal is to shift the price-performance curve in a direction that actually matters for teams shipping agents as products.

Why It Is Cheaper

The price advantage comes from architecture, training, and routing together

PocketBrains Agentic Model is cheaper because it is not paying frontier-model rates for every token in the loop. The system is designed so the planning layer is efficient by default, and expensive reasoning is used selectively instead of universally.

Architecture

Planning-first instead of frontier-first

PocketPlanning-1 is optimized for planning, decomposition, and orchestration. That means the stack does not need to run a much larger, more expensive model for the full request path just to decide what should happen next.

Training

Specialized data reduces wasted reasoning

Training on specialized, personalized data improves decision quality in the planning layer. Better planning means fewer unnecessary steps, fewer bad escalations, and fewer expensive output tokens spent recovering from weak orchestration.

Routing

Claude is used when it adds value, not by default

PocketBrains Agentic Model combines PocketPlanning-1 with Claude 4.6 Sonnet, but the economics come from routing. PocketPlanning-1 handles the planning core, and frontier reasoning is invoked only for the parts of the workflow that truly need it.
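The routing described above can be sketched as a simple dispatch rule. Everything in this sketch is an illustrative assumption, not published PocketBrains internals: the model identifiers, the `Step` shape, and the idea that the planner tags each step with an escalation flag.

```python
from dataclasses import dataclass

PLANNER = "pocketplanning-1"      # hypothetical identifier for the planning core
FRONTIER = "claude-4.6-sonnet"    # hypothetical identifier for the escalation target

@dataclass
class Step:
    description: str
    needs_frontier_reasoning: bool  # assumed to be set by the planner; criterion is illustrative

def route(step: Step) -> str:
    """Dispatch a step to the frontier model only when the plan flags it."""
    return FRONTIER if step.needs_frontier_reasoning else PLANNER

# Most steps stay on the cheap planning core; only the flagged step escalates.
plan = [
    Step("parse user intent", False),
    Step("draft a multi-constraint compliance summary", True),
    Step("format final response", False),
]
assignments = [route(s) for s in plan]
```

The economic point is visible in `assignments`: two of three steps never touch frontier pricing, which is how premium spend stays out of the default request path.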

Short version

This is not just a cheaper base model. It is a cheaper agent system design. Architecture lowers the default cost, training improves planning quality, and routing prevents premium model spend from leaking into every request.

Headline Result

0.935 on τ²-Bench telecom at $0.20 input and $1.00 output

On the benchmark figures provided here, PocketBrains Agentic Model posts a 0.935 task pass rate on τ²-Bench telecom. That places it above GPT-5.4 mini at 0.934, above GPT-5.4 nano at 0.925, and well above MiniMax M2.1 at 0.870.

What matters

PocketBrains does not need to beat every frontier model on absolute score to be a strong product decision. It needs to beat the efficient tier where real agent workloads are budgeted. On the supplied numbers, it does.

Competitive Snapshot

Where the model lands

Model                        Provider       τ²-Bench telecom   Input   Output
Claude Opus 4.6              Anthropic      0.993              $5.00   $25.00
GPT-5.4                      OpenAI         0.989              $2.50   $15.00
GPT-5.1                      OpenAI         0.956              $1.25   $10.00
PocketBrains Agentic Model   PocketBrains   0.935              $0.20   $1.00
GPT-5.4 mini                 OpenAI         0.934              $0.75   $4.50
GPT-5.4 nano                 OpenAI         0.925              $0.20   $1.25
MiniMax M2.1                 MiniMax        0.870              $0.30   $1.20

Snapshot uses the benchmark and pricing figures supplied for public model entries on March 27, 2026.

Economics

The point is better margin, not just a nicer chart

vs GPT-5.4 mini

Higher score at a much lower price

PocketBrains Agentic Model edges past GPT-5.4 mini on the reported benchmark while cutting input pricing from $0.75 to $0.20 and output pricing from $4.50 to $1.00.

vs GPT-5.4 nano

Same input price, better output economics

At the same $0.20 input price, PocketBrains delivers a higher reported score and a lower output price than GPT-5.4 nano.

vs MiniMax M2.1

Cheaper on both sides of the token bill

PocketBrains comes in below MiniMax M2.1 on both input and output pricing while also outperforming it on the supplied τ²-Bench telecom result.
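The three comparisons above reduce to straightforward per-token arithmetic. The sketch below uses only the prices from the competitive snapshot; the 2M-input / 0.5M-output workload is an invented example for illustration, not a PocketBrains figure.

```python
# Per-1M-token prices from the competitive snapshot: (input, output), USD.
PRICES = {
    "PocketBrains Agentic Model": (0.20, 1.00),
    "GPT-5.4 mini": (0.75, 4.50),
    "GPT-5.4 nano": (0.20, 1.25),
    "MiniMax M2.1": (0.30, 1.20),
}

def workload_cost(model: str, input_tokens_m: float, output_tokens_m: float) -> float:
    """Cost in USD for a workload measured in millions of tokens."""
    input_price, output_price = PRICES[model]
    return input_tokens_m * input_price + output_tokens_m * output_price

# Illustrative agent workload: 2M input tokens, 0.5M output tokens.
for model in PRICES:
    print(f"{model}: ${workload_cost(model, 2.0, 0.5):.2f}")
```

On this assumed workload, PocketBrains comes out at $0.90 against $3.75 for GPT-5.4 mini, with the gap driven mostly by the output-token price.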

Intended Use

Built for agent products, not benchmark theater

PocketPlanning-1 is intended for the part of an agent stack that decides what happens next: planning, decomposition, routing, escalation, and orchestration under cost pressure.
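"Escalation under cost pressure" can be sketched as a budget-aware gate. The signals and thresholds here (a planner confidence score, a per-call cost estimate, the 0.7 floor) are illustrative assumptions, not published PocketBrains behavior.

```python
def maybe_escalate(confidence: float,
                   remaining_budget_usd: float,
                   est_frontier_cost_usd: float,
                   confidence_floor: float = 0.7) -> bool:
    """Escalate to a frontier model only when planner confidence is low
    AND the remaining request budget can absorb the frontier call."""
    return confidence < confidence_floor and est_frontier_cost_usd <= remaining_budget_usd

# Low confidence with budget available: escalate.
# High confidence, or an exhausted budget: stay on the cheap planning path.
```

A gate like this is what keeps cheap requests from escalating into costly ones: the frontier call has to be justified by both uncertainty and headroom.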

If your product economics depend on high task completion without paying frontier prices on every request, the model belongs in the conversation. That is the core claim this release is making.