5-minute wow, once per framework.
Each example ships with the same four things: the problem it solves, exact before/after token counts, a copy-paste install command, and a 2-minute demo. Runnable code lives in the repo. Numbers come from our 15-turn product-support benchmark.
LangChain
ConversationBufferMemory replays every prior turn on each request, so cumulative token cost grows quadratically and explodes past 10 turns.
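The same replay pattern underlies most of the examples on this page. A dependency-free sketch of the arithmetic: when request n carries all n prior turns, the tokens sent across a conversation grow quadratically. The 50-tokens-per-turn figure is an illustrative assumption, not a benchmark number.

```python
# Sketch: cumulative tokens sent when every request replays the full history.
# 50 tokens per turn is an illustrative assumption, not a measured value.
TOKENS_PER_TURN = 50

def tokens_sent(turns: int) -> int:
    """Total tokens sent across `turns` requests, where request t
    carries all t turns accumulated so far (full-history replay)."""
    return sum(TOKENS_PER_TURN * t for t in range(1, turns + 1))

for n in (5, 10, 15):
    print(n, tokens_sent(n))
# 5  ->  750
# 10 -> 2750
# 15 -> 6000  (tripling the turns ~8x'd the tokens)
```

The takeaway is the shape of the curve, not the constants: doubling conversation length roughly quadruples total tokens billed.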
OpenAI Agents SDK
Every tool call re-sends the prior conversation into the agent's context.
CrewAI
Each crew member receives all prior crew outputs as context — token cost scales with crew size.
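A dependency-free sketch of how prior-output context scales with crew size, assuming a sequential crew where member i receives the outputs of members 1..i-1. The 400-tokens-per-output figure is an illustrative assumption.

```python
# Sketch: prior-output context consumed in one sequential crew pass.
# 400 tokens per member output is an illustrative assumption.
OUTPUT_TOKENS = 400

def crew_context_tokens(members: int) -> int:
    """Tokens of prior-output context fed across one pass:
    member i sees the outputs of all i-1 earlier members."""
    return sum(OUTPUT_TOKENS * i for i in range(members))

for m in (2, 4, 8):
    print(m, crew_context_tokens(m))
# 2 ->   400
# 4 ->  2400
# 8 -> 11200  (doubling the crew ~4.7x'd the context)
```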
MCP
Claude Desktop, Cursor, and Continue accumulate tool outputs across the session — context window fills with repetition.
Vercel AI SDK
streamText() receives the full message history on every request — token cost climbs with every turn.
Your framework isn't here yet?
The Nocturnus compression layer is framework-agnostic — you can call POST /context from any language. We're adding integrations based on what developers ask for. Open a request issue or send a pull request with your own example.
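Because the layer is just an HTTP endpoint, a minimal client fits in a few lines. The POST /context route comes from this page; the base URL, request payload, and response shape below are assumptions for illustration, so check the actual API reference before relying on them.

```python
# Hypothetical stdlib-only client for the framework-agnostic endpoint.
# The POST /context route is documented above; the base URL, payload,
# and response fields here are assumptions -- verify against the API docs.
import json
import urllib.request

def build_request(messages, endpoint="https://api.nocturnus.example/context"):
    """Build the POST /context request from a raw message list."""
    body = json.dumps({"messages": messages}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def compress(messages):
    """Send the history and return the (assumed) compressed message list."""
    req = build_request(messages)
    with urllib.request.urlopen(req) as resp:  # needs a live endpoint
        return json.loads(resp.read())

if __name__ == "__main__":
    history = [{"role": "user", "content": "My order never arrived."}]
    print(build_request(history).full_url)
```

Swap the compressed list back into whatever your framework sends to the model; no framework-specific adapter is required.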