
How We Cut AI Token Costs by 30-60% with a Simple Data Format Swap

November 27, 2025 · 3 min read

For decades, JSON has been the default format for data interchange — it’s structured, predictable, and almost universally supported. But in the world of LLMs, the cost of communication depends on more than just semantics.

Every bracket, brace, quote and comma counts toward the token tally, and therefore toward your API bill. In many AI-driven systems, token inefficiency can silently drive up costs and even limit what you can send to the model in a single prompt.

At first glance that may not sound like much, but the cost is not just financial: it's a design constraint. The more tokens consumed by syntax rather than data, the less room remains for real content, context, and reasoning.

So we asked ourselves: what if we could make our data lighter without changing the meaning?

It is this challenge that prompted a broader conversation in the community, and led to the emergence of TOON — a format specifically designed for LLM workflows, built to minimize token waste.


✨ The Strategic Upgrade: TOON (Token Oriented Object Notation)

TOON (Token-Oriented Object Notation) is a lightweight format built for LLM workflows. It removes the extra symbols that JSON depends on and keeps only the essential key-value structure, letting you do more with every request.

As LLMs continue to reshape how we build, automate, and interact with software, the demand for smarter, lighter data formats is exploding. This is where TOON shines. In side-by-side comparisons with JSON, TOON consistently cuts token usage significantly—directly improving performance and lowering API expenses.

If you're building for the GenAI era, TOON isn’t just an alternative format—it’s a strategic upgrade.

Let's start with a real example. Imagine you're building something that sends user data to an LLM for analysis:

In JSON:

{
  "name": "John",
  "age": 30,
  "city": "Austin"
}

In TOON:

name: John
age: 30
city: Austin

Most of our LLM-based automations don't need the full structure of JSON. No braces, no commas, no quotes: just keys and values.
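Producing that output takes only a few lines. Here is a minimal sketch in Python; the `to_toon` helper is our own illustration rather than an official library, and it assumes TOON's `key: value` syntax for flat objects of scalars:

```python
import json

def to_toon(obj: dict) -> str:
    """Serialize a flat dict of scalars into TOON-style key-value lines."""
    return "\n".join(f"{key}: {value}" for key, value in obj.items())

user = {"name": "John", "age": 30, "city": "Austin"}

toon = to_toon(user)
print(toon)
# name: John
# age: 30
# city: Austin

# The TOON encoding carries no delimiter overhead, so it is shorter
# than the JSON encoding of the same data:
print(len(toon), "<", len(json.dumps(user)))
```

A real converter would also need to handle nesting and values containing delimiters; for flat scalar objects like the one above, this is the whole idea.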

Industry benchmarks show that for many data structures—especially flat arrays and tabular formats — TOON can reduce token usage by 30–60% compared to JSON.



💸 Real Savings in AI Automation

In one of our AI automation suites, we replaced JSON with TOON for a simple data set. The idea was straightforward: if we could send the same data with fewer tokens, we could reduce costs and increase prompt capacity. The gains were consistent: token usage dropped, context windows stretched further, and prompts stayed easier to read.

Results

That's a ~20% reduction for a single workflow. Applied to an automation suite running 30 times a day, that's a 20% drop in cost on every run. Multiply that by hundreds of automations across your systems and the savings compound.


📊 Where TOON really fits and where it doesn't

TOON shines when data is uniform, flat, or tabular: flat objects, records with identical fields, and prompt templates, the kinds of structures common in many AI prompts. In these contexts, TOON dramatically reduces token overhead while preserving semantic clarity.

However, TOON isn't a silver-bullet replacement for JSON across the board. For deeply nested, hierarchical, or schema-rich data, JSON's explicit nesting syntax remains more suitable, reliable, and easier to validate. For inter-service APIs or components outside LLM contexts, JSON retains its advantages of universality, schema support, and ecosystem compatibility. These trade-offs are echoed in broader community analysis.


🚀 Final Thoughts

JSON wasn’t built for a token-billed world. TOON helps reclaim wasted space so prompts go further and cost less. Sometimes efficiency isn’t about adding complexity — it’s about removing what you don’t need.

In the LLM era, every token counts. And small changes, like dropping braces and quotes in favor of plain key-value lines, can create surprisingly big wins.

Let's keep pushing boundaries and never stop engineering smarter, leaner solutions—because innovation never rests, and neither should we 💫.
