Beyond JSON: High-Density Data for Agentic Payloads
If your agents are talking to each other using verbose JSON, you’ve built a bottleneck. Human-readability in intermediate agentic nodes is a structural failure.
The braces, the quotes, the whitespace—this is the JSON Tax. In an LLM, these aren't just characters; they are tokens that the attention mechanism has to track. When you’re chaining a 10-node swarm, you are paying for that formatting 10 times over.
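You can see the tax before you even leave JSON. A minimal sketch, using character count as a crude proxy for token count (real tokenizers differ, but whitespace and punctuation do become tokens):

```python
import json

record = {"id": "usr_99", "action": "verify", "status": "active"}

# Pretty-printed JSON: every brace, quote, newline, and indent is payload.
pretty = json.dumps(record, indent=2)

# Compact JSON: separators=(",", ":") drops the optional whitespace.
compact = json.dumps(record, separators=(",", ":"))

print(len(pretty), len(compact))  # compact is always shorter
```

Compacting the separators is the zero-risk first step; everything after it trades human-readability for density.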
Token Optimized Object Notation (TOON)
We need to maximize Entropy-per-Token. For agent-to-agent communication, the "lingua franca" should be a minimal, high-density protocol.
Standard JSON:
{"id": "usr_99", "action": "verify", "status": "active"}
TOON / Implicit Mapping:
ID:99|A:VRFY|S:ACTV
The latter cuts the count by roughly 60% in this example. It doesn't look like anything to a human, but to a model with a billion parameters of pattern recognition, it's crystalline. It's pure signal.
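There is no TOON standard; the mapping is whatever your agents agree on. A minimal sketch of one possible encoder/decoder, where the abbreviation tables (`A`, `S`, `VRFY`, `ACTV`) and the stripped `usr_` prefix are assumptions taken from the example above:

```python
# Hypothetical abbreviation tables shared by both agents (an assumption,
# not a standard). The schema must be agreed out-of-band.
KEYS = {"id": "ID", "action": "A", "status": "S"}
VALS = {"verify": "VRFY", "active": "ACTV"}

def to_toon(obj: dict) -> str:
    """Collapse a known-schema dict into a pipe-delimited record."""
    parts = []
    for key, short in KEYS.items():
        val = str(obj[key]).removeprefix("usr_")  # drop a known id prefix
        parts.append(f"{short}:{VALS.get(val, val)}")
    return "|".join(parts)

def from_toon(line: str) -> dict:
    """Invert the mapping. Note: stripped prefixes like 'usr_' are lossy;
    the decoder must know to restore them if the full id matters."""
    rk = {v: k for k, v in KEYS.items()}
    rv = {v: k for k, v in VALS.items()}
    out = {}
    for field in line.split("|"):
        short, val = field.split(":", 1)
        out[rk[short]] = rv.get(val, val)
    return out

print(to_toon({"id": "usr_99", "action": "verify", "status": "active"}))
# ID:99|A:VRFY|S:ACTV
```

The design choice worth noticing: all the "schema" lives in the shared tables, not the payload. That is exactly where the savings come from, and exactly why both ends must be versioned together.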
Few-Shot Structural Inference
Modern models are so good at few-shot learning that you don't even need keys. If you provide a strict columnar example in the system prompt, the model can output raw values.
Instead of asking for a list of objects, ask for a Pipe-Delimited Matrix. If your downstream consumer is another agent, it ingests that matrix in a fraction of the tokens the equivalent JSON would cost, and a few lines of deterministic code can rehydrate it whenever a human finally needs to look. This is Entropy Maximization.
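The rehydration side can be trivial. A sketch, assuming the column order (`id`, `action`, `status` here) is fixed in the system prompt so rows carry values only:

```python
# Assumed schema, pinned in the system prompt; rows carry only values.
COLUMNS = ("id", "action", "status")

def parse_matrix(payload: str) -> list[dict]:
    """Turn newline-separated, pipe-delimited rows back into dicts."""
    rows = []
    for line in payload.strip().splitlines():
        values = line.split("|")
        if len(values) != len(COLUMNS):  # a malformed row should fail loudly
            raise ValueError(f"got {len(values)} fields, expected {len(COLUMNS)}")
        rows.append(dict(zip(COLUMNS, values)))
    return rows

print(parse_matrix("99|VRFY|ACTV\n12|REVK|SUSP"))
```

The length check matters more here than it would with JSON: a positional format has no keys to catch a model that drifts off-schema, so validation is your only guardrail.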
The Point: Scaling Intelligence, Not Formatting
We are entering an era of infinite agentry. If you want to scale a swarm that processes a billion events, you cannot afford the luxury of human-readable logs at every step.
- Metadata Stripping: Your system prompt should explicitly forbid the model from repeating keys it already knows.
- Recursive Compaction: Use the agent itself to "compress" its internal memory before passing it to the next node.
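Recursive Compaction can be sketched as a loop: hand the scratchpad back to the model until it fits the hand-off budget. Everything here is illustrative — `llm` is any prompt-to-completion callable you wire up yourself, and the prompt wording is an assumption, not a recipe:

```python
def compact_memory(memory: str, llm, budget: int = 500) -> str:
    """Recursive compaction sketch: the agent shrinks its own memory
    before passing it to the next node. `llm` is any callable mapping
    a prompt string to a completion string (wiring is up to you)."""
    while len(memory) > budget:
        memory = llm(
            "Compress the notes below. Keep only facts the next agent "
            "needs. Values only, pipe-delimited, no key names:\n" + memory
        )
    return memory

# Stand-in "model" for demonstration only: truncates the notes. A real
# call would go to your LLM client here.
stub_llm = lambda prompt: prompt.split("\n", 1)[1][:200]
print(len(compact_memory("x" * 2000, stub_llm, budget=500)))  # 200
```

In production you would also cap the number of passes: a model that cannot get under budget will otherwise loop forever.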
Strategic focus is about saying "no" to features. Token engineering is about saying "no" to the braces.