Have You Ever Wondered Why AI Agents Speak in Human Language?
The Hidden Costs and Future Fixes…
Language Models
We all know that Large Language Models (LLMs) communicate in human language via prompts…
This is part of why they have taken the world by storm: Language Models are accessible to the masses.
The reason for the natural language is that language models are trained on vast datasets of human-generated text, making natural language their primary mode of processing and generating information.
Prompts serve as structured instructions that leverage this training, guiding LLMs to produce coherent, contextually relevant responses.
This approach is obviously very intuitive for us as humans: it allows us to interact with models using familiar language, facilitates debugging and enables seamless human-AI collaboration.
However, as highlighted in a recent study, this reliance on human language introduces limitations such as semantic ambiguity and information loss: LLMs project complex internal representations into discrete tokens, which can misalign with the precision needed for machine-to-machine coordination.
AI Agents
AI Agents are powered by Large Language Models (LLMs); the LLM acts as the backbone of the AI Agent while communicating in human language.
Hence much of AI Agent communication takes place in human language. I must hasten to add that there is much development around creating structured output from Language Models and around intra-AI Agent communication.
I have seen nice examples in the LangChain AI Agent implementations where they structure data (JSON) for more efficient intra-AI Agent data exchange.
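To make this concrete, here is a minimal sketch of what structured intra-agent output can look like. It assumes a Pydantic schema; the TaskResult fields and the hard-coded reply are hypothetical illustrations, not the actual LangChain API, although LangChain offers similar structured-output helpers.

```python
# A minimal sketch of structured intra-agent output, assuming a Pydantic (v2) schema.
# The TaskResult schema and the hard-coded reply are invented for illustration.
from pydantic import BaseModel


class TaskResult(BaseModel):
    task: str          # what the agent was asked to do
    status: str        # e.g. "done" or "failed"
    confidence: float  # agent's self-reported confidence, 0.0 - 1.0


# Imagine this JSON string is the raw text an LLM-backed agent produced.
raw_reply = '{"task": "summarise Q3 report", "status": "done", "confidence": 0.87}'

# Validating against the schema gives the receiving agent typed, unambiguous data
# instead of free-form prose it would have to parse and interpret.
result = TaskResult.model_validate_json(raw_reply)
print(result.status, result.confidence)
```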
But, as I have mentioned before, LLMs are trained on massive datasets of human text, making natural language their go-to for processing and generating information.
Advantages:
Clarity: human language is easy for developers to understand, simplifying debugging and monitoring.
Versatility: it allows AI Agents to tackle diverse tasks, from coding to planning, without needing custom protocols.
Human-AI Teamwork: it enables seamless interaction with users, aligning with how we naturally communicate.
Frameworks like AutoGen and CAMEL rely on this approach, letting AI Agents talk to coordinate tasks like code generation or financial analysis.
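The toy sketch below shows the pattern these frameworks rely on: agents coordinating by passing free-form text back and forth. It is not actual AutoGen or CAMEL code; the agents are stubbed with canned replies so the example stays self-contained.

```python
# A toy illustration of the "agents talk in natural language" pattern.
# In a real framework each respond() call would hit an LLM with the message as context.
class ChatAgent:
    def __init__(self, name, replies):
        self.name = name
        self.replies = iter(replies)

    def respond(self, message: str) -> str:
        # Canned reply stands in for an LLM completion.
        return next(self.replies)


planner = ChatAgent("planner", ["Please write a function that reverses a string."])
coder = ChatAgent("coder", ["Here is the code: def rev(s): return s[::-1]"])

# One round of coordination, carried entirely as free-form text.
request = planner.respond("start task")
answer = coder.respond(request)
print(f"{planner.name} -> {coder.name}: {request}")
print(f"{coder.name} -> {planner.name}: {answer}")
```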
But human language isn’t built for machines.
The Hidden Costs
The study highlights critical flaws in using natural language for AI Agent communication:
Ambiguity & Loss
Human language is vague and redundant.
When AI Agents translate their complex internal states into words, information can get compressed and distorted.
This semantic aliasing means two AI Agents might interpret the same message differently, leading to misaligned goals.
Coordination Failures
In multi-AI Agent systems, natural language causes issues like lost-in-conversation (AI Agents losing task context) or pseudo-execution (describing task completion without doing it).
For example, AI Agents often get stuck in reasoning loops due to verbose dialogue, failing to execute tasks.
Structural Mismatch
LLMs are trained to predict the next word, not to maintain consistent roles or synchronise states across agents. This leads to role confusion and behavioural drift, especially in complex tasks with multiple agents.
These problems aren’t just hiccups — they’re systemic. Natural language, great for human nuance, lacks the precision machines need for reliable coordination.
A Better Way Forward
The researchers propose a new paradigm: native multi-agent modelling.
Instead of mimicking human chat, AI Agents should communicate via structured, machine-optimised protocols.
Tensor-Based Communication
Agents could exchange high-dimensional state tensors, preserving the semantic fidelity that lossy words discard.
Role Persistence
Models should explicitly tie each agent’s state to its role, preventing confusion in multi-agent setups.
State Synchronisation
Mechanisms to align agents’ internal states, ensuring they stay on the same page.
Think of it like the human brain: neurons don’t “talk” in sentences but share precise, high-speed signals.
Future AI systems could adopt similar “neural-style” coordination, with natural language reserved for human-facing interfaces like debugging.
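To make the contrast a little more tangible, here is a purely speculative sketch of a tensor-based exchange. The 8-dimensional state vectors and the blending rule are invented for illustration; the paper does not prescribe any of this.

```python
# A speculative sketch of tensor-based agent communication, assuming each agent
# keeps an internal state vector. The dimensions and update rule are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Agent A's internal state: a high-dimensional representation, not a sentence.
state_a = rng.normal(size=8)

# Text channel: the state gets compressed into a few words (lossy).
text_message = "roughly positive outlook, moderate risk"

# Tensor channel: the state is transmitted as-is, with no projection into tokens.
tensor_message = state_a

# Agent B folds the received tensor directly into its own state.
state_b = rng.normal(size=8)
state_b = 0.5 * state_b + 0.5 * tensor_message  # hypothetical synchronisation step

print("text channel carried:", text_message)
print("tensor channel carried:", np.round(tensor_message, 2))
```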
What’s Next?
This shift won’t be easy. Challenges include designing unified multimodal spaces (for example, blending language, vision, and action) and ensuring systems generalise to new tasks or agent groups.
Security is another hurdle — miscommunications could cascade, so we’ll need robust error-checking.
In Conclusion
Human language has powered AI Agents’ rise, but it’s a shaky foundation for their future.
By moving to structured, machine-native communication, we can build AI systems that collaborate reliably at scale. I need to add that I do not really have a conceptual understanding of how this tensor-language might look, especially from a programming, management and telemetry perspective.