Defining Cognitive Tools To Make Language Models Reason
Mimicking AI Agent architecture by Introducing Tools In Prompts
In Short
In-Context Learning (ICL), often implemented via RAG, became popular for two main reasons:
its ability to rival RL-trained models without additional training,
and its ability to unlock the latent, underlying capabilities of language models. With ICL, it was discovered that models prioritise in-context information over fine-tuned data at inference time.
It also introduced a level of model independence, circumventing the need for reinforcement learning.
This study uncovers another latent capability of language models: tools can be defined via prompt engineering to serve as inward-facing mechanisms, called cognitive tools.
These tools structure the language model's reasoning process, eliciting capabilities that were not introduced via model training or fine-tuning.
Are Cognitive Tools As Important as ICL?
Introduction
We have been pushing the boundaries of what Large Language Models (LLMs) can achieve, particularly in reasoning tasks and in-context learning (ICL).
A groundbreaking study from IBM Research introduces an innovative approach to eliciting reasoning in LLMs using “cognitive tools.”
Unlike traditional tools within an AI Agent that connect to external systems, cognitive tools are inwardly focused, designed to guide the internal thought processes of LLMs.
By mimicking the modular architecture of AI Agents, this method significantly boosts reasoning performance, offering a fresh perspective on how we can unlock the latent capabilities of language models.
The Concept of Cognitive Tools
In conventional AI Agent frameworks, tools serve as integration points to the external world — web searches, calculators, MCP Servers or APIs.
However, the IBM study reimagines tools as cognitive operations encapsulated within the LLM itself.
Implemented through prompt engineering, these tools create a modular, agent-like structure that guides the model's reasoning process.
The study identifies four key cognitive tools:
Understand Question
Breaks down a problem into its core components, identifying key concepts, variables and relevant theorems.
Recall Related
Retrieves analogous problems and their solutions to guide reasoning through examples.
Examine Answer
Verifies the current reasoning trace for errors, miscalculations, or overlooked constraints, enabling self-reflection.
Backtracking
Identifies flawed steps in the reasoning process and suggests alternative approaches, akin to exploring new paths in problem-solving.
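The four tools above are, at their core, just prompt templates. A minimal sketch of what such templates might look like follows; the wording and the `render_tool_prompt` helper are illustrative assumptions, not the paper's exact prompts:

```python
# Hypothetical prompt templates for the four cognitive tools.
# The phrasing is illustrative; the IBM paper defines its own prompts.
COGNITIVE_TOOLS = {
    "understand_question": (
        "Break the problem below into its core components. "
        "List the key concepts, variables, and relevant theorems.\n\n{input}"
    ),
    "recall_related": (
        "Recall analogous problems and their worked solutions that could "
        "guide reasoning on the problem below.\n\n{input}"
    ),
    "examine_answer": (
        "Check the reasoning trace below for errors, miscalculations, "
        "or overlooked constraints.\n\n{input}"
    ),
    "backtracking": (
        "Identify any flawed step in the reasoning trace below and "
        "suggest an alternative approach.\n\n{input}"
    ),
}

def render_tool_prompt(tool: str, payload: str) -> str:
    """Fill a cognitive-tool template with the current problem or trace."""
    return COGNITIVE_TOOLS[tool].format(input=payload)
```

Each rendered prompt is then sent to the same underlying model, as described next.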
Each tool operates as a prompt-driven module, executed by the same LLM in a sandboxed context.
The output is fed back into the main reasoning loop, allowing the model to refine its approach dynamically.
Why Cognitive Tools Matter
The introduction of cognitive tools addresses a critical limitation of traditional prompting methods, such as flat prompts or monolithic chain-of-thought (CoT) approaches.
By compartmentalising reasoning steps, cognitive tools reduce interference between operations, enabling clearer and more focused problem-solving.
A New Perspective on Reasoning in LLMs
The IBM research contributes to the ongoing debate about the origins of reasoning in large language models (LLMs).
The study’s findings demonstrate that base models, when equipped with cognitive tools, can reveal latent reasoning abilities developed during pre-training.
This supports the idea that structured, modular prompts can unlock capabilities without heavy reliance on post-training techniques, such as reinforcement learning.
The use of modular prompting provides a more transparent and potentially more efficient alternative or complement to traditional fine-tuning methods.
From an AI Agent perspective, the approach bridges the gap between traditional tool-calling, which relies on external APIs and functions, and the need for modular internal reasoning.
By associating each reasoning step with a specific cognitive tool, the method enhances transparency and explainability.

Are Cognitive Tools As Important as ICL?
The discovery of cognitive tools is arguably as significant as in-context learning, particularly for tasks requiring structured reasoning and transparency.
While ICL revolutionised AI by enabling flexible task adaptation, cognitive tools advance the field by enhancing reasoning depth and interpretability.
Their ability to rival RL-trained models without additional training underscores their potential to reshape AI development.
However, ICL’s broader applicability and foundational role arguably make it the more significant of the two.
Chief Evangelist @ Kore.ai | I’m passionate about exploring the intersection of AI and language. From Language Models, AI Agents to Agentic Applications, Development Frameworks & Data-Centric Productivity Tools, I share insights and ideas on how these technologies are shaping the future.