Agentic Context Engineering (ACE): how LLMs can keep learning without changing their weights
Large Language Models (LLMs) have revolutionized artificial intelligence but have also revealed their limits. One of today’s biggest questions is how to make them smarter without retraining them from scratch. The answer may lie in a new approach: Agentic Context Engineering (ACE).
A new paradigm: improving the context, not the model
Instead of modifying model weights or rewriting prompts (common but limited strategies), ACE proposes a different idea: optimizing the context the model works with.
This framework focuses on how the model learns from its own interactions without changing its internal architecture.
The problems with traditional approaches
ACE identifies two main weaknesses in how models are currently adapted:
- Brevity Bias: context-optimization methods tend to compress instructions into short, generic summaries, sacrificing nuance and specialized knowledge.
- Context Collapse: when the model repeatedly rewrites or summarizes its own context, each full rewrite erodes accumulated detail until valuable information is lost.
Both phenomena limit the models’ accuracy and their ability to handle complex tasks.
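Context collapse is easy to see in a toy model. The sketch below is purely illustrative (the function names and the "keep only the most recent items" stand-in for LLM summarization are assumptions, not ACE's actual mechanism): a monolithic rewrite that compresses the whole context each step silently drops early lessons, while delta-style updates preserve everything.

```python
# Hypothetical sketch: why monolithic rewrites lose information.
# `monolithic_rewrite` stands in for an LLM asked to compress its own
# context under a length budget; we model that as keeping only the
# most recent entries.

def monolithic_rewrite(context, new_lesson, budget=3):
    """Rewrite the whole context each step, keeping only `budget` items."""
    return (context + [new_lesson])[-budget:]

def delta_update(context, new_lesson):
    """Incremental update: append a delta, never rewrite existing entries."""
    return context + [new_lesson]

lessons = [f"lesson-{i}" for i in range(6)]

collapsed, preserved = [], []
for lesson in lessons:
    collapsed = monolithic_rewrite(collapsed, lesson)
    preserved = delta_update(preserved, lesson)

print(len(collapsed))  # 3: the three earliest lessons were silently dropped
print(len(preserved))  # 6: every lesson survives
```

The point of the contrast: information lost in one rewrite can never be recovered in later steps, which is why ACE favors incremental updates over wholesale rewriting.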
The ACE solution: evolving contexts
Inspired by how humans learn (experiment, reflect, consolidate), ACE turns the context into a dynamic playbook that grows and improves with each use.
The process is structured into three modules:
- Generator: produces reasoning paths for new queries, identifying patterns and errors.
- Reflector: analyzes those paths and extracts lessons for improvement.
- Curator: synthesizes those lessons into incremental context updates, preserving coherence and usefulness.
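The three modules above can be sketched as a simple pipeline. This is a minimal illustration, not the paper's implementation: `call_llm` stands in for any chat-model API, the prompts are invented, and the playbook is modeled as a plain list of bullet entries.

```python
from dataclasses import dataclass, field

@dataclass
class Playbook:
    """The evolving context: a growing list of distilled lessons."""
    entries: list[str] = field(default_factory=list)

    def render(self) -> str:
        return "\n".join(f"- {e}" for e in self.entries)

def generate(query: str, playbook: Playbook, call_llm) -> str:
    """Generator: produce a reasoning trace, guided by the current playbook."""
    return call_llm(f"Playbook:\n{playbook.render()}\n\nSolve step by step: {query}")

def reflect(trace: str, call_llm) -> list[str]:
    """Reflector: distill concrete lessons (what worked, what failed) from a trace."""
    raw = call_llm(f"List one lesson per line from this trace:\n{trace}")
    return [line.strip() for line in raw.splitlines() if line.strip()]

def curate(playbook: Playbook, lessons: list[str]) -> Playbook:
    """Curator: merge lessons as incremental deltas; never rewrite old entries."""
    for lesson in lessons:
        if lesson not in playbook.entries:
            playbook.entries.append(lesson)
    return playbook
```

One adaptation step chains the three roles: `trace = generate(query, playbook, call_llm)`, then `playbook = curate(playbook, reflect(trace, call_llm))`. Because the Curator only appends deduplicated deltas, the playbook grows monotonically instead of collapsing under repeated rewrites.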
The result: the model learns from its own performance, without relying on direct human supervision.
Promising results
Tests with ACE show consistent improvements over traditional methods:
- +10.6% performance increase in agent tasks.
- +8.6% improvement in specialized benchmarks (such as financial analysis).
- Lower adaptation cost and latency, achieving continuous self-improvement without retraining.
Toward self-adaptive LLMs
ACE represents a paradigm shift: we move from designing static prompts to creating living, evolving contexts.
With this approach, models not only process information but also learn how to learn.
Instead of chasing ever-larger models, frameworks like ACE invite us to focus on smarter, more adaptive, and sustainable models capable of growing with every interaction.

