Why Your Next Robot Won't Follow Orders
Imagine a robot that doesn't just solve problems but has problems of its own. Not the "my-battery-is-low" kind of problem, but the kind that arises from having its own preferences, habits, and ways of engaging with the world. This isn't science fiction—it's the frontier of enactive robotics, a radical new approach that's turning artificial intelligence inside out.
Traditional robotics has been dominated by what scientists call the "problem-solving paradigm." For decades, the goal has been to build robots that efficiently solve puzzles we give them: navigate mazes, recognize objects, or beat us at chess. But what if this approach misses something fundamental about what it means to be intelligent? [1]
Enactive robotics proposes a dramatic shift: instead of building robots to solve our problems, what if we could build robots that have their own problems? Robots that, like living creatures, must constantly work to maintain their existence and develop their own habits and concerns? This isn't just a philosophical thought experiment—it's leading to robots with surprising abilities, all thanks to a powerful concept called "sensorimotor contingencies."
Since the birth of artificial intelligence in the 1950s, the field has been obsessed with problem-solving. Early pioneers like Allen Newell and Herbert Simon famously declared that "the ability to solve problems is generally taken as a prime indicator that a system has intelligence" [1]. This perspective has driven incredible advances, from chess-playing computers to self-driving cars.
However, this exclusive focus on problem-solving has come at a cost. As researchers note, "the essence of intelligence is to act appropriately when there is no simple pre-definition of the problem or the space of states in which to search for a solution" [1]. Real intelligence isn't just about finding solutions—it's about figuring out what problems are worth having in the first place.
This limitation becomes starkly apparent when we compare even the most advanced robots with the simplest living organisms. A bacterium doesn't solve problems in the same way a computer does, yet it displays a remarkable capacity to maintain its existence and adapt to changing circumstances. What are we missing?
The enactive approach to robotics proposes a fundamentally different starting point. Instead of taking inspiration from computers, it looks to biology—not just the mechanisms of living systems, but their fundamental organization.
Living organisms, from the simplest bacteria to humans, must continually work to maintain their existence. They have their own needs and concerns that emerge from this precarious situation.
This perspective enables robots to develop what enactive researchers call "sensorimotor agency"—the capacity to generate their own goals and concerns.
As one research team puts it, we can "take inspiration from the precarious, self-maintaining organization of living systems to investigate forms of cognition that are also precarious and self-maintaining and that thus also, like life, have their own problems" [1].
The key to this approach lies in understanding sensorimotor contingencies—the lawful relationships between an agent's actions and the resulting changes in its sensory input [7, 8].
Consider a simple example: when you turn your head to the left, the visual scene shifts to the right in a predictable way. These patterns of change aren't random—they follow laws determined by the structure of your body and the environment. Mastering these laws isn't about building an internal model of the world; it's about developing skillful engagement with it [8].
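To make "lawful relationship" concrete, here is a minimal sketch of a sensorimotor contingency in a toy setting: a single-sensor agent facing a fixed light, whose only action is turning. The scenario, function names, and parameter values are illustrative assumptions, not taken from the research described here.

```python
import math

# Toy setup (illustrative assumption): one agent, one light-intensity sensor,
# and a single action, turning. The sensorimotor contingency is the lawful way
# the sensor reading changes as a consequence of the agent's own turning.

LIGHT_ANGLE = 0.0  # direction of the light source, in radians

def sensor_reading(heading):
    """Light intensity falls off with the angle between heading and the light."""
    return math.cos(heading - LIGHT_ANGLE)

def turn(heading, motor_command, dt=0.1):
    """The only action available: rotate the agent's heading."""
    return heading + motor_command * dt

heading = 0.5
for step in range(5):
    before = sensor_reading(heading)
    heading = turn(heading, motor_command=-1.0)  # turn toward the light
    after = sensor_reading(heading)
    # The change in sensation is predictable from the action taken:
    print(f"step {step}: sensation {before:.3f} -> {after:.3f}")
```

Mastering this kind of regularity means the agent can anticipate how its own movements will transform what it senses, without ever holding an internal picture of the light itself.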
As researchers O'Regan and Noë explain in their seminal paper, seeing isn't something that happens in our brains—it's something we do. It's "a kind of give-and-take between you and the environment" [8]. The "feel" of seeing comes from our mastery of the sensorimotor contingencies specific to vision, just as the "feel" of driving a Porsche comes from mastering how it handles [8].
To test these ideas, a research team created simulated robots controlled by what they call an Iterative Deformable Sensorimotor Medium (IDSM) [1, 4]. Rather than being programmed for specific tasks, these robots were designed to spontaneously develop their own sensorimotor habits through interaction with their environment.
The researchers designed robots with self-sustaining but precarious patterns of sensorimotor activity. Like a living organism that must eat to maintain its energy, these robotic systems needed to continually engage in certain activities to maintain their functional integrity [1].
The robots weren't programmed with specific behaviors. Instead, they had the capacity to develop self-reinforcing sensorimotor habits—patterns of activity that become more likely to recur simply because they've been performed before [1, 4].
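A rough sense of how self-reinforcement can produce a habit is given by the toy sketch below. It is a deliberate simplification, not the IDSM itself: it just keeps a slowly decaying weight for each pairing of sensory state and action, and makes a pairing more likely to recur once it has been performed. All names and parameter values are assumptions for illustration.

```python
import random
from collections import defaultdict

ACTIONS = ["left", "right", "stay"]
weights = defaultdict(float)   # weight of each (sensory_state, action) pairing
DECAY = 0.98                   # unused pairings slowly fade: habits are precarious
REINFORCEMENT = 1.0            # performing a pairing makes it more likely to recur

def choose_action(state):
    """Actions performed before in this sensory state are favoured."""
    scores = [weights[(state, a)] + 0.1 for a in ACTIONS]  # small bonus keeps exploration alive
    return random.choices(ACTIONS, weights=scores)[0]

def step(state):
    action = choose_action(state)
    for key in list(weights):                  # everything decays a little...
        weights[key] *= DECAY
    weights[(state, action)] += REINFORCEMENT  # ...and the chosen pairing is reinforced
    return action

# After many steps in the same sensory state, one action comes to dominate:
# a "habit" has formed, not because it solves a task set from outside, but
# simply because it has been performed before.
for _ in range(200):
    step(state="light_ahead")
print(max(weights, key=weights.get))
```

The decay term matters as much as the reinforcement: stop enacting a habit and it fades, which is what makes these patterns precarious rather than fixed.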
The team tested robots with different sensory capabilities, particularly vision and audition, to see how the specific sensorimotor contingencies of each modality would shape the habits that emerged [1].
Without any central controller directing the robots, the researchers observed different habits spontaneously organizing into what they called an "ecology of habits" within each robot [1].
The results were striking. The robots didn't just produce random behaviors—they developed structured habits that reflected their sensorimotor capabilities.
| Sensory Modality | Characteristic Habit Patterns | Similarity Between Habits |
|---|---|---|
| Vision | Spatially-oriented, object-tracking | High similarity to each other |
| Audition | Temporally-structured, sound-responsive | High similarity to each other |
| Cross-Modal | Distinct patterns for each modality | Low vision-audition similarity |
Most remarkably, the form of the emergent habits was tightly constrained by the robot's sensory modality. Robots with vision developed habits that were similar to each other but distinct from the habits of robots with audition [1, 4]. This suggests that the sensorimotor contingencies specific to each modality—the different "rules" governing how vision and audition change with movement—directly shape the kinds of habits that emerge.
| Habit Property | Description | Significance |
|---|---|---|
| Self-sustaining | Habits maintain themselves through repetition | Creates behavioral stability |
| Precarious | Habits can be disrupted or lost | Allows for adaptation and change |
| Modality-constrained | Habit form depends on sensory capabilities | Links body to mind |
| Ecologically organized | Habits relate to each other systematically | Emergence of behavioral coherence |
The robots didn't just accumulate random behaviors—their habits organized into coherent ecologies where different habits supported and influenced each other, much like how our various skills and tendencies form a cohesive personality rather than a random collection of traits.
What does it take to create these unconventional robots? The tools look quite different from traditional AI research.
| Research Tool | Function | Enactive Perspective |
|---|---|---|
| Iterative Deformable Sensorimotor Medium (IDSM) | Generates self-sustaining sensorimotor patterns | Creates foundation for habit formation |
| Simulation Environments | Enable testing of agent-environment dynamics | Recognizes the inseparability of agent and world |
| Precarious Organization Designs | Systems that must act to maintain themselves | Source of intrinsic motivation and concerns |
| Habit Formation Mechanisms | Allow development of self-reinforcing patterns | Basis for behavioral consistency and learning |
What's notably absent from this toolkit? Traditional programming of specific behaviors, detailed internal world models, and predefined problem spaces. Instead, the focus is on creating conditions where structured behavior can emerge spontaneously from the interaction between the robot's organization and its environment.
The implications of enactive robotics extend far beyond academic debates. This approach may be essential for creating robots that can:

- Acquire new behaviors without explicit programming
- Adapt to unfamiliar situations rather than following scripts
- Interact flexibly with humans and complex environments
- Develop concerns and behavior that go beyond their programming
As one research team notes, by "abandoning problem solving (or at least putting it down for a time), other useful explanatory targets and ways to explain minds are given space to emerge" [1]. This isn't just about building better robots—it's about understanding the nature of mind itself.
The enactive approach also bridges traditionally separate disciplines. As a recent paper notes, this perspective "contributes to contemporary debates in the philosophy of habit by challenging the widespread tendency to conceive of habits primarily as automatic, rigid responses to environmental cues". It offers a richer conception of habit as the foundation of intelligence.
Enactive robotics represents more than just a technical shift—it's a fundamental rethinking of what artificial intelligence could be. By moving beyond the problem-solving box, researchers are creating robots that don't just solve puzzles but develop their own concerns, habits, and ways of engaging with the world.
These robots demonstrate how intelligence might emerge from the self-sustaining patterns of interaction between an agent and its environment, constrained by the specific sensorimotor contingencies of its body. They suggest a future where robots aren't just sophisticated tools but autonomous agents with their own ways of being.
As this research progresses, we may find that the most intelligent machines aren't those that can solve the most problems, but those that have the most interesting problems of their own. The enactive approach reminds us that intelligence isn't just about having the right answers—it's about having questions worth asking.