The rise of artificial intelligence is transforming our professional and personal lives. From automated email responses to complex decision-making algorithms, AI is seeping into our screens and minds. Set amid the unraveling political landscape of 2025, it increasingly feels as if we are living in a science fiction scenario. Among the uncomfortable questions this raises is how much longer we will be producers and screenwriters, rather than mere actors who say what has been written and do what they are told. The ongoing, seemingly relentless integration of AI has introduced a subtle but significant risk: agency decay.
What is happening is not a dystopian takeover by machines; it is the gradual erosion of our capacity and volition to make decisions autonomously and exert control, on screen and offline. Agency, fundamentally, is the ability to act intentionally. It is the sense that we are the initiators of our actions, capable of influencing our environment and shaping outcomes. In the context of AI, it means preserving the power to initiate and execute an act independently, while deliberately harnessing the power of technology. It is a balancing act between using AI as a tool and becoming dependent on it.
The 4 Stages Of Agency Decay Amid AI
Our interaction with AI often follows a predictable pattern, a progression that can lead to diminished agency if we’re not attentive. Here’s a breakdown of the four stages:
1. Exploration: Initial Engagement
This stage marks our first encounters with AI. We’re driven by curiosity, experimenting with new tools and exploring their potential applications without a full understanding of their mechanisms.
- Characterized by: Low ability and low affinity. We’re interested in AI but lack the expertise or understanding to use it effectively.
2. Integration: Growing Familiarity
As we gain experience, we begin to integrate AI into our daily workflows. We recognize its efficiency gains and start relying on it for routine tasks.
- Characterized by: Increasing ability and increasing affinity. We’re developing the skills to use AI and appreciating its benefits.
3. Reliance: Developing Dependence
AI transitions from being a helpful tool to a critical component of our operations. We depend on it for decision support and task execution, sometimes without critically evaluating its outputs.
- Characterized by: Strong technological ability, but a potential decrease in independent thought. We become proficient in using AI, but our growing reliance subtly diminishes our critical thinking and problem-solving skills, and our appetite to push our intellectual boundaries rather than asking ChatGPT to do so.
4. Dependency: Diminished Autonomy
We find ourselves struggling to perform tasks, such as writing a text or code, or to make decisions without AI (when did you last open Netflix with a clear idea of the movie you wanted to watch?), resulting in a decrease in our sense of agency. We’ve become overly reliant, losing the capacity and desire to act autonomously.
- Characterized by: High affinity, but low ability to function without AI. We are comfortable with AI’s convenience but have lost the skills and confidence to operate independently.
This progression illustrates a potential slide from empowered use to unhealthy dependence.
Agency decay isn’t a sudden event; it’s a gradual process, often unnoticed until it’s deeply entrenched. It manifests in several interconnected ways. We increasingly outsource cognitive tasks to AI, from memory recall to complex analysis, a pattern known as cognitive offloading. While this can enhance efficiency, it may also lead to an atrophy of our cognitive abilities. We become more efficient, but if AI takes over tasks closely tied to our professional pride and self-identity, this delegation may leave us less satisfied with our work.
At the same time as our ability decreases, AI models become more sophisticated while remaining prone to hallucinations. The “black box” nature of many AI systems, in which the decision-making process is opaque, combined with our dwindling desire and capacity for fact-checking, is dangerous and erodes trust.
Mitigating the risk of agency decay means counteracting it, individually and institutionally.
Curating Agency Amid AI Use: The Four A’s
To mitigate the risk of acute agency decay, we must proactively manage AI integration and navigate the aforementioned stages effectively. The key to this proactive management lies in four A’s:
- Awareness: Cultivating awareness of both AI’s capabilities and limitations is the first step. Individuals and organizations must foster a deep understanding of how AI works, its potential impact, and the importance of maintaining human oversight. This awareness should extend to the ethical considerations surrounding AI, promoting responsible development and deployment.
- Appreciation: Building on awareness, it becomes possible to develop appreciation for the value of both natural and artificial intelligence. Moving beyond a binary either-or understanding, this means recognizing that AI is a tool to augment, not replace, human capabilities. Fostering a culture of collaboration between humans and AI can lead to more effective problem-solving and innovation.
- Acceptance: Acceptance involves embracing AI as a fundamental part of the modern landscape. This doesn’t mean blindly adopting every new form this technology takes, but rather strategically integrating AI into personal decisions that are cumbersome and time-consuming, such as shopping, or into inefficient workflows where it can provide real benefit. Acceptance also entails adapting organizational structures and roles to optimize human-AI collaboration, with careful attention to human wellbeing.
- Accountability: Finally, accountability is crucial for maintaining agency. Organizations must establish clear lines of responsibility for AI systems, ensuring that humans remain accountable for decisions and actions, even when AI is involved. This includes developing robust governance frameworks, auditing AI systems for bias and errors, and implementing mechanisms for redress when things go wrong.
Mastering AI As A Means To An End
Understanding the dynamics of agency in the AI age involves recognizing the slippery slope from experimentation to dependency, and proactively cultivating our own agency. AI is a means to an end, not an end in itself. If it can make us happier and our coexistence with nature more sustainable (which is a big IF considering the energy footprint of current models), then we have cracked the code. To master this balance, we need to curate hybrid intelligence as a bulwark against agency decay.