We have entered the next stage of the accelerating journey toward a hybrid world. Artificial intelligence systems are transitioning from passive tools that wait for our commands to autonomous actors that can make decisions and take actions in our world. This isn’t just a technical evolution; it’s a transformation that demands that we rethink how we align both natural and artificial intelligences in our increasingly hybrid digital-physical reality.
The question isn’t whether AI will influence human behavior — it already does. From recommendation algorithms shaping our news consumption to AI assistants scheduling our meetings, these systems are becoming active participants in our decision-making processes. But as AI agents gain the ability to act independently, form relationships and operate across multiple domains of our lives, their influence becomes far more consequential. We’re moving from AI that responds to us to AI that anticipates, suggests and sometimes acts on our behalf — whether we’re consciously aware of it or not.
Hijacked Agency: Why Double Alignment Matters
Traditional AI alignment focused on making sure artificial systems do what we want them to do. But as AI becomes more autonomous and socially integrated, we face a more complex challenge: ensuring alignment works in both directions. We need AI systems aligned with human values, and we need humans equipped to maintain their agency and values in AI-rich environments.
This double alignment challenge is urgent because we’re amplifying everything, including our misalignments, at a hybrid scale. When AI systems learn from human behavior online, they absorb not just our knowledge but our biases, conflicts and dysfunctions. The old programming principle “garbage in, garbage out” has evolved into something deeper: “values in, values out.” The values embedded in our data, systems, and interactions shape what AI becomes, which in turn shapes what we become.
Consider how social media algorithms influence our behavior, attention and beliefs. Now imagine AI agents that can form intensive, long-term relationships with users, make autonomous decisions and operate across multiple aspects of our lives. Without proper alignment, both technical and human, we risk creating systems that optimize for engagement over well-being, efficiency over wisdom, or short-term gains over long-term flourishing. Remember the paperclip-maximizer thought experiment?
Building AI That Actually Helps Humanity: ProSocial AI
This is where prosocial AI comes in: artificial intelligence systems designed not just to be helpful, but to actively promote human and planetary well-being. Prosocial AI goes beyond following commands to consider broader principles: user well-being, long-term flourishing and societal norms. It embodies an ethical codex of care, respecting user autonomy while serving as a complement to, not a surrogate for, a flourishing human life.
But building prosocial AI isn’t just a technical challenge; it’s a human endeavor. We can’t program our way to better outcomes if humans lose their agency in AI-rich environments: the capacity and volition to make meaningful, critically reasoned choices even as AI becomes more prevalent and sophisticated.
Hybrid Intelligence Needs Double Literacy
Maintaining human agency in an AI world depends on hybrid intelligence: the seamless collaboration between natural and artificial intelligences that leverages the strengths of both. This isn’t about humans versus machines, but about humans working with machines in ways that enhance our capabilities.
Hybrid intelligence requires double literacy: proficiency in both traditional human skills and AI collaboration skills. Just as the printing press required literacy to be truly democratizing, the AI age requires us to understand both how to work with AI systems and how to maintain our distinctly human contributions.
Double literacy means understanding how AI systems work, recognizing their limitations and biases, knowing when to trust or question their outputs, and maintaining skills that complement rather than compete with artificial intelligence. It means being able to prompt AI effectively while also knowing when to step away from AI assistance entirely.
Double Alignment In Practice
Consider a student using AI tutoring systems. Without double literacy, they might become overly dependent on AI explanations, losing the struggle and confusion that often lead to deeper learning. With double literacy, they use AI as a cognitive sparring partner while building their own mental muscles. Rather than outsourcing their thinking, they are building their analytical skills.
Or think about professionals using AI for decision-making. Without deliberate agency amid AI, they might defer too readily to algorithmic recommendations. With proper agency, they integrate AI insights with human judgment, contextual knowledge and ethical considerations.
The stakes are particularly high for social AI agents that can form emotional bonds with users. Research by teams at Google DeepMind shows how these relationships introduce new risks of emotional harm, manipulation and dependency. Prosocial AI can counteract that trend, with design tailored to strengthen rather than substitute for human relationships and personal growth.
Transforming Society Through Systematic AI Investment
Individual mindsets matter. But the ongoing transition requires large-scale change. We need educational systems that teach double literacy alongside traditional subjects. We need workplace policies that preserve human agency in AI-augmented environments. We need social platforms designed for human flourishing rather than just engagement. And all of this must be undertaken with a holistic understanding of the interplay between people and planet. Prosocial AI means pro-planetary AI, because only if the latter thrives can the former survive.
Technical AI safety and human agency aren’t separate problems; they’re interconnected challenges that must be addressed together. The future isn’t about choosing between natural intelligence and artificial intelligence; it’s about creating hybrid systems where both can thrive with planetary dignity.
Your 4-Step Guide To Thrive Amid AI
Understanding the double alignment challenge is just the beginning. Here’s a practical framework, the A-Frame, for moving toward prosocial AI and stronger human agency:
Awareness: Start by honestly assessing your current relationship with AI. Where do you rely on AI systems? When do you feel your agency is enhanced versus diminished? Notice how AI influences your attention, decisions and relationships.
Appreciation: Recognize both the potential and the genuine risks of our AI-hybrid future. Appreciate that building beneficial AI isn’t just about better algorithms; it requires active human participation and continuous learning.
Acceptance: Accept that this transition requires effort from everyone. We can’t passively consume AI services and expect optimal outcomes. The quality of our AI future depends on our engagement with shaping it.
Accountability: Take responsibility for developing your double literacy skills. Learn how AI systems work, practice using them as thinking partners rather than replacements, and maintain relationships and skills that keep you grounded in human experience. Advocate for prosocial AI principles in your workplace and community.
The agentic turn in AI isn’t happening to us; it’s happening with and because of us. Our choices about how we develop, deploy and interact with AI systems today determine whether we create a future that is human and humane. The time to pick up this challenge is now, while we still have the opportunity to shape the trajectory.