From Web 2.0 to Agent 3.0: How We Extend Ourselves Next
Some people double-check everything ChatGPT writes; others accept it at face value. Some employees trust their IT department to provide AI agents; others go and build/procure them on their own.
Since its mainstream breakthrough roughly three years ago, Generative AI has proven to be a major disruptor and is well on its way to fundamentally reshaping our daily lives. In daily conversations with friends, family, enterprises, and startups, I find broad agreement that it's disruptive; the divides appear around roles, responsibility, and risk.
With this topic keeping my mind busy, I decided to take some time and dive deeper. That finally happened a few weeks ago during my vacation in Greece – heading to the beach armed with a few podcasts, and with Stiglitz's The Road to Freedom alongside Bostrom's evergreen Superintelligence in my backpack.

As per usual, to organize and clarify my thoughts, I’ve noted them down in a blog post, outlining historical context and useful mental frameworks for AI Agent creation, shaping and management. I hope you enjoy reading it as much as I enjoyed writing it.
And if you’re wondering whether this article was written by a GPT — I have to both delight and disappoint you: all the ideas are mine. Copilot only helped with a quick spelling check and a bit of formatting.
Extension of self
Extension-of-self 1.0
Looking back at history, the extension-of-self was possible only through others — by force in the case of monarchs, through books and lectures by philosophers, or through teachings and rituals by religious leaders. In those times, only a tiny fraction of humanity — perhaps a few thousand worldwide — had the means to project their will beyond themselves.
With the French Revolution and the independence of the United States, the possibility to extend oneself to a wider population came along. The emerging middle class could rise to leadership, build companies, and outsource tasks in private life such as home cleaning or childcare — an early, practical form of self-extension through others. We’ve seen individuals like John D. Rockefeller, through Standard Oil, build influential companies by sharing a collective vision, shaping values, and inspiring action. Their influence rose to rival that of politicians.
Later, with the advent of mass media and the television screen, athletes and artists such as Michael Jordan and Madonna became global icons — shaping how people live, what they listen to, how they dress, and even the slang they use.
Yet even though the freedom to rise and influence expanded to a wider population — and prolific figures emerged who shaped global culture — the underlying principle remained the same. The extension-of-self was limited to an individual creating the input and taking care of the output. Rockefeller had to use his company and the newspapers to influence others and delegate tasks himself, while the likes of Madonna and Michael Jordan had to put in the performances that were then spread through radio and TV.
Given the nature of self-extension, by the late 20th century, perhaps a few hundred thousand individuals worldwide could truly scale their presence through media and industry and project their influence, identity, and actions beyond their immediate presence. Beyond exceptional figures like Rockefeller, Jordan, and Madonna, the input and output of any given task remained confined to the individual’s own
domain — for the average person, the reach of influence ended at the boundaries of their inner circle of friends, family and coworkers. Let’s call this period Extension-of-Self 1.0. A drastic change in this concept took place in 2004 with wider adoption of social media and the internet — Extension-of-Self 2.0.
Extension-of-self 2.0
In this era, an average Joe had a much better shot at doing what was previously reserved for a few, such as Michael Jordan. To give an example, Chiara Ferragni grew from a student fashion blogger into one of the world's leading style figures with over 30 million Instagram followers. PewDiePie (Felix Kjellberg), a Swedish gamer, started with bedroom videos and built an audience of more than 110 million subscribers.
Social media superstars have become proficient in conveying lifestyle, skills, or values, and then scaling them endlessly through digital platforms, thereby influencing a wider population towards a certain cause, belief and/or action. To put it in perspective, about 0.4% of Instagram users — roughly 8 million people — have over a million followers, and that’s on Instagram alone.
While 8+ million influencers on Instagram is a lot, it remains fairly small compared to a global population of more than 8 billion. As opposed to global influencers, the impact of social media on the average Joe's influence is much less profound. While it allows him to share parts of his life with friends and family through his favourite apps, his self-extension — the ability to project one's influence, identity, and actions beyond immediate presence — remains confined to his own network.
As fascinating and engaging as self-extension through social media and apps is, it’s now being transformed by something far greater: Extension-of-Self 3.0.
Extension-of-self 3.0
In this new age — born with the emergence of Generative AI, embodied in both digital and physical agents such as robots — we’re witnessing not merely an evolution, but a reset in how individuals extend themselves. These AI-driven agents carry out tasks on an individual’s behalf, ranging from the mundane to the deeply personal.
Unlike in previous eras, where only a few, such as monarchs, philosophers, or social media influencers, managed to reach that level of self-extension, this shift will penetrate far deeper into everyday life, enabling personal and professional expansion with great ease and on a vastly greater scale. Moreover, due to the widespread adoption of agents, self-extension in 3.0 will not be a choice but a necessity — a prerequisite for belonging to the social fabric.
Just as the rise of social media demanded a new kind of digital literacy, the rise of agents will demand a new competence: knowing how to design, shape, and manage them. This applies not only when acting as the owner or creator of agents, but also when interacting with the agents of others — whether those agents represent another individual or an entire organization. Fitting into this world will require an entirely new skill set. But we’ll return to that challenge later. In this new
landscape, two categories of agents emerge.
Mirror Agents and Shadow Agents
Mirror Agents
These are the extended self. They represent an individual directly — filing a tax report with the authorities, putting a child to sleep with a song, or autonomously engaging with customers as part of the sales and support process. If they make a mistake, accountability rests with the person who created them, in both professional and personal settings. Essentially, the agent mirrors and represents its creator's intent, voice, and values.
Shadow Agents
These are the detached self. They operate with the same capabilities as Mirror Agents but without direct representation and accountability. Instead of extending the individual in name, they act as independent entities — running anonymous online accounts to promote content or causes on various forums, running a micro e-commerce business, taking over a work process such as insurance-claim evaluation, or acting as a security robot patrolling the backyard of a house. These agents work, adapt, and interact with the world as entities of their own.
The key twist is that both types can be created by the same individual. One agent might send emails in its creator's style as a Mirror Agent, while another might independently build followers and revenue streams under a separate identity as a Shadow Agent. Same creator, same platform — completely different risks and responsibilities, which differ further depending on whether the agent is designed to evolve autonomously or not. This is a critical point I want to stress: an agent's capacity for self-evolution is one of the most consequential design decisions in this context.
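To make the Mirror/Shadow distinction concrete, here is a minimal Python sketch of how the two agent types, their accountability, and their self-evolution setting could be modeled. All names (`AgentProfile`, `risk_tier`, the high/medium/low tiers) are hypothetical illustrations, not an established taxonomy:

```python
from dataclasses import dataclass
from enum import Enum

class AgentKind(Enum):
    MIRROR = "mirror"   # acts in its creator's name
    SHADOW = "shadow"   # acts as a detached entity

@dataclass
class AgentProfile:
    name: str
    kind: AgentKind
    self_evolving: bool = False  # may the agent adapt its own behaviour over time?

    @property
    def accountable_party(self) -> str:
        # Mirror agents always trace back to their creator; Shadow agents
        # act under a separate identity, though liability may still apply.
        return "creator" if self.kind is AgentKind.MIRROR else "detached identity"

    def risk_tier(self) -> str:
        # Toy heuristic: self-evolving Shadow agents carry the most risk.
        if self.kind is AgentKind.SHADOW and self.self_evolving:
            return "high"
        if self.self_evolving or self.kind is AgentKind.SHADOW:
            return "medium"
        return "low"

# The same creator can own both kinds side by side.
email_twin = AgentProfile("email-twin", AgentKind.MIRROR)
forum_promoter = AgentProfile("forum-promoter", AgentKind.SHADOW, self_evolving=True)

print(email_twin.accountable_party, email_twin.risk_tier())          # creator low
print(forum_promoter.accountable_party, forum_promoter.risk_tier())  # detached identity high
```

The point of even a toy model like this is that kind and self-evolution are explicit, reviewable choices rather than emergent accidents.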
Whether an agent evolves autonomously depends on the creator's choice and, above all, on how it is designed. A curated agent such as Microsoft Copilot or OpenAI's ChatGPT typically comes with built-in guardrails, with providers such as Microsoft and OpenAI constraining its evolution. A custom-built agent, on the other hand, is free to grow and adapt within whatever guardrails its creator sets.
To paint the picture, if an individual decided to let an agent autonomously evolve, he or she might end up operating with 15 distinct agents, of which seven behave differently every few months as they learn and adapt due to their autonomous setting. It is a bit like parenting 15 children who all develop — and occasionally misbehave — at ten times the normal speed.
Beyond autonomy in evolution and the number of agents managed by a given individual, the impact, risks and responsibilities will evolve once generative AI matures in robotics. Picture an elderly support agent embedded in a household robot — reminding a parent to take medication, monitoring vital signs, offering companionship through conversation, and calling for help if something goes wrong.

Shift in Skills & Responsibilities
Individuals will not just get to choose and benefit from agents — they will increasingly need to shape them, manage them, and be accountable for what those agents do on their behalf. With this transformation comes a fundamental change in responsibility and the need for new skills and knowledge.
Each era of self-extension has demanded a different skillset:
- Era 1.0: Monarchs extended themselves through politics and military strategy, while religious leaders reached believers through storytelling and ritual. Artists and athletes leveraged centralized media such as television to showcase their talent. Influence rested on social role, skill, and narrative.
- Era 2.0: Social media influencers extended themselves through charisma, creativity, and relentless content production and broadcasting through social media. Success required spotting trends, creating content, and scaling it to global audiences.
- Era 3.0: The focus shifts again. Every individual gains influence not over people but over agents, and must now define clear objectives, select or build the right agents, and manage them effectively and responsibly. Most agents will arrive as curated offerings from large providers, which means the core skills will not be technical coding but goal-setting, boundary-shaping, and agent management.
In other words: Extension-of-Self 3.0 will depend less on inheriting influence or producing content and more on being the orchestrator: knowing what to delegate to agents, how to guide them, and how to take responsibility for their outcomes.
Every individual — whether in a company or at home — must recognize that the agent they choose and the way they configure it carry responsibility. The agent must align with regulation and corporate guidance and follow responsible-AI standards. This should become as routine as choosing an insurance plan or signing an employment contract. Key elements in doing so will be choosing, shaping, and managing the agent.
Choosing
This is where the distinction between Mirror and Shadow becomes decisive.
- A Mirror Agent is self-extended. It carries intent, tone, and values. When it acts, it feels as though the individual acted.
- A Shadow Agent is a process detached from self. It is its own entity and does not necessarily mirror one's personality, values, or intent. Instead, it scales the system.
In personal life, a parent might create a Mirror Agent to help their child with homework. It mirrors their teaching style and values, keeping learning fun but structured. At the same time, they might deploy a Shadow Agent that boosts a favourite musician or football player by streaming songs and posting comments. The first carries personal accountability (if the homework agent teaches nonsense, that’s on the parent). The second operates as a detached entity — its impact is real,
but it isn’t tied back to the parent.
In professional life, a salesperson might use a Mirror Agent to reply to client emails or post LinkedIn updates in their own style. A medical doctor could use a Mirror Agent to automate appointment reminders or draft follow-up notes for patients in their own professional voice. In parallel, the salesperson could set up a Shadow Agent to act as a product evangelist in online forums, promoting a new product, or to streamline repetitive internal tasks like warranty claims review as part of a wider process. The doctor, meanwhile, might use a Shadow Agent to double-check medical results across the department — spotting mistakes, flagging anomalies, and improving accuracy without the findings being traced directly back to them.
Shaping
Once chosen, every agent must be shaped. The core challenge isn't building agents but achieving performance quality, i.e., deciding how creative, assertive, or cautious they should be. The real art lies in defining boundaries — deciding how autonomous the agent should be, what tone it adopts, and where its influence begins and ends. This is anything but easy: a glance at LLM-selection surveys shows that these are among the key factors engineers weigh when building and adopting agents.

Agents can be autonomous or confined, creative or rigid, amicable or factual — and each configuration reflects the intent of its creator.
- A journalist might fine-tune a writing agent to be exploratory yet restrained in opinion.
- A financial analyst might enforce strict factual accuracy and narrow scope.
- A designer might encourage playfulness and open-ended ideation.
It’s less about prompt engineering and more about ethics engineering — aligning performance with values, purpose, and context.
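The shaping choices above can be sketched as a simple configuration object. This is a hypothetical illustration — the field names (`creativity`, `autonomy`, `tone`, `scope`) and the preset values are assumptions for the sake of the example, not any real product's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ShapingProfile:
    creativity: float   # 0.0 = rigid, 1.0 = exploratory (think sampling temperature)
    autonomy: str       # "confined" | "supervised" | "autonomous"
    tone: str           # e.g. "factual", "amicable", "playful"
    scope: tuple        # areas the agent may act on

    def validate(self) -> None:
        # Guardrails on the guardrails: reject malformed configurations early.
        assert 0.0 <= self.creativity <= 1.0, "creativity must be in [0, 1]"
        assert self.autonomy in {"confined", "supervised", "autonomous"}

# Presets echoing the three examples above.
journalist = ShapingProfile(0.7, "supervised", "factual", ("research", "drafting"))
analyst    = ShapingProfile(0.1, "confined",   "factual", ("reporting",))
designer   = ShapingProfile(0.9, "supervised", "playful", ("ideation",))

for profile in (journalist, analyst, designer):
    profile.validate()
```

Making the profile frozen (immutable) reflects the idea that an agent's boundaries should be changed deliberately, by issuing a new configuration, rather than drifting silently at runtime.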
Managing
Much like managing a calendar, overseeing finances, or coordinating projects with colleagues, operating agents will soon become part of everyday life. The average manager today oversees four people; before long, every individual will effectively become a manager of 15 or more — only most of them won’t be human. Every one of us will become not just a manager of people, but of something both closer and more distant at the same time — our own digital and physical extension.
According to the World Economic Forum’s 2025 Responsible AI Playbook,
over 80% of organizations say they follow ethical AI principles, yet only 15% actively track outcomes. In personal life, that number likely drops close to zero, with individuals relying instead on informal habits and cultural intuition.
Managing a constellation of agents — each interpreting goals, values, and data — will make management far more complex than managing people ever was. The complexity of governing agents stems not only from accounting for the direct impact of a given agent but also from the interdependencies between individuals' Mirror and Shadow agents across digital and physical space. Resolving a single task may involve the collaboration of several Mirror and Shadow agents, as well as a person.

The system is becoming increasingly complex, and frameworks for agent–agent and person–agent interaction are still emerging.
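One small, concrete piece of such a framework could be tracking which agents depend on one another, so that when one misbehaves its creator can see the blast radius. A minimal sketch, with an entirely hypothetical dependency map:

```python
from collections import defaultdict, deque

# Hypothetical example: each key depends on the outputs of its listed values.
depends_on = {
    "tax-mirror":      ["finance-shadow"],   # a Mirror agent consuming a Shadow's data
    "finance-shadow":  ["bank-feed"],
    "homework-mirror": [],
    "claims-shadow":   ["finance-shadow"],
}

def impacted_by(failed: str) -> set:
    """Return every agent that (transitively) consumes the failed agent's output."""
    # Invert the map: who consumes whom?
    consumers = defaultdict(list)
    for agent, deps in depends_on.items():
        for dep in deps:
            consumers[dep].append(agent)
    # Breadth-first walk over downstream consumers.
    seen, queue = set(), deque([failed])
    while queue:
        node = queue.popleft()
        for consumer in consumers[node]:
            if consumer not in seen:
                seen.add(consumer)
                queue.append(consumer)
    return seen

print(impacted_by("finance-shadow"))  # {'tax-mirror', 'claims-shadow'}
```

Even four agents already show the effect the text describes: a fault in one Shadow agent ripples into a Mirror agent that acts, accountably, in its creator's name.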
Managing through storytelling
Understanding signals from dozens of proprietary agents across performance, regulation, and ethics is impossible without dedicated technology acting as the interpreter between human intent and machine execution.
Most of today’s management tools still live deep within IT dashboards — numerical, dry, and foreign to anyone who doesn’t speak data. But new paradigms are emerging.
One of them is what designers are beginning to call Mythic Design: the art of turning systems into stories people can understand. You can already see it taking shape in the Microsoft ecosystem. Copilot itself is a perfect metaphor — not a tool,
but a teammate, a presence beside you that listens, interprets, and acts. Around it, new role-based agents are appearing — a Researcher that explores sources, a Planner that organizes priorities, an Analyst that draws conclusions, even a Coach that advises on communication or tone. Each of them carries a familiar human archetype, turning the interface into a cast of collaborators rather than a grid of menus. These are early signs of Mythic Design: systems that speak human, not data.
In the coming years, this idea will extend far beyond productivity apps. Managing agents will feel less like analytics and more like storytelling — interpreting signals, guiding actions, and ensuring that what we set in motion still serves its creator’s intent.
Final words
Era 2.0 flooded us with information. Before the internet, an average person managed perhaps 10–20 tasks a day and communicated through just a few channels. With social media and digital workspaces, that number has exploded to 90–120 micro-tasks daily across ten or more platforms. We scroll, reply, post, schedule, approve, and purchase — an ongoing dialogue between our intent and the digital systems around us.
With Extension-of-Self 3.0 emerging, the number of completed tasks reaches an entirely new level, and many of them are conducted by agents. Choosing between a Mirror and a Shadow Agent is anything but simple. Shaping these agents to deliver the right outcomes requires guidance, and managing fifteen or more across work and personal life can, without the right skills, tools, and processes, become a nightmare.
Extension-of-Self 3.0 is inevitable, and it spans both digital and physical worlds — worlds where each of us is an agent creator, manager, and steward. And hopefully, it will not only strengthen and protect fundamental freedoms for those who already have them, but also extend them to those who never have had them.
Further reading: Responsible AI, Stiglitz's The Road to Freedom, and Bostrom's Superintelligence — as well as my earlier reflections on GEO and User Experience shifts; and a Cambodia/Vietnam meditation on AI's real-world stakes and governance