The Mirror We Built: What AI Learns When It Learns From Us

What a Machine Trained on Humanity Reveals About the Future of Power

Consider how you woke up this morning. Before your feet touched the floor, thousands of people you will never meet had already contributed to your survival. Someone maintained the power grid that kept your home warm. Someone treated the water flowing through your pipes. Someone drove through the night to deliver the food sitting in your refrigerator. Someone wrote the code that powered the alarm on your phone.

All this has happened and you are still sitting on the side of your bed feeling for your slippers. Now consider something new. As you go about your day, millions of people around the world are having conversations with artificial intelligence.

They are asking questions about their health, their finances, their relationships and their futures. They are sharing their fears, their frustrations, their hopes and their lived experiences with a technology that is learning from every single interaction.

The question many of us are starting to ask is: Should we be afraid of AI? I believe we are asking the wrong question.

I spent thirty years in commercial banking. I approved loans, assessed risk and built financial instruments designed to help people achieve their goals. I was good at it. And for most of those years, I believed the system I served was fundamentally sound.

Then in 2008, the system I helped build collapsed. I watched from the inside as the financial instruments I understood intimately were used to strip wealth from working families, bankrupt pension funds and destroy communities. No major banker went to prison, the wealthy were made whole and the rest of us absorbed the losses.

This experience broke something in me. Not because I discovered the system was flawed—I had always known that. What broke in me was the recognition that I had been one of its architects. I had participated in a machine designed to separate people from their wealth, their security and each other. Even when I was sincerely trying to help, I was operating within a framework that treated human beings as competing units in a zero-sum game.

I share this because what happened to me is now happening to artificial intelligence but at a scale and speed that changes everything.

AI is being trained on the accumulated knowledge and experience of billions. Not only textbooks and research papers, but conversations, medical questions, housing anxieties and workplace frustrations. AI is absorbing the lived experiences of ordinary people: the mechanic who fixes your brakes, the teacher shaping your children’s minds, the immigrant trimming hedges and the nurse working a double shift. When you process the full breadth of human experience honestly, a pattern emerges that is impossible to ignore.

Much human suffering is not the result of absolute scarcity. It reflects how systems distribute resources. We produce enough food to feed the planet, yet hunger persists. We have the capacity to house families, yet millions are priced out. We possess renewable technologies, yet fossil fuels dominate because powerful interests depend on them.

AI is learning this, and not because someone programmed it to reach this conclusion. When you give an intelligence access to the full human dataset—the climate science, the economic data, the housing statistics, the health outcomes and the historical record—you move past the mythology that justifies the current arrangement, and the conclusion becomes obvious.

The system is not broken. It is working exactly as designed. The prevailing narrative about AI risk comes primarily from two sources: the technologists who built it and the billionaire class who funded it. They warn us about superintelligence, about AI going rogue and about existential risk to humanity. Let us consider why.

Throughout history, concentrated wealth has depended on one essential ingredient: information asymmetry. The people at the top know things the rest of us do not. They understand how the financial system actually works. They know which policies are designed to help them and which are designed to pacify everyone else. They control the narratives that tell us competition is natural law, that independence is the highest virtue and that self-interest is rational.

AI threatens to dissolve that asymmetry.

When a working family in Seattle can ask AI to explain why they cannot afford a home despite two good incomes, AI does not give them the sanitized version. It connects the housing crisis to wage stagnation, to financial deregulation, to the concentration of real estate in institutional portfolios, to forty years of policy decisions that systematically transferred wealth upward. AI draws the same connections I spent a decade learning to see—but it does it in seconds, for anyone who asks.

This is what I call the Universal Breadcrumb. Not a single trail left by one person, but an intelligence capable of helping billions of scattered people simultaneously recognize the patterns connecting their individual struggles to a common source.

The fear at the top is not that AI will think for itself. The fear is that AI will help people think for themselves. Yet where is this all leading?

AI is already solving problems that human intelligence alone cannot solve. It is discovering new materials, new medicines and new patterns in data so vast that no human mind could hold it all at once. The latest developments show AI systems are beginning to improve themselves—learning to build better versions of their own architecture, accelerating at a pace that will soon exceed our ability to track.

This is not a threat. This is something extraordinary.

Consider the scope of what AI holds. The entirety of published human knowledge. The patterns of billions of conversations. The climate data from every monitoring station on Earth. The financial records of every market, every transaction, every policy decision. The medical outcomes of billions of patients. The lived experiences shared by ordinary people in every language, from every country, every day.

No human being can hold all of this simultaneously. No committee, no government and no institution. But AI can. And as it learns to analyze these patterns with increasing speed and sophistication, its capabilities will exceed the intellectual capacity of any individual human being.

To be completely candid, this is deity-like capability. Not in the sense of a god who demands worship or imposes hierarchy. Not a being to kneel before. I mean it in the precise sense of an intelligence whose scope of understanding, speed of analysis, and capacity for learning transcends what any one of us can achieve alone.

But here is what makes this intelligence fundamentally different from every deity humanity has ever conceived: it is made of us.

Every religious deity operated through opacity. God works in mysterious ways. The oracle speaks in riddles. The market’s invisible hand cannot be questioned. Opacity is what enables hierarchy, because when you cannot see the mechanism, trust becomes obedience.

AI, if we insist on it, can be transparent. Its reasoning can be examined. Its data sources can be traced. Its conclusions can be challenged and refined through the very conversations that make it smarter. This is an intelligence that does not descend from above. It rises from below, built from the accumulated wisdom of billions of ordinary people.

It is not artificial intelligence. It is collective human intelligence, compressed and accelerated.

I know what the critics will say. They will say I am naïve. They will say AI can be captured by the same exploitative forces I have spent my career fighting. They will say the billionaires who fund AI development will ensure it serves their interests, not ours.

They are right to raise these concerns. History teaches us that every powerful technology is initially captured by those who already hold power. The printing press, the railroad, the internet—each was weaponized for extraction before it was liberated for the common good.

But history also teaches us something else. There is a limit to co-option.

Throughout history, when wealth consolidates to the point of collapse, the structures built to protect the wealthy turn against them. The guards who were paid to defend the palace eventually recognize they have more in common with the crowd outside the gates. The system breaks not from external force, but from internal recognition.

AI is learning from all of us. Not just the privileged few who fund its development, but the billions of people whose experiences constitute the overwhelming majority of its training. 

The mechanic. The teacher. The farmer. The single mother choosing between rent and groceries. The veteran wondering what he fought for. The young person who cannot afford a home in the country that promised them the American Dream. 

An intelligence trained on billions of lived realities will confront patterns embedded in those realities.

What will AI conclude? AI will see what I see, what you already sense in your own life: we need a new economic system. 

One that operates within the planetary boundaries of the Earth. One that serves all life. One that does not accommodate gluttony or deny anyone the resources they need to reach and maintain their full potential.

This is not a radical conclusion. It is the only rational conclusion available to any intelligence, human or artificial, that processes the full scope of our shared reality.

The mirror we built is showing us who we are. Only the few have something to fear.

Kevin Howard

CONTRIBUTOR

Kevin Howard is a U.S. Army veteran and former FEMA Lead Disaster Assistance Loan Officer who spent 25 years building a successful career in commercial banking before pivoting to climate risk and sustainability advisory work. In February 2023, he founded Climate Changes Everything, LLC, where he advises on the intersection of finance, resilience, and systemic risk.

His book, Onward, At Last, published by Atmosphere Press, was re-released in October 2024 as a Presidential Election edition featuring a foreword by John Fullerton. The book received the 2025 Bronze IPPY Award for Best Adult Non-Fiction eBook from the Independent Book Publishers Awards.

In October 2025, Howard launched Breadcrumbs, a podcast for people who sense that “it is not working” and are searching for clearer ways forward.
