AInequalities
The hidden dimensions of disparities in tech development.
I am pretty sure that, regardless of where you come from, you have many reasons to be proud of your origins, your home country, and, most importantly, your family. When I meet someone, I’m always curious to learn about their family background, because it is fascinating to see how countless experiences shape people’s minds and give us all a unique way of interpreting and experiencing life. Imagine: families of small farmers who grow the food we all need; families of doctors who devote their lives to healing others; families of front-line workers like janitors, without whom our dense cities would collapse in days. And there are so many other beautiful examples that could come to mind.
In my case, I had a mix that also felt very special. Both my parents are artists. They work at the cutting edge of contemporary art, in fields that can be challenging for the untrained eye but rewarding for those willing to explore new horizons. As you might imagine, they made their living not from art itself — especially given the intellectual, less commercial nature of their work — but from teaching. Both are professors at universities.
I mention this for two reasons. First, because many AI champions claim it could one day replace even creative workers, artists and composers included. I will return to that in the next dispatch. Second, because, growing up in this environment, I often joke that all I will inherit is a library full of books and one certainty: we live in a profoundly unequal world. And just as common sense suggests that an older sibling should protect the younger, I believe that, at a societal level, we should prioritize making the world less unequal and improving the lives of those who live below our own standard of living.
Bear with me, and set aside any judgments that might cloud your thinking. I say this because, somewhere along economic history, people became afraid of the words equality and inequality, depending on where they stood on the ideological spectrum. I, however, welcome these two words. Trained as an economist, I have always been fascinated by their study.
I wouldn’t even categorize this discussion as ideological. I don’t care if you call it right or left, up or down. What I care about is our ability to make data-driven decisions paired with our moral compass. I also care about two virtues that I believe should be at the core of our moral compass: honesty and integrity. To keep it short, honesty is the responsibility to tell the truth, and integrity is the responsibility to do what is right. Yes, the latter might look different depending on our values, but at the very least, it should be aligned with the former.
With this background in mind, I went on to study economics almost 20 years ago, and the subject of inequality has held my attention ever since. I can summarize my conclusion after these two decades of reflection. Simply put, the data show that the more unequal societies become, the more dire the consequences: rising rates of depression, alcoholism, suicide, and healthcare costs; more violence; weaker social bonds; and, taken together, a sharp decline in perceived happiness.
There is a large body of credible, peer-reviewed work, from individual papers to entire books, showing strong correlations between inequality and these problems. To deny that is like denying that 1+1=2. And denying that 1+1=2 is simply dishonest.
If you’ve been following The Interweave, you already know the question that drives this writing: What is the purpose of progress?
I don’t yet know the answer, because progress has historically been driven by opposing forces. But this writing is a dialectical exercise, meant to shed light on truths and help us navigate a noisy world flooded with information and rapid change. For now, I think the best affirmation I can offer is a quote from historian E.H. Carr:
“Change is certain. Progress is not.”
Enter Artificial Intelligence. There are countless dimensions of AI inequality. There are first-order inequalities in the way the technology is being developed, and second-order inequalities that will arise from its deployment.
In its current path of development, as discussed in my previous article, the technology will require ever more compute power to build models with trillions of parameters, making them increasingly capable of tasks ranging from Nobel-level scientific research to everyday labor-intensive work. That implies, however, massive amounts of capital — financial, natural, and human. Only a handful of companies and individuals can afford to raise such resources. This, in itself, shows AI is not a democratic technology. Beyond financial capital, it also consumes natural capital, particularly common goods like water, and increases demand for energy in a world still largely powered by fossil fuels.
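To make that scale concrete, here is a rough back-of-the-envelope sketch in Python. It uses the common rule of thumb that training compute is roughly 6 × parameters × tokens; every number in it (model size, token count, GPU throughput, utilization, rental price) is an illustrative assumption I chose for the exercise, not a figure from any actual lab.

```python
# Back-of-the-envelope training cost, using the common heuristic that
# training compute is roughly 6 * parameters * tokens (in FLOPs).
# Every input below is an illustrative assumption.

PARAMS = 1e12            # assumed model size: 1 trillion parameters
TOKENS = 15e12           # assumed training data: 15 trillion tokens
FLOPS_PER_GPU = 1e15     # assumed peak throughput per GPU (1 PFLOP/s)
UTILIZATION = 0.4        # assumed fraction of peak actually sustained
USD_PER_GPU_HOUR = 2.0   # assumed rental price per GPU-hour

total_flops = 6 * PARAMS * TOKENS
gpu_hours = total_flops / (FLOPS_PER_GPU * UTILIZATION) / 3600

print(f"compute:   {total_flops:.1e} FLOPs")
print(f"GPU-hours: {gpu_hours:,.0f}")
print(f"cost:      ~${gpu_hours * USD_PER_GPU_HOUR / 1e6:,.0f}M")
# Under these assumptions: ~9e25 FLOPs, ~62 million GPU-hours, and on
# the order of $125M for compute alone, before data, energy, staff,
# and the many experimental runs that never ship.
```

Whichever numbers you prefer, the conclusion holds: the entry ticket runs into nine figures, which is exactly why only a handful of players can buy in.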
Then come the second-order consequences: the impacts of deployment, which will almost certainly shake the already fragile “social contract” we have in place. In other words, AI will reshape how we organize ourselves as a productive society, disrupting the global workforce. My perspective here is dialectical. On one hand, mass displacement could, in time, benefit those freed from repetitive labor. On the other hand, the transition will almost certainly produce what I would call transitioning social inequality, as millions risk being left behind during automation’s acceleration.
With so many potential negative societal impacts — and I am simplifying here, since the ramifications are vast — Carr’s quote is on point: there is a lot of change coming. But if progress is uncertain, why has AI attracted so much attention?
The pace of AI development is often justified in terms of an “arms race,” a quest to prevent the “enemies” of the U.S. from harnessing its unprecedented power. On those grounds, we can simplify the motivations of AI’s main sponsors into two scenarios.
One scenario is that the Western founders behind most of the foundational AI models truly believe that Artificial General Intelligence is near. They assume it will be achieved by scaling compute power, and they fear it could become a tool as disruptive as quantum computing, capable of breaking cryptography and accessing bank accounts. From that perspective, an arms race makes sense: nation-states would want control of such power to defend themselves.
The second scenario is that they say this publicly but privately know LLMs will not evolve into sentient intelligence. If so, they are spreading fear not out of belief but because it is effective for attracting capital and building dominance in the AI agents market — one that promises vast profits by automating jobs worldwide.
In either case, the question remains: from a societal and moral perspective, at what point do we — as citizens, workers, consumers, or voters — raise our collective voices to demand guardrails? Should a few hundred business people and researchers turned entrepreneurs within a handful of companies decide how to develop and deploy a technology that will affect entire societies? And, just as importantly, should we allow this future to be written by only a part of the globe, while a great portion remains in the shadows, disconnected from the decisions that will shape us all?
If the first scenario is real, should nations rely on a small group of companies in San Francisco (and a few elsewhere) to dictate the future of AI? If the second is real, should we allow the same handful of companies to accelerate energy consumption for ever more powerful chatbots, driving greenhouse gas emissions in a world already on the brink of irreversible warming?
Seen this way, we are in a classic game-theory problem, headed toward a Nash equilibrium that leaves everyone worse off. And the underlying cause is inequality: a concentration of power within a few countries and corporations that, even within their own borders, are not delivering healthier or happier lives for their people.
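To see why this trap is so hard to escape, here is a minimal sketch of the game in Python. The payoff numbers are pure assumptions, chosen only to reproduce the structure of a prisoner’s dilemma between two racing labs (or nations); nothing about them is empirical.

```python
# A minimal prisoner's-dilemma sketch of the AI "arms race".
# Payoff numbers are illustrative assumptions, not measurements.

from itertools import product

STRATEGIES = ["restrain", "race"]

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("restrain", "restrain"): (3, 3),  # shared guardrails: best joint outcome
    ("restrain", "race"):     (1, 4),  # the racer gains a unilateral edge
    ("race",     "restrain"): (4, 1),
    ("race",     "race"):     (2, 2),  # mutual escalation: worse for both
}

def is_nash(row, col):
    """A profile is a pure Nash equilibrium if neither player can gain
    by unilaterally switching strategies."""
    r_pay, c_pay = payoffs[(row, col)]
    best_row = all(payoffs[(alt, col)][0] <= r_pay for alt in STRATEGIES)
    best_col = all(payoffs[(row, alt)][1] <= c_pay for alt in STRATEGIES)
    return best_row and best_col

for row, col in product(STRATEGIES, repeat=2):
    if is_nash(row, col):
        print(f"Nash equilibrium: ({row}, {col}) -> {payoffs[(row, col)]}")
# Prints only (race, race) -> (2, 2), even though (restrain, restrain)
# would pay both players 3. Rational self-interest locks in the worse
# outcome; that is the equilibrium I mean.
```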
Had we embraced a more collaborative and equitable approach, rather than a competitive, unequal, and belligerent one, we might be on a path toward a win-win outcome for people and the planet.
The discussion about the concentration of power in the hands of tech empires deserves its own essay, which I will return to in the future. For now, I want to conclude with another first-order impact of AI: its climate footprint.
I won’t dive into issues like water use or rare earth materials, but instead focus on global warming. There are two dimensions: the carbon footprint of training models in data centers, and the carbon footprint of deployment (the inference phase).
On the latter, studies are still emerging, and methodologies differ. But researchers like Hannah Ritchie of Our World in Data and even Google’s own teams suggest the impact per prompt is relatively small. “Don’t feel ashamed for using it,” Hannah says. I would add: use it responsibly, because aggregated across hundreds of millions of users, the impact will matter.
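A quick back-of-the-envelope calculation shows both halves of that point. The per-prompt energy figure and usage numbers below are illustrative assumptions on my part, not measurements from any provider:

```python
# Aggregating a tiny per-prompt energy cost across a large user base.
# All inputs are illustrative assumptions.

WH_PER_PROMPT = 0.3            # assumed energy per prompt, in watt-hours
DAILY_USERS = 500_000_000      # assumed daily active users
PROMPTS_PER_USER_PER_DAY = 10  # assumed usage intensity

daily_wh = WH_PER_PROMPT * DAILY_USERS * PROMPTS_PER_USER_PER_DAY
yearly_gwh = daily_wh * 365 / 1e9  # watt-hours -> gigawatt-hours

print(f"per prompt: {WH_PER_PROMPT} Wh (about 20 seconds of a 50 W laptop)")
print(f"per year:   ~{yearly_gwh:,.0f} GWh across the whole user base")
# ~548 GWh per year under these assumptions: trivial for any one user,
# far from trivial in aggregate.
```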
The real climate concern, however, lies in training. This topic is dear to me — I’ve spent the past decade investing in startups tackling global warming. And as I write this, from a coffee shop in New York during Climate Week, the latest data points are fresh in my mind.
Today, the world’s roughly 12,000 data centers consume about 460 terawatt-hours of electricity per year. By comparison, HVAC systems for heating and cooling consume around 3,000 terawatt-hours. One panelist at a Climate Week debate argued that AI could optimize HVAC energy use, saving as much as 600 terawatt-hours a year, roughly 130% of current data-center consumption.
But there is a major flaw in this argument: timing. It assumes rapid, global adoption of AI-driven efficiency. In reality, adoption is slow, fragmented, and uncertain. Meanwhile, new data centers are coming online at breakneck speed, outpacing the growth of renewable energy and reactivating coal and gas plants.
In climate, timing matters. The earlier we avoid emissions, the better our chances, because carbon accumulates in the atmosphere for centuries. This is the time value of carbon. Delaying reductions now makes future solutions less effective.
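A small sketch makes the time value of carbon concrete. Suppose the panelist’s 600 terawatt-hours of annual savings really do materialize, and the only question is when they start. The grid intensity and time horizon below are illustrative assumptions:

```python
# The "time value of carbon": because CO2 accumulates and persists,
# the same annual saving avoids far more cumulative emissions the
# earlier it begins. All inputs are illustrative assumptions.

ANNUAL_SAVING_TWH = 600   # assumed efficiency saving per year
TCO2_PER_TWH = 500_000    # assumed grid intensity (~500 g CO2 per kWh)
HORIZON_YEARS = 25        # assumed evaluation window

def cumulative_avoided_gt(start_year: int) -> float:
    """Gigatonnes of CO2 kept out of the atmosphere over the horizon
    if the saving begins in `start_year`."""
    active_years = max(0, HORIZON_YEARS - start_year)
    return ANNUAL_SAVING_TWH * TCO2_PER_TWH * active_years / 1e9

print(f"start now:         {cumulative_avoided_gt(0):.1f} Gt CO2 avoided")
print(f"start in 10 years: {cumulative_avoided_gt(10):.1f} Gt CO2 avoided")
# 7.5 Gt versus 4.5 Gt under these assumptions. The 3 Gt gap is never
# recovered later; it sits in the atmosphere for centuries.
```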
I’ve covered a lot here, and this discussion could go much deeper. My goal is not to exhaust it, but to share a way of thinking: to reflect on how new technologies intersect with philosophy, economics, and society, and to accelerate our collective understanding.
The main point of this dispatch is simple: the AI revolution carries two first-order inequalities. First, that a very small, non-diverse group is deciding how this plays out in the real world. Second, that the short-term spike in emissions may erase the long-term benefits AI could bring — because it may be too late.
In my next dispatch, I’ll return to the second-order inequalities: the impacts on us, human beings.
Earlier, I defined honesty as telling the truth. When I said 1+1=2, I lied. To borrow the words of Paul Polman, a respected business leader I’ve had the privilege of knowing: 1+1=11. But only when we come together as a society to solve challenges and make life better for everybody…and not just for somebodies.
May I ask you a small favor?
If you’ve been enjoying The Interweave, there are a few ways you could support this work:
- If you haven’t yet, consider becoming a paid subscriber. None of the proceeds will be used for personal purposes; they will go entirely toward amplifying this message, starting with expanding the reach of The Interweave.
- If a subscription isn’t possible right now, you can still help by sharing any Dialectic Dispatch that resonates with you. Forward it to a friend, a colleague, or someone you think would value the conversation.
- And if you are a writer yourself, I’d be deeply grateful if you referenced The Interweave in your own channels.
Whichever way you choose to support, know that it means a lot. Thank you for walking this journey with me.


