AI: neither Artificial nor Intelligent
Human Circuit and AI's Billion-Scale Illusions
It is very hard to grasp the magnitude of a billion. Human beings struggle with logarithmic scales, with exponential thinking. Take compounding, for example: if people truly understood it, most would make an extra effort to start saving and investing much earlier in their careers.
To illustrate this difficulty, I often ask a simple question, insisting that people not calculate but just say the first number that comes to mind: how long is a million seconds? The answer is about 11 days. Then I ask: what about a billion seconds? Some say months, others a few years. In fact, a billion seconds is nearly 32 years. That is the gulf between a million and a billion in this case: 11 days versus 32 years. If you are under 32, that’s longer than your whole life. If you are around 60, it is half of it. If you are 90, it’s a third. In any case, a billion seconds is incomparably more representative of your life than a million seconds.
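For readers who want to check the arithmetic themselves, here is a quick back-of-the-envelope sketch in Python; the figures are approximate and rounded.

```python
# Back-of-the-envelope check: a million vs. a billion seconds.
SECONDS_PER_DAY = 60 * 60 * 24      # 86,400 seconds in a day
DAYS_PER_YEAR = 365.25              # average, accounting for leap years

million_in_days = 1_000_000 / SECONDS_PER_DAY
billion_in_years = 1_000_000_000 / (SECONDS_PER_DAY * DAYS_PER_YEAR)

print(f"A million seconds is about {million_in_days:.1f} days")    # ~11.6 days
print(f"A billion seconds is about {billion_in_years:.1f} years")  # ~31.7 years
```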
Now that we have a sense of how vast a billion really is, imagine not 32 years but 14 billion years! That is the estimated age of the universe. Or consider the 86 billion neurons in the human brain. That is…a LOT.
And it reminds us of what we barely comprehend: our brains are super-powered biological computers, capable of staggering amounts of processing and association, but also of creating language, communication, emotion, and meaning.
This is how unique and powerful we are. Machines can never be this. And yet, paradoxically, it is precisely this intelligence of ours that has driven us to create something that some hope might outsmart us: so-called Artificial Intelligence, or Artificial General Intelligence (AGI), which is the term used for a machine that could not only outperform humans in every task but even behave autonomously like us.
But what is so special about this AI trend? Is it worth the billions of dollars being poured into it? Will it bring humanity productivity gains and scientific breakthroughs, or will it become just another technological development that enriches a small group who control it? Will it create jobs, or displace billions of them? What are the implications for society? Is it really “Artificial”? Is it truly “Intelligent”?
These are practical, economic, and philosophical questions that I’ve been wrestling with for three years, ever since OpenAI released ChatGPT, the chatbot that kicked off the current wave. By accident of geography, I was among the first to try it: I live in San Francisco, the gravitational center of this revolution. That fact alone says a lot about the privilege conferred by the zip code you are born into, or happen to inhabit.
The next dispatches will be deep dives into the dialectical questions emerging from this technology: its benefits and harms, its contradictions and virtues.
For now, I should state plainly that I echo some of the brightest minds who argue: so-called Artificial Intelligence is, in truth, neither artificial nor intelligent.
It is difficult to define intelligence; there is no scientifically agreed-upon definition. Miguel Nicolelis, the brilliant Brazilian neuroscientist who pioneered neural network studies at Duke University, argues that intelligence is cerebral, organic, biological. Human intelligence is the result of millions of years of evolution, and it goes far beyond reasoning or mathematics. Perhaps its most mysterious dimension is our emotional life. Would you not agree that part of what makes us intelligent is that we combine rational computation with passion and emotion? Psychologists Peter Salovey and John Mayer gestured toward this in the 1990s when they coined the term “Emotional Intelligence,” later popularized by Daniel Goleman’s 1995 bestseller, Emotional Intelligence: Why It Can Matter More Than IQ.
The way we learn, behave, feel, and express our emotions is intrinsic to our biology. Machines don’t feel, physically or emotionally. Machines don’t “know” facts; they associate words. Could we ever reduce emotions like trust, fear, anger, or love to a line of code? Could a machine ever be sentient of a feeling that only humans can articulate through language and experience?
Nicolelis emphasizes that the real challenge is not understanding a single neuron (we already do) but understanding how billions of them work together, like dominoes falling in collective patterns. If you, like me, were amazed by the magnitude of a billion on the time scale, the numbers here are even more staggering. Geoffrey Hinton, one of the godfathers of AI, who continued the work of renowned scientists of the 1950s such as Alan Turing, who first asked whether machines could think, reminds us that the human brain has around 100 trillion synaptic connections formed among its 86 billion neurons, while the latest models like GPT-4 have “only” about 1 trillion parameters. Still, Hinton points out, these models can “know far more” than any individual human, since we are constrained by time, memory, and the need to sleep, limits machines do not face.
If we were free of those biological constraints, perhaps we too could make associations that unlock breakthroughs in life sciences. Machines might unlock this potential. But does the ability to generate more associations mean they are intelligent? Do they really know and understand?
Large Language Models (LLMs) like ChatGPT, Gemini, or Claude are prediction engines, not thinking beings. They excel at pattern recognition and word prediction, generating convincing responses by guessing what should come next in a sequence. But prediction is not understanding. Even Yann LeCun, Meta’s chief AI scientist and another “godfather of AI,” has said: “Human-level AI won’t be achieved by simply scaling up Large Language Models.”
I have been marveling at Empire of AI, by Karen Hao, a book I will refer to a few times in the upcoming dialectic dispatches of the Interweave. One of its best examples of how AI is not actually intelligent concerns autonomous driving. An adult who learns the basics of driving can adapt to nearly any environment, even to unusual conditions. But an AI-powered car encountering a situation outside its training data, like a vandalized road sign, may crash or stop entirely. Humans see the trick, understand the risk, and keep going in the right direction. Machines do not. The same holds for AI hallucinations, when a chatbot confidently gives us false information. In those cases, prediction has replaced real knowledge.
If, on those grounds, we agree that these programs are not truly intelligent, can we at least call them Artificial? Nicolelis plays with words here to make his point and argues no: they are not artificial because they depend on humans at every stage, from design to training to moderation. This is all true, and everyone should know it! Behind the shiny interfaces lies, for example, the hidden labor of thousands of poorly paid workers in the Global South, exposed to disturbing content so that Western users have “guardrails.” But I will discuss the societal challenges in another upcoming dialectic dispatch.
But I have to disagree with Nicolelis here: the original use of the word “Artificial” was precisely to mark this as a man-made rather than naturally occurring intelligence, which is what Nicolelis himself is saying in other words. If we all agree it is Artificial in that sense, man-made and still heavily dependent on humans, the reason I question the term is that another word captures one aspect of this technology more accurately: Alien. Yuval Harari argues that these systems behave in ways that are alien even to their creators.
Engineers can adjust settings in large language models, one of which is called “temperature,” to make outputs more literal or more creative. Yet the deeper reasons for how these systems behave remain opaque even to their creators. To oversimplify: imagine a user asks a chatbot, “Who was Einstein?” At a low temperature, the model might give the most statistically likely answer, say, with 80% probability, that Einstein was a genius who revolutionized physics in many ways. With a higher temperature, the model may select a less probable continuation, perhaps one with only 15% likelihood, and respond that Einstein was a brilliant scientist who transformed our understanding of physics, but also laid the groundwork for the atomic bomb.
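To make the mechanics a little more concrete, here is a minimal sketch of temperature-scaled sampling in Python. The two candidate continuations and their raw scores are invented purely for illustration; real chatbots score tens of thousands of possible tokens at every step, and this sketch does not reflect how any particular product is implemented.

```python
import math

def softmax_with_temperature(scores, temperature):
    """Turn raw model scores into probabilities, scaled by temperature."""
    scaled = [s / temperature for s in scores]
    m = max(scaled)                           # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical continuations of "Who was Einstein?" with made-up raw scores.
continuations = [
    "a genius who revolutionized physics",
    "a brilliant scientist who also laid the groundwork for the atomic bomb",
]
raw_scores = [2.0, 0.3]

for t in (0.2, 1.5):                          # low vs. high temperature
    probs = softmax_with_temperature(raw_scores, t)
    print(f"temperature = {t}")
    for text, p in zip(continuations, probs):
        print(f"  {p:.0%}  {text}")
```

At a low temperature the probability mass piles onto the highest-scoring continuation; at a higher temperature the distribution flattens, and the less likely answer gets sampled more often, which is the behavior described above.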
This "alien" behavior is one of the core problems of this technology and one of its typifications is also called AI interpretability: "the lack of transparency in complex AI models, which are often "black boxes" that make decisions without humans understanding why. This "black box" nature creates significant issues, including technical challenges in understanding model complexity, operational barriers in trusting and regulating high-stakes applications like healthcare and finance, and ethical concerns such as embedded biases and lack of accountability." This was the answer provided to me by the chatbot Gemini.
In that sense, Harari is right: models that make decisions without us really knowing why are less artificial than alien. This alien behavior is one of the deepest challenges of the technology, with consequences ranging from technical to ethical. How can we trust, regulate, or delegate responsibility to systems we don’t truly understand?
To be fair to Turing and all the brilliant minds who followed him, no single thinker — not Nicolelis, not Harari, and certainly not me — will change the powerful narrative around the term “Artificial Intelligence,” especially now that corporations have seized it. But as a collective, as human beings, we must exercise critical thinking and humanism to see through the story-selling at work here.
Make no mistake: AI is here to stay, and it will reshape the near future. But it comes with billions of problems. In the current narrative, it must not only outsmart what I call the human circuit, billions of neurons and billions of humans, but also justify the billions, soon trillions, of dollars being poured into it under the banner of an “arms race”.
It has many contradictions. Already, we hear of researchers abandoning life-saving work to hand over scarce chips and computing power to the next chatbot inside their own organizations. Investigative journalist Karen Hao is convinced this is another instrument of the money-lust driving us, and that AI is another race to grab power. Others question whether this generative form of AI, the LLMs behind chatbots, is what will allow a superintelligence to emerge. And many are asking whether that is even a desirable goal.
In this first dispatch on AI, I have tried to highlight that intelligence is not just word prediction, but the outcome of billions of years of evolution, billions of neurons, and trillions of associations, giving us the ability to trust, to love, to fear, to create, to confront ideas. I echo the concerns of many: will investing tons of money, using scarce resources, and building ever-bigger models ever produce anything close to intelligence as we know it? An intelligence that is emotional, embodied, biological.
Technology brings wonders, yes. But it also forces us to ask the most fundamental question: what is the purpose of progress?
Especially if it is not shared.
As someone deeply concerned about social welfare, I urge all of us to practice dialectical inquiry in this changing world: celebrate what is worth celebrating, but resist the narratives we should not prioritize. Will new technologies empower us as a species, or deepen what I recently called the crisis of the self?
I invite you to reflect with me as we continue this journey into the dialectics of AI.
Special thanks to those, like my brother, who work in the field and have been challenging me to see through the AI story-selling and dive deeper into those existential questions.


