AI is everywhere. You interact with, or rely on, such systems in one form or another every day. Yet AI is a commonly misunderstood technology, one where the realm of science fiction seems to encroach on science fact. To overcome this, we must all develop our thinking and gain a basic understanding of the inner workings of AI.
In this post, I’d like to tackle the misconception that your brain is like a computer, explaining why the AI = human brain analogy is, at best, misleading and, at worst, dangerous.
Human brains are not computers
It’s easy to see how the human mind and computers are often talked about in an analogous manner. To make sense of the world, we model reality and use analogies in this process. Terms like “neural networks” certainly have not helped and, from Musk to Hawking, some of the greatest minds have propagated this myth.
But this is simply incorrect. Your brain is not a computer. We are organisms, not computers. Get over it.
The widely used brain-as-computer metaphor is fundamentally flawed and this is recognized by leading scientists in the field, including eminent psychologist Robert Epstein:
“Setting aside the formal language, the idea that humans must be information processors just because computers are information processors is just plain silly, and when, some day, the Information Processing metaphor is finally abandoned, it will almost certainly be seen that way by historians, just as we now view the hydraulic and mechanical metaphors to be silly.”
“Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer”
Human brains and artificial intelligence systems are fundamentally different, and both are limited in what they can do. To assume otherwise has dangerous repercussions and fuels fake-news headlines like these:
“AI is already getting smarter than us, at an exponential rate.”
“We can make human intelligence in silicon.”
“This intelligence can be expanded without limit to solve the world’s problems.”
Such headlines are false and misleading, and they can have expensive repercussions. We have seen billions of dollars spent on projects like the Human Brain Project, launched by the European Commission in 2013.
Convinced by the charismatic Henry Markram, the European Commission believed that he could create a simulation of the entire human brain on a supercomputer by the year 2023. This model, he promised, would revolutionize the treatment of Alzheimer’s disease and other disorders. EU officials funded his project with virtually no restrictions.
Less than two years in, the project was branded a ‘brain wreck’, and Markram was asked to step down.
Let’s get real and realistic about AI
Humans rely on intuition, worldviews, thoughts, beliefs and conscience. Machines rely on algorithms, which are inherently dumb. Here’s David Berlinski’s definition of an algorithm:
“An algorithm is a finite procedure, written in a fixed symbolic vocabulary, governed by precise instructions, moving in discrete steps, 1, 2, 3, . . ., whose execution requires no insight, cleverness, intuition, intelligence, or perspicuity, and that sooner or later comes to an end.”
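Berlinski’s point can be made concrete with a classic example. Euclid’s algorithm for the greatest common divisor is exactly such a procedure: a fixed vocabulary, precise instructions, discrete steps, and a guaranteed end. This sketch is purely illustrative and not from the post itself:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite procedure in discrete steps."""
    while b != 0:          # a precise rule, requiring no insight or cleverness
        a, b = b, a % b    # step n: replace (a, b) with (b, a mod b)
    return a               # the procedure sooner or later comes to an end

print(gcd(48, 18))  # 6
```

Every run is completely mechanical: no intuition, intelligence or perspicuity is involved at any step.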
But not every machine relies on dumb algorithms alone. Some machines are capable of learning. So, we must dive a little deeper to understand the inner workings of AI. I like this definition from John C. Lennox PhD, DPhil, DSc, Professor of Mathematics (Emeritus) at the University of Oxford:
“An AI system uses mathematical algorithms that sort, filter and select from a large database.
The system can ‘learn’ to identify and interpret digital patterns, images, sound, speech, text data, etc.
It uses computer applications to statistically analyse the available information and estimate the probability of a particular hypothesis.
Narrow tasks formerly (normally) done by a human can now be done by an AI system. Its simulated intelligence is uncoupled from conscience.”
Sort, filter and select. If you put it as simply as that, which in my opinion is accurate, then you realize that AI is completely different from the human brain, let alone from who we are as human beings.
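To see what “statistically analyse the available information and estimate the probability of a particular hypothesis” looks like in practice, here is a deliberately tiny sketch in the spirit of Lennox’s description: a naive Bayes text classifier. The messages, labels and words are all invented for illustration; real systems differ only in scale, not in kind:

```python
from collections import Counter

# A toy 'database' of labelled messages (entirely made up for illustration)
messages = [
    ("win money now", "spam"),
    ("cheap money offer", "spam"),
    ("meeting at noon", "ham"),
    ("lunch at noon tomorrow", "ham"),
]

# Sort, filter and select: tally word frequencies per label
counts = {"spam": Counter(), "ham": Counter()}
totals = {"spam": 0, "ham": 0}
for text, label in messages:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1

def p_spam(text: str) -> float:
    """Estimate the probability of the hypothesis 'this message is spam'
    via naive Bayes with add-one smoothing - pure statistics, no insight."""
    vocab = len(set(counts["spam"]) | set(counts["ham"]))
    scores = {}
    for label in ("spam", "ham"):
        score = 0.5  # equal prior probability for each label
        for word in text.split():
            score *= (counts[label][word] + 1) / (totals[label] + vocab)
        scores[label] = score
    return scores["spam"] / (scores["spam"] + scores["ham"])

print(p_spam("win cheap money"))  # close to 1: the word pattern matches spam
print(p_spam("lunch at noon"))    # close to 0: the word pattern matches ham
```

The system has “learned” to identify a digital pattern, yet it has no idea what money or lunch is. It merely counts and multiplies.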
Let’s get realistic, be accountable and govern
We must make sure we have realistic expectations around AI. Yes, it’s a revolutionary technology in so many ways. But AI is not anything close to what our human brain is – and it never will be.
Hopefully, by recognizing its limitations, we can use AI in ways that serve humanity. However, as with every human invention, I suspect there will be misuse and uncontrolled applications of AI. But the human mind is more than adept at recognizing and resolving such challenges. We just need to find the right ways to govern AI.
David Watson, a doctoral candidate at the Oxford Internet Institute focusing on the epistemological foundations of machine learning, summarises the careful steps we must take to protect humanity amid the growing prevalence of artificial intelligence:
“The temptation to grant algorithms decision-making authority in socially sensitive applications threatens to undermine our ability to hold powerful individuals and groups accountable for their technologically-mediated actions. Supervised learning provides society with some of its most powerful tools—and like all tools, they can be used either to help or to harm. The choice, as ever, is ours.”
Yes, AI is everywhere. How we choose to use it is a decision left to the human brain. But let’s not degrade ourselves by equating our brain to a computer.
Sources and extra reading
Robert Epstein – The Empty Brain. Online essay for Aeon.
Stefan Theil – Why the Human Brain Project Went Wrong – and How to Fix It. Online essay for Scientific American.
John C. Lennox – Artificial Intelligence and the Future of Humanity. Book.
David Watson – The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. Minds & Machines 29, 417–440 (2019).