Inside the Debate on AI Evolution: Allen Dorin’s Perspectives, the Moravec Paradox, and Why We’re Developing Intelligence Rather Than Losing It
By GlobalTimesAI.com
Artificial intelligence (AI) is changing our world at an accelerated rate. In contrast to humans, who evolved from single-celled organisms to intelligent life over billions of years, AI has advanced to incredible heights in a matter of decades. This disparity inspires both wonder and concern: Is artificial intelligence a real threat to humanity? Or does our fear stem from misunderstanding how it evolves?
The Misconception: AI as an Instant Threat
The idea that AI will eventually completely replace people—take over jobs, control decision-making, or even rebel—is one of the most prevalent concerns in the tech community.
However, few people discuss the following:
The risk is not that AI will suddenly become “too powerful”; rather, it is that we will fail to understand how AI develops.
Enter the Moravec Paradox
According to the paradox, an observation put forth by Hans Moravec and endorsed by academics such as Allen Dorin (Monash University):
“Hard tasks for humans (like chess) are easy for AI. Easy tasks for humans (like walking or recognizing faces) are hard for AI.”
This paradox turns everything upside down. Playing Go or solving math equations is no sweat for GPT-like systems. But understanding emotions, morality, or body movement? That’s still incredibly hard.
Why?
These abilities took humans millions of years of evolutionary refinement. Renowned for his studies in generative systems, artificial life, and the nexus between biology and artificial intelligence, Professor Allen Dorin teaches at Monash University in Australia. His research focuses on how AI can resemble or develop like natural systems—not only act intelligently but also emerge, adapt, and co-evolve alongside people and their surroundings.
Dorin does not claim AI is dangerous in the way sci-fi movies suggest. Instead, his research emphasizes:
- Co-evolution: AI develops with human interaction and data.
- Creativity in Machines: Machines can generate art, music, or even living-like behaviors through algorithms.
- Ethics & Emergence: Human-like behavior may emerge from simple coded rules, but without human context, AI remains “mechanical.”
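The idea that lifelike behavior can emerge from simple coded rules is easy to demonstrate. The following is a minimal illustrative sketch (not code from Dorin's research) using Conway's Game of Life, a classic artificial-life system: two local rules, applied uniformly, produce a "glider" pattern that travels across the grid as if it had a purpose of its own.

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live cells.
    Rules: a dead cell with exactly 3 live neighbours is born;
    a live cell with 2 or 3 live neighbours survives."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    }

# A "glider": five cells that travel diagonally forever.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
# After 4 generations the glider reappears, shifted one cell down-right.
```

Nothing in the rules mentions movement, yet the glider "walks". That gap between simple mechanism and apparently purposeful behavior is exactly why, without human context, such behavior remains mechanical.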
Is AI Dangerous?
Only if misunderstood. It is not AI becoming “alive” that poses a threat, but people mistaking it for a higher intelligence. AI has no ethics, empathy, or wisdom of its own; those must be supplied through training, supervision, and ethical alignment.
Dorin’s work encourages us to view AI as a mirror rather than a monster.
Evolution vs. Engineering
Animals like dolphins developed echolocation through natural evolution because they needed it for survival. But that took millions of years.
AI, on the other hand, didn’t evolve organically—it’s engineered. It doesn’t learn to “need” skills. Humans train it, feed it with datasets, adjust its weights, and correct its outputs.
“AI is not born smart. It’s trained. By us.”
– Allen Dorin, “Brains, Machines and the Imitation Game” (2016)
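"Trained by us" has a concrete meaning: a human-written procedure feeds the model data and adjusts its weights to reduce error. The toy example below is a hypothetical sketch, not any production system: one weight, fit to samples of y = 2x by gradient descent on squared error.

```python
def train(data, lr=0.1, epochs=50):
    """Fit a one-weight model pred = w * x to (x, y) pairs."""
    w = 0.0  # the single "weight", initially uninformed
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # d/dw of (pred - y)**2
            w -= lr * grad             # we, not the model, adjust w
    return w

data = [(1, 2), (2, 4), (3, 6)]  # samples of y = 2x, chosen by us
w = train(data)
# w converges toward 2.0 — the "skill" comes entirely from the data we fed it.
```

The model never "needs" or "wants" anything; every bit of its competence traces back to the dataset and the update rule that humans supplied.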
The Imitation Game: What Allen Dorin Really Says
In his well-cited research paper, Dorin revisits Alan Turing’s Imitation Game—often misunderstood as the only benchmark for AI intelligence.
According to Dorin:
“The Imitation Game is not about replicating human behavior superficially—it’s about understanding whether a machine can meaningfully interact with us using logic, context, and language.”
Dorin challenges the idea that mimicking humans is the ultimate goal of AI. Instead, he warns against anthropomorphizing AI—giving it human traits it does not actually possess.
AI Mimics, But It Doesn’t Understand
AI doesn’t “know” it’s answering your question or “want” to help you book a flight. It’s mimicking behavior learned from massive amounts of training data. AI learns to react, much like a toddler learns to walk, but without a human sense of agency or morality.
AI Isn’t Just Taking Jobs—It’s Creating Them
Instead of fearing mass unemployment, we should recognize that new professions are already emerging:
- AI Trainers – Teach models what’s right or wrong
- Prompt Engineers – Design smart prompts for optimal AI results
- AI Safety Testers – Detect biases, hallucinations, and dangers
- Ethicists – Define rules and moral guidelines for AI systems
- Human-AI Collaboration Designers – Build workflows where both work in harmony
Global Impact: A New Era of Evolution
We are no longer just evolving biologically—we are entering a co-evolutionary era, where human decisions shape the growth of synthetic intelligence.
If left unchecked, this could lead to unpredictable outcomes—not because AI “wants” to harm us, but because we didn’t build the right guardrails.
Final Thoughts
This isn’t a sci-fi apocalypse. It’s a historical shift.
A new form of intelligence, built by us, evolving with us.
Next time someone says, “AI will replace us,” ask them:
“Who trained it to begin with?”
Because AI didn’t emerge from nature.
It emerged from us.
Sources:
- Allen Dorin, Brains, Machines and the Imitation Game, Monash University
- Moravec, H., Mind Children (1988)
- Turing, A., Computing Machinery and Intelligence (1950)
- Journal of Artificial Intelligence and Simulation
Disclaimer: All information and data presented in this article are collected from reliable sources, research publications, and public statements. The images used are AI-generated and intended for illustrative purposes only.