Many people worry that humanity is becoming dumber because we let AI think for us. We ask ChatGPT our questions and believe its answers blindly. Programmers have started to accept AI-generated code without scrutiny. But this fear misses a profound truth: humankind has spent thousands of years learning that thinking becomes far more effective when we outsource it.

When an ancient Egyptian architect counted stones, the number “5” meant exactly five real stones you could touch and see. Zero or negative quantities simply could not be imagined as genuine numbers. Their math was limited by intuition: early number concepts were far more concrete than today’s abstractions [1].

Intuition is the ability to know or understand something immediately, without conscious reasoning.
It often feels like a “gut feeling” or a sudden insight drawn from experience and pattern recognition.
Intuitive judgments happen quickly and automatically, often carrying an emotional charge, even when we can’t explain them.
They work well in familiar situations but can mislead us in new or complicated ones.
In short, intuition is fast, unconscious thinking that helps us quickly make sense of the world.

From “Nothing Is Something” to “How Could Almost Nothing Mean Anything”

The first big step in outsourcing thought began with zero. Philosophers struggled for centuries: how could “nothing” be “something”? In 7th-century India, Brahmagupta finally defined zero as a number in its own right, usable in arithmetic, not just a placeholder [2]. This wasn’t just progress in math; it was humanity’s first step in trusting abstract symbols over intuition.

Negative numbers faced even stronger resistance. Michael Stifel (1487–1567) dismissed them as “numeri absurdi” (absurd numbers), a sentiment echoed by Girolamo Cardano (1501–1576). John Napier (1550–1617) labeled them “defectivi” (defective), and René Descartes (1596–1650) rejected negative solutions as “false roots”. Even in 1803, Lazare Carnot (1753–1823) still doubted their legitimacy [3], [4].

It’s amazing to realize that concepts we now teach 8–10-year-olds once confused history’s greatest minds. Today, we easily accept numbers that don’t count real things but instead serve as powerful mental tools for solving problems. This change happened when mathematicians stopped worrying about what a quantity like “negative five sheep” meant in reality and focused instead on its usefulness.

John Wallis (1616–1703) made a breakthrough by visualizing negative numbers as positions lying in the opposite direction along a number line [3]. By trusting abstract mathematical ideas over everyday experience, humanity opened new doors to intellectual progress. If the struggles of these great thinkers don’t show how huge this change was, perhaps nothing will.

The Pythagoreans believed “all is number” — meaning everything could be expressed using whole numbers and their ratios. The discovery of irrational numbers in the 5th century BCE challenged this idea deeply. Pythagorean philosophy, science, and spirituality all depended on the idea that the universe could be reduced to numbers [5].

This discovery came when mathematicians studied the diagonal of a square and found that √2 cannot be written as a ratio of whole numbers. Historical records don’t clearly show who discovered this first, but we know the finding deeply shook Pythagorean belief.
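
What follows is a standard modern reconstruction of the argument, not necessarily the form the Pythagoreans themselves used: assume the fraction exists and derive a contradiction.

$$
\sqrt{2} = \frac{p}{q} \text{ (in lowest terms)} \;\Rightarrow\; p^2 = 2q^2 \;\Rightarrow\; p \text{ is even, say } p = 2k \;\Rightarrow\; 4k^2 = 2q^2 \;\Rightarrow\; q^2 = 2k^2 \;\Rightarrow\; q \text{ is also even.}
$$

Both p and q being even contradicts “lowest terms”, so no such fraction can exist.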

At first, ancient mathematicians didn’t see irrational numbers as just a new type of number. Instead, the discovery showed that their core belief, that numbers could explain everything, was flawed. This was more a philosophical crisis than a mathematical discovery.

Over time, humans accepted irrational numbers by letting symbols stand for ideas we couldn’t easily picture. Even in the 19th century, mathematicians were still debating this acceptance. Leopold Kronecker declared in the 1880s that “God made the integers; all else is the work of man” [6], showing that skepticism toward such constructions persisted.

Then came Newton and Leibniz’s calculus, which defied intuition with infinitesimals: quantities close to zero but not exactly zero, existing somewhere between something and nothing. Bishop George Berkeley mocked them as “ghosts of departed quantities” [7]; his 1734 critique, The Analyst, highlighted the confusion at the heart of calculus.
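
To see the step Berkeley attacked, here is the classic computation sketched in modern notation (the increment o is an illustrative placeholder): to differentiate x², one divides by o, which requires o ≠ 0, and then discards o, which treats it as 0.

$$
\frac{(x+o)^2 - x^2}{o} = \frac{2xo + o^2}{o} = 2x + o \;\longrightarrow\; 2x \quad \text{once } o \text{ is allowed to “vanish”.}
$$

The increment that is first used and then made to disappear is exactly what Berkeley called a ghost of a departed quantity; the modern limit concept later resolved the ambiguity.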

Newton and Leibniz’s methods allowed us to understand motion, growth, and dynamic processes through math instead of static descriptions. Calculus worked because we learned to trust mathematical symbols over intuition.

This was a key turning point: calculus showed math could work even if its basic ideas didn’t match our intuition. Accepting infinitesimals, despite their logical problems, allowed scientists to solve previously impossible problems like calculating planetary orbits and modeling continuous change.

The success of calculus created a new standard: mathematical usefulness could be more important than conceptual clarity. This helped later scientific breakthroughs in areas like electromagnetism and quantum mechanics, where math works even if we struggle to understand why.

But at least the models we built were deterministic. They explained experiments with certainty. Input led to output. We could trace the logic, follow the steps, interpret the mechanism. If we couldn’t intuit what an infinitesimal was, we could at least understand how the calculation worked.

If You Can Let Go That Mankind Is the Center of the Universe, You Can Let Go That Mankind Can Understand Everything

Statistical thinking started in ancient Babylon and Egypt as record-keeping. In the 8th century, the mathematician Al-Khalil made an important leap: he used mathematics to study patterns in Arabic poetry, showing that uncertainty itself could be studied systematically. This line of work laid groundwork for probability theory, which lets us model unpredictable things [2].

When Bayesian and frequentist models appeared, we outsourced even more. Instead of certain predictions, we accepted probability distributions. These models were still understandable — we knew why and how they worked. We traded certainty for flexibility but kept some interpretability.
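
To make “accepting probability distributions instead of certainties” concrete, here is a minimal sketch of a Bayesian update on a made-up example (a coin that is either fair or biased); the numbers are purely illustrative, and every step of the reasoning is inspectable.

```python
# Minimal Bayesian update for a hypothetical coin that is either fair or biased.
# Every quantity is inspectable: prior beliefs, likelihoods, and the posterior.

priors = {"fair": 0.5, "biased": 0.5}        # belief before seeing any data
p_heads = {"fair": 0.5, "biased": 0.8}       # probability of heads under each hypothesis

observations = ["H", "H", "T", "H", "H"]     # hypothetical coin flips

posteriors = dict(priors)
for flip in observations:
    # likelihood of this flip under each hypothesis
    likelihood = {h: (p if flip == "H" else 1 - p) for h, p in p_heads.items()}
    unnormalized = {h: posteriors[h] * likelihood[h] for h in posteriors}
    total = sum(unnormalized.values())
    posteriors = {h: v / total for h, v in unnormalized.items()}  # Bayes' rule

print(posteriors)  # a distribution over hypotheses, not a yes/no answer
```

The output is a probability distribution rather than a single verdict, yet each intermediate quantity still has a clear, explainable meaning.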

Classical machine learning pushed further. Decision trees, support vector machines, and early neural networks found patterns, yet we could still peek inside them and understand their decisions.
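
As a small illustration of that transparency (assuming scikit-learn is available; the dataset and settings are arbitrary), a fitted decision tree can be dumped as human-readable if/else rules:

```python
# A small decision tree whose learned rules read directly as nested threshold tests.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# The entire "reasoning" of the model, printed as if/else rules on named features.
print(export_text(tree, feature_names=iris.feature_names))
```

Every split the model learned is visible as a threshold on a named feature, which is exactly the kind of inspection that deep networks no longer allow.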

Then deep learning changed everything. Training networks with billions of parameters made them true “black boxes.” Unlike earlier models with clear rules, deep neural networks learn patterns directly from data, often in ways we can’t interpret [8]. A CNN might diagnose cancer better than a doctor, but nobody fully knows how it does so: it doesn’t “understand” images the way humans do; it builds layers of abstract representations [9]. We outsourced not just calculation or pattern recognition, but understanding itself.
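
Here is a minimal sketch (assuming PyTorch; the architecture and sizes are arbitrary) of what “layers of abstract representations” looks like in code. Getting a prediction is one line; reading meaning out of the intermediate activations that produced it is not.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A toy CNN: each stage builds a more abstract representation of the image."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level edges and colors
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid-level textures and shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)                  # intermediate activations: not human-readable
        return self.classifier(h.flatten(1))  # final decision computed from abstract features

# One forward pass on a dummy 224x224 image: the prediction is easy to obtain,
# the reasoning behind it is not.
logits = TinyCNN()(torch.randn(1, 3, 224, 224))
```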

Large Language Models (LLMs) are the next step. They process language, the core of human thought. GPT generates text based on patterns drawn from amounts of human communication too large for us to grasp fully. The Othello-GPT study [10] suggests these models might “understand” concepts well enough to predict the next word, expanding AI’s capabilities.
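
A toy illustration of the next-word objective described above, with a made-up vocabulary and made-up scores (real models compute such scores with billions of parameters):

```python
# Toy next-word prediction: turn scores into a probability distribution and pick a word.
import numpy as np

vocab = ["mat", "dog", "zero", "moon"]      # hypothetical tiny vocabulary
logits = np.array([3.2, 1.1, -0.5, 0.3])    # made-up scores for "the cat sat on the ..."

probs = np.exp(logits - logits.max())
probs /= probs.sum()                        # softmax: scores -> probabilities

next_word = vocab[int(np.argmax(probs))]    # greedy choice of the most likely word
print(dict(zip(vocab, probs.round(3))), "->", next_word)
```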

The “Absolute Zero Reasoner” [11] goes further: it learns to solve problems without any human-curated data during training, proposing and solving its own tasks. This lets it explore ideas humans might never think of, removing human understanding from the process even further.

This is humanity’s oldest and most successful strategy: letting go. From zero to AI, each time we trusted our tools over our intuition, we unlocked new abilities. We fear that AI is making us dumber, but history suggests the opposite: every time we’ve outsourced thinking, we’ve achieved things we never imagined possible. The challenge remains the same as ever: trusting “what it does” over “what it means.”

References

[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC8389766/
[2] https://www.ijfmr.com/research-paper.php?id=23400
[3] https://notoneg.com/history-negative-numbers-part2/
[4] https://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/carnot-lazare-nicolas-marguerite
[5] https://brilliant.org/wiki/history-of-irrational-numbers/
[6] Jeremy Gray (2008), Plato’s Ghost: The Modernist Transformation of Mathematics. https://books.google.com/books?id=ldzseiuZbsIC&q=%22God+made+the+integers%2C+all+else+is+the+work+of+man.%22
[7] https://old.maa.org/press/periodicals/convergence/mathematical-treasure-berkeleys-critique-of-calculus
[8] https://theconversation.com/opening-the-black-box-how-explainable-ai-can-help-us-understand-how-algorithms-work-244080
[9] https://www.sapien.io/glossary/definition/hierarchical-feature-learning
[10] https://arxiv.org/abs/2503.04421
[11] https://arxiv.org/html/2505.03335v2