What if the old answer to "why teach math" turns out to be the only answer that survives AI?


If you stop someone in a grocery store and ask what they remember from math class, you'll probably get a long complaint. Sine, cosine, whatever. And we math teachers are left with one thin defense: "We're teaching people to think."

But are we really teaching thinking through memorizing the quadratic formula? Was there ever a high school math problem that Wolfram Alpha couldn't solve a decade ago? And now that AI can generate any solution — and frankly, it already covers high school math quite well — what's the added value of the human in this story?

I want to try rethinking what "teaching to think" actually means.


What if it's not about utility?

G. H. Hardy, who wrote A Mathematician's Apology in 1940, put it this way:

"The mathematician's patterns, like the painter's or the poet's must be beautiful; the ideas like the colours or the words, must fit together in a harmonious way. Beauty is the first test: there is no permanent place in the world for ugly mathematics."

He was proud that he worked in number theory — a perfectly useless subject, as he saw it. We'll never know what he'd say about the trillions in banking transactions now secured by number theory.

This might sound like romanticism. But many mathematicians will tell you the same thing: there's some kind of beauty-seeking that drives the work. This already contradicts what people in the grocery store think mathematics is about.

Plato, already in antiquity, wrote in the Symposium about Eros — desire — and how it ascends through stages. Diotima, the wise woman who teaches Socrates about love, describes a ladder: it begins with physical attraction to one beautiful body, then rises to appreciate all beautiful bodies, then beautiful souls, then the beauty in laws and knowledge, and finally — at the summit — Beauty itself. Mathematical beauty sits near the top of this ascent, just below the Form of Beauty.

Edward Frenkel, who was invited to Harvard as a visiting professor at twenty-one, describes his first mathematical discovery as a teenager this way in Love and Math:

"This was the first time it happened to me, and like the first kiss, it was very special. I knew then that I could call myself mathematician."

Somewhere in here is a clue. For people who actually do mathematics, it's not about utility. It's closer to desire.


Where does the breakthrough come from?

But let me add something. Poincaré has that famous story about his discovery of Fuchsian functions. He'd been struggling with a problem for weeks, then had to do a geological survey as a mine inspector. He writes:

"The incidents of travel made me forget my mathematical work. Having reached Coutances, we entered an omnibus to go some place or other. At the moment when I put my foot on the step, the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty."

Beautiful. The unconscious mind at work.

But Poincaré also had a memory so sharp he could recall, page by page, exactly where in a book he'd read something, without ever taking notes. Hardly a blank slate.

The unconscious breakthrough happened because his conscious mind had absorbed so much. You can't skip that part. The bus-step moment runs on everything you've internalized before it.


What do we lose when we outsource?

Now let me bring in someone who discovered what can go wrong.

Nicholas Carr, an American writer — his book The Shallows was a Pulitzer finalist — describes his own transformation in his 2008 Atlantic essay "Is Google Making Us Stupid?":

"Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I'd spend hours strolling through long stretches of prose. That's rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I'm always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle."

This isn't just Carr's story. Anyone who scrolls a phone feed feels it.

David Brooks noticed a paradox. In his 2007 New York Times column "The Outsourced Brain," he wrote:

"I had thought that the magic of the information age was that it allowed us to know more, but then I realized the magic of the information age is that it allows us to know less. It provides us with external cognitive servants — silicon memory systems, collaborative online filters, consumer preference algorithms and networked knowledge. We can burden these servants and liberate ourselves."

Liberate ourselves. But for what?

Sam Altman, the CEO of OpenAI, calls this a "Copernican turn." Before Copernicus, Earth sat at the center of creation and humans were its crown. After AI, thinking itself is no longer uniquely ours.

Plato already worried about something like this. In the Phaedrus, Socrates recounts the myth of Theuth, the inventor of writing, presenting his creation to King Thamus:

"If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder."

Now the question is whether AI atrophies thinking itself.


What can't be delegated?

But here I want to put up a wall.

Stanislas Dehaene, a French cognitive scientist, is one of the pioneers of number-sense research. He found that recognizing quantities up to about four is almost instantaneous — it's called subitizing, and it seems to be innate. Larger quantities? Slower. You have to count.

There are spatial associations too. Small numbers feel like they belong on the left, large numbers on the right. At Charles de Gaulle airport, Terminal 2 has reversed numbering — smaller gates on the right — and it causes total confusion. Our brains expect the number line.

Here's his key finding: addition is intuitive. I remember in kindergarten we had little competitions during snack time — adding numbers up to 10. Kids discover early that you can always add one more, without end. Some deep mathematical intuitions come naturally.

Multiplication does not. You have to memorize the times table.

We've had calculators for decades. But I wouldn't argue we should stop teaching multiplication. Something is built through the process itself.

So let this be the counterpoint: not everything can be outsourced without loss.

The analogy I like: people go to the gym to get stronger. You don't run on a treadmill because a car would get there faster. You run because running builds something.


Same as always

The oldest answers turn out to be the ones that survive.

Utility? AI handles that. Computation? Done. Even "teaching to think" in the mechanical sense — AI can walk through logical steps.

What's left is what Hardy and Plato and Frenkel were pointing at: the experience of mathematical beauty. The desire to know. The satisfaction of the bus-step moment — which only comes after weeks of struggle, running on everything you've absorbed.

Maybe AI doesn't change everything. Maybe who we have to be stays the same, with or without it.


For anyone who enjoyed these threads: Jim Holt's essay collection explores many of these ideas in much greater depth. And if the question of whether AI "really thinks" interests you — ask me sometime. I love that question.