Love the subhead, too.
Thanks, it kind of felt appropriate!
Agreed - I think it's the lyric I quote the most.
The stepping stones analogy for mathematical progress is spot-on. The Fermat-Taniyama connection wasn't obvious at all, yet it became the critical path. That's precisely why the "just ask AI to solve the Riemann Hypothesis" approach misses what actually happens in research: building smaller bridges between known territories rather than impossible leaps. Dunno if younger mathematicians really have more stamina or just less awareness of how hard these things are.
If your favourite LLM can’t even play a simple game of chess (and it can’t) then it isn’t going to help you solve the Riemann Hypothesis.
I think it's a bit more complicated than that - you could say "if AI can't count the letters in strawberry, then it isn't going to be able to do protein folding", and yet we know that's not right.
So, I agree the reason they can't play chess is the lack of a world model (https://garymarcus.substack.com/p/generative-ais-crippling-and-widespread?r=8tdk6&utm_campaign=post&utm_medium=web&triedRedirect=true) - essentially because they are pattern matching on strings of chess notation.
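To make that concrete, here's a rough Python sketch (assuming the python-chess library; ask_llm_for_move is a hypothetical placeholder, not any real API) of what "having a world model" buys you: the board object tracks the actual position and which moves are legal in it, while the model only ever sees the game as a flat string of notation.

```python
# Rough sketch of the "world model vs. string of notation" distinction,
# using the python-chess library. ask_llm_for_move() is a hypothetical
# placeholder -- swap in a call to whichever model you want to test.
import chess


def ask_llm_for_move(move_history: str) -> str:
    """Placeholder for an LLM call: it only ever receives the SAN move string."""
    return "e4"  # a real harness would prompt the model with move_history


def play_until_illegal(max_plies: int = 100) -> int:
    board = chess.Board()      # explicit world model: piece positions, whose turn, castling rights...
    history: list[str] = []    # the flat string of notation, which is all the model sees

    for ply in range(max_plies):
        suggestion = ask_llm_for_move(" ".join(history))
        try:
            move = board.parse_san(suggestion)  # only succeeds if legal in the *current position*
        except ValueError:
            print(f"Illegal or unparsable move at ply {ply}: {suggestion!r}")
            return ply
        board.push(move)
        history.append(suggestion)
    return max_plies


if __name__ == "__main__":
    play_until_illegal()
```

The parse_san call only works because chess.Board carries the full game state; the model is never handed anything richer than the notation string it's prompted with, which is roughly the point being made in that post.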
But then it's less clear to me what that means for maths. We know that they can produce arguments that settle somewhat novel questions - the kind of technical lemma that you might ask a grad student to prove, or spend a few days figuring out yourself. I agree it's less obvious how that converts into a big jump of logic that suddenly creates a new strategy for Riemann and follows it through, but they might be a tool which helps work through an existing strategy.
Ok, I’ve probably been reading too many “LLMs aren’t the answer” posts lately.
This is wonderful! Thank you so much. It also restores my hope that mathematicians over the age of 30 won’t all be incredibly grumpy at the refusal of ideas to resolve themselves. The comparison with cancer research is a helpful one. As you say, people aren’t really expecting a Eureka moment.
No worries, sorry it took me a little while to come up with a proper answer - it's been a bit mad over here.