Hello everyone and welcome to this panel. It's a pleasure to be here with you today to talk about AI and mathematics.
I'll briefly introduce myself. I'm Joelle Pineau. I'm a faculty member at the School of Computer Science at McGill University. I'm also the managing director of Meta AI's Fundamental AI Research team. With me, I have two researchers who will join me in discussing this topic.
Yann LeCun is a French computer scientist, well known for his work in machine learning and deep learning. He's currently the vice president and chief AI scientist at Meta, as well as the Silver Professor at the Courant Institute of Mathematical Sciences at New York University. Yann is a father of modern AI techniques in computer vision, in particular known for convolutional neural networks. In 2018, he was awarded the Turing Award together with Yoshua Bengio and Geoffrey Hinton for his work, or their work I should say, on deep learning. Welcome, Yann.
And then, I have Sir Timothy Gowers. Welcome to the panel, Tim. You are a British mathematician, as well as holder of the Combinatorics chair at the Collège de France and Director of Research at the University of Cambridge. You've contributed to important areas of combinatorics and functional analysis, including the theory of Banach spaces and Ramsey theory, as well as making important contributions in analytic and combinatorial number theory. And in 1998, if I'm correct, you received the Fields Medal for your work on combinatorics and functional analysis. So, a really distinguished panel today to discuss this topic of AI and mathematics. Really pleased to have both of you with us. Welcome. Thank you very much.
The panel is entitled "Is human-led mathematics over?" That is quite a provocative topic. We'll start with a little bit of a warm-up rather than dig right into that question. I take the statement to imply that, going forward, AI should be leading mathematics rather than humans. But we may well have a little ways to go before we get there.
And so let me start, really just to get us grounded on this topic. In particular, I would love to hear from both of you: what does it mean to solve mathematics using AI? Yann, do you want to get us started?
I was going to say Tim should start, because he is the real mathematician here; I'm just kind of an impostor. And I was going to say that Yann should start, because he knows a lot more about AI than I do. But I mean, a simple answer to that would be just that we can create an AI that is at least as good as a human mathematician, or an expert human mathematician, at solving math problems.
Of course, that raises additional issues, like which problems it should solve, and there are other things that mathematicians do other than solving problems, like formulating problems, building theories, making definitions, posing problems, and so on. But I would say that if a computer could become at least as good as a human mathematician, or maybe if it could sort of pass a mathematician's Turing test and produce research output that we couldn't tell was not produced by a human mathematician, I would be very happy to count that as AI having, in some sense, solved mathematics.
You said something that I think was fascinating, and we'll come back to it a little bit later, right? Whether you could look at the work of the AI, and if it was indistinguishable from a human's, then it may pass a Turing test of sorts for AI and mathematics. But let me emphasize the first part of what you shared, which is decomposing the ways in which mathematicians contribute, right? There's the identification of the question, which is one step of it. Then there's formulating the problem, perhaps in a more formal language, and putting in the assumptions that give at least some boundaries to what you're trying to solve. Then there's the derivation of the proof, and finally, if I can add one, there's the verification of the work, which is another step.
Yann, do you want to jump in and share some ideas in terms of these different levels? Where do you see AI being most effective right now? Right, I would actually put more levels into the process. There is certainly, I mean, the thing that a lot of mathematicians do, which is forming mental models of a particular problem or situation that they're interested in, and manipulating this mental model. And then there is perhaps coming up with appropriate concepts and definitions that are relevant for approaching the problem, and that's an essential task, right? Constructing the right concepts to manipulate, so definition forming, which Tim mentioned.
Then there is intuition about potential theorems, which are really properties of those mental models that may or may not be true, right? Hypotheses. Then coming up with a sketch of a proof, and then actually writing down the proof, and then doing it in a formal or semi-formal way. I mean, there are mathematicians who would actually be successful just coming up with a somewhat informal sketch of a proof, and there are other people who are much better at actually filling in the blanks. A good example is Fermat's Last Theorem, right?
So I think, currently, the bottom layer, formalizing, doing that last step, is where computers have already helped and can probably help some more. What a lot of people are working on today, at least people who are interested in deep learning and things like that, is the step above: coming up with a sketch of a proof or strategies, and searching for proofs. Perhaps also some level of very superficial intuition about what type of proof strategy will work. You know, to predict whether a particular strategy will work, something like that. And then, of course, searching: we know from results on games that searching through trees of multiple options is something we can do really efficiently with computers, as long as we can direct that search.
The higher levels, coming up with definitions, forming mental models, and manipulating the model in some abstract, non-formal way, I don't think computers can do those today. And until we do this, I think we're not going to have computers that replace human mathematicians. But they will help, that's for sure.
So, if we're talking about AI in the sort of deep learning sense as opposed to traditional AI (I say that because I'm actually working quite hard on a more traditional approach to automatic theorem proving), I hesitate to give a precise answer, because I find it very hard to predict what AI's eventual capabilities are. I see a big spectrum, from reaching a plateau rather soon and finding that, you know, some pretty impressive things have happened by now,
but maybe we'll find that in order to go further, we need a training set that's 100 times bigger than what we've got now, and so it's not feasible. Or maybe there will be some little tweaks to the system and it'll become much more powerful in a short time.
And let me allow you to open it up to other types of AI techniques, not just deep learning, right? You know, I mentioned search, and I do think some of the more classical AI techniques actually have a lot to bring, particularly in this context.
Yes, so something that's very interesting, something that I plan to spend quite a lot of time on over the next three years or so, is thinking about a very abstract theoretical question, which is roughly speaking the following: we know that if P doesn't equal NP, as most people believe, then the general problem of finding a proof of an arbitrary statement, with the proof of length at most N, is an NP-complete problem.
So it's not feasible. Sorry, we know it's an NP-complete problem; if P doesn't equal NP, then it's not feasible. But human mathematicians find extremely long and complicated proofs. The only explanation for that, if you believe that P doesn't equal NP, must be that we are looking within a very small portion of the space of all possible proofs. We're looking at proofs that have a very particular structure: very modular, and tied to the way humans think in various ways.
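To make Tim's point concrete, a naive, unguided proof search enumerates every candidate string up to length N and asks a fast verifier to check each one, so the search space blows up exponentially. The sketch below is purely illustrative; the alphabet, the toy verifier, and the function names are invented for this example, not taken from any real prover:

```python
from itertools import product

def candidate_count(alphabet_size: int, max_len: int) -> int:
    """Size of the naive search space: exponential in max_len."""
    return sum(alphabet_size ** k for k in range(1, max_len + 1))

def brute_force_search(alphabet: str, max_len: int, verifier):
    """Enumerate every candidate 'proof' up to max_len and return the
    first one the (fast) verifier accepts, or None. This is the
    infeasible strategy that human proof-finding must be avoiding."""
    for length in range(1, max_len + 1):
        for cand in product(alphabet, repeat=length):
            s = "".join(cand)
            if verifier(s):
                return s
    return None
```

With a 30-symbol alphabet and proofs of length 100, `candidate_count` already exceeds 10^147, which is one way to see why mathematicians, and AI systems, must be searching a tiny, highly structured corner of this space.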
If we could understand that really well from a theoretical point of view, I feel that would feed into our understanding of what exactly the task is that AI is trying to accomplish. And I think that applies both to traditional methods and to deep learning sorts of methods. I think this connects perhaps with some theoretical results in machine learning, the so-called no free lunch theorems, which say that with a non-ridiculous number of training samples you can only learn a very, very small portion of the space of all possible functions.
So, as you say, the type of proofs that humans can come up with is probably a very, very small subset. I mean, not probably: it is certainly a tiny subset of all possible proofs. And the question is, what's special about those? Is there anything special about those proofs, or is it just because of the particular way that we are built? And could we build
AI systems that may have a different bias in terms of what they can do. Can I reorient us, to ground the conversation for some of our listeners: is there anything you have seen in recent systems that have been shared with the community,
anything that's impressive, or worth noting, or that you feel is actually meaningful progress? We've seen systems such as Minerva and a few others come out recently. Where do you think we've seen genuine progress on this question of using AI for mathematics?
Well, I think there's been progress in recent years for various applications, particularly for games, and also in natural language processing. And the combination of those has translated into progress in systems that are applied to mathematics.
So in particular, there's the idea that of course we can do tree search, right? That's a classical AI technique; we've been able to do tree search for decades. More decades than Tim and I have been alive for.
But the problem has always been that you don't know what to prove; the branching factor is essentially infinite, right? So the search space is gigantic, and how do you direct the search so that you arrive at a result that you hypothesize, for example? And then there's the question of why you hypothesize it, which is a different question.
So I think the progress that has been made, following things like AlphaZero, is efficient tree search, where the steps in the tree are taken by a neural net making the choice, so which moves to make in the tree, and then another neural net that evaluates the position, if you want.
Yes, the critic, as we sometimes call it, the critic that evaluates where we are: is it likely to lead to a proof of the ultimate result? And in the past, before the last ten years or so, those functions were hardwired, programmed, and now we can learn them.
Using statistics, for example. Yeah, right. And now we can learn the step using large language models trained on lots of proofs, and we can also learn the critic. And so I think that's really the conceptual progress: directed tree search in the space of possible proofs. That takes us to the next level, which gives a little bit of intuition about the sketch of the proof.
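The AlphaZero-style recipe Yann describes, with a learned policy proposing steps and a learned critic scoring positions, can be sketched as a best-first search. Here the policy and critic are just placeholder callables, not neural networks, and all names are invented for this illustration:

```python
import heapq

def guided_search(start, goal_test, policy, critic, max_nodes=10_000):
    """Best-first search over a tree of proof states.
    `policy(state)` proposes candidate next steps (stand-in for the
    learned policy net); `critic(state)` scores how promising a state
    is, higher = better (stand-in for the learned value net)."""
    frontier = [(-critic(start), start)]   # max-heap via negated scores
    seen = {start}
    while frontier and len(seen) <= max_nodes:
        _, state = heapq.heappop(frontier)
        if goal_test(state):
            return state
        for nxt in policy(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-critic(nxt), nxt))
    return None
```

On a toy state space where states are integers and the goal is to reach 10, `guided_search(0, lambda n: n == 10, lambda n: [n + 1, n + 2], lambda n: -abs(10 - n))` returns 10: the critic steers the search toward the goal without exhaustive enumeration, which is the whole point of learning it.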
I think in mathematical systems we're still missing all the top layers I was talking about earlier, like coming up with appropriate concepts. And we'll get to those questions.
Yes. Tim, has there been a particular result that you found inspiring, in terms of using AI techniques to improve our abilities, particularly our ability to prove theorems?
I have a slightly ambivalent attitude to all these things, though maybe not in the way you expect. Some mathematicians are ambivalent because they worry that at some point they'll probably be put out of a job, the sort of theme with which we started.
That, I'm relaxed about. I'm old enough that it won't affect me; it's a very selfish attitude, but anyway. What I'd say is, when I see, for example, something like Minerva for the first time, and it does quite a bit more than I was expecting it to be able to do,
then part of me is impressed and part of me is a bit nervous, because I've been wanting to try to think about this from a more traditional perspective, and I start to worry that maybe I'm wasting my time with that. But on the other hand, the other reason for my ambivalence relates to what you answered.
Talking about having a critic that can just be learnt by a neural net: there's a part of me that really wants to understand what's going on when we find proofs and prove theorems, and if we can just get a black box produced by a neural net that does the job, then I'm sort of worried that the curiosity to understand what it's really doing, how it works, what humans are doing when we come up with these ideas, will fade. People will get less interested in that, which I feel they should be interested in, because they'll say, well, we don't need that anymore in order to create a program that can do it.
And maybe it'll be right that we won't need it, but I think it'll still be sad if we drop that theoretical problem. The thing is, you know, the functioning of your brain, Tim, when you're doing mathematics, is a complete mystery to me.
So, you know, I take that point, but actually there's something else. I've always felt that a large part of that mystery... I mean, I have that with other mathematicians: I look at their papers and I say, where did that idea come from? It came out of absolutely nowhere.
But I know that I myself, my ideas don't come out of nowhere. So I have this very strong conviction that nobody's ideas come out of nowhere. There is always an explanation. It's just that there's a long tradition of how we write maths papers that obscures the discovery process.
And it's something I've long wanted to do, which is to bring that discovery process as much as possible out into the open. And I slightly worry that developments in deep learning, if they go fast enough, will just push it back in and say, well, we'll just replace a mysterious human mathematician, who has apparent flashes of intuition that come from nowhere, with a mysterious machine that does the same thing. And I will have lost the battle.
It's very interesting, because there was exactly the same kind of reaction initially during the emergence of deep learning for computer vision. A lot of people had spent their careers trying to discover the mechanisms of image formation and things like that, and saw deep learning as, well, now we just have a black box; it's not going to teach us anything about how vision works. But it didn't change anything in the end. I mean, people are using what works.
I want to go back to an idea you suggested earlier, Tim, which I think is very relevant here, right? This question of, before we adopt the black box and move to higher levels of abstraction, do we have a sense of what's going on inside the box, whether it's a human or a computer? For the process of solving these problems, at least the simpler mathematical problems that we've been able to solve with AI, does the process look the same?
So when we combine one of these search algorithms with an evaluation function that may, from training, learn how to evaluate different search options, do we find that in the end it comes up with a proof that is analogous to the path a human would have followed, or is it different in qualitative ways? I think it's very important here to distinguish between the proof and the proof discovery process.
And my impression is that the proof itself can certainly, with some of these systems, look very similar to what a human would write down. Some of the proofs have been produced in natural language, though not all. There was, I think, the OpenAI solution of some International Mathematical Olympiad problems, where the proof was just: we will feed this expression into Lean, and then Lean just says "completed." And there's a sort of guarantee that the thing is proved, and no proof is written down at all, though you can actually construct a proof out of that. But that somehow looks very unhuman.
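For readers who have not seen Lean, a machine-checked statement can be as small as the following trivial standalone example (invented here for illustration, not one of the Olympiad problems discussed); once the checker accepts the term, the theorem is guaranteed without any prose proof:

```lean
-- Lean 4: the proof term `rfl` certifies the statement by computation.
-- Once `lean` accepts this file, the theorem is machine-verified.
theorem add_zero' (n : Nat) : n + 0 = n := rfl
```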
But when it comes to the actual search process, I think there's an awful lot of searching going on behind the scenes, not displayed for consumption afterwards, that a human mathematician would not be doing. So, far less efficient in its solving compared to a human? Yes, in all but the output.
So, when I talked about the Turing test before: actually, I don't totally believe in that, because I do think that the process that goes on is important as well as the output. It's not just the output that matters to me. Partly because I think if you have inefficient methods, that's going to be much harder to scale up when you have more complicated problems.
What are your thoughts, Yann, on at least the relatively simple problems that we can solve with AI, both in terms of the search process and the final result? And let me add one more color to that question: even if the solution looks similar at the end, are there ways in which it's expressed that... let me rephrase this.
In many ways, the reason it might look similar at the end is because we're training on other examples of proofs that humans have generated. So do you think that's part of the reason the proof looks so similar, or do you think there's something beyond that, a true property of the problem? I mean, certainly, I think there is a very strong bias towards reproducing the type of proofs that humans have come up with.
And there may be another set, perhaps a very large set, of other types of proofs that nobody has tried, because perhaps they require too much working memory, for example, which humans are not particularly good at. A good example of this is the four color theorem, which was one of the first that required the help of a computer to explore all the cases and figure out what the story was. It's something that's very difficult to do by hand.
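The kind of finite case analysis Yann mentions can be illustrated with exhaustive graph coloring, a check that is trivial for a computer and hopeless by hand at scale. This toy sketch (the function name and the tiny test graph are invented for illustration) simply tries every assignment:

```python
from itertools import product

def is_colorable(edges, n_vertices, n_colors):
    """Return True if some assignment of n_colors to the vertices gives
    every edge two distinct endpoint colors. Brute force: it tries all
    n_colors ** n_vertices assignments, the sort of mechanical case
    exploration computers excel at."""
    for coloring in product(range(n_colors), repeat=n_vertices):
        if all(coloring[u] != coloring[v] for u, v in edges):
            return True
    return False
```

For the complete graph on four vertices this reports True with four colors and False with three; the historic four color proof amounted to a vastly larger, computer-driven version of this style of check.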
And I think there are probably a lot of very interesting results of that nature that require a ridiculously large computational exploration that you can't do by hand, but that computers would have no issue doing. Perhaps things that... this is perhaps connected to something you, Joëlle, and I are familiar with: we are very often misled, for example, when we try to do some reasoning or get some intuition about high-dimensional geometry, about vectors in high dimension. We reason with three-dimensional space, and that's completely wrong.
It really gives us the wrong intuitions. Would it be possible for a machine to get a better intuition than humans in higher-dimensional spaces? I think there are probably important results there that we're missing. But at the lower level, more short term, there are clearly situations, like in Go and chess,
right, where computers can come up with solutions that people had never thought about, for particular situations. And I imagine the same thing will happen, not with fully automated proof, but with computer-assisted mathematics.
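Yann's point about misleading low-dimensional intuition is easy to demonstrate numerically: two random directions in 3-D are often strongly correlated, while in hundreds of dimensions they are almost always nearly orthogonal. A small, self-contained experiment (all names invented for this illustration):

```python
import math
import random

def avg_abs_cosine(dim, n_pairs=200, seed=0):
    """Average |cos(angle)| between pairs of random Gaussian vectors in
    `dim` dimensions. As dim grows this concentrates near 0: random
    high-dimensional vectors are nearly orthogonal, contrary to the
    intuition we import from 3-D space."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        u = [rng.gauss(0, 1) for _ in range(dim)]
        v = [rng.gauss(0, 1) for _ in range(dim)]
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        total += abs(dot / (norm_u * norm_v))
    return total / n_pairs
```

Running `avg_abs_cosine(3)` gives a value around one half, while `avg_abs_cosine(300)` is far smaller, one concrete way our three-dimensional mental pictures fail in high dimension.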
Yeah, I want to take time to go to the higher level of abstraction, which is the discovery process: picking the right problems to solve at the right time, given our body of knowledge. And let me link this to where we are on the AI side. We've seen some tremendous progress in creativity recently.
For many, many years, people thought the domain of creativity, whether it's artistic expression through painting or music or other mediums, was really the domain of humans, and that we wouldn't see much creativity coming out of AI. Recently, we've seen a bit of an explosion of work that shows, for example, that we can create completely new images, and more recently new video, new music, and poetry, using AI. Looking at that, and now transposing it to the sphere of mathematics, where I believe formulating the problem, choosing the question, is the more creative aspect of the work.
What do you see as the potential for bringing in AI at that level? Huge potential, but it's not going to happen anytime soon. It's going to take years, because one of the main unsolved problems in AI, in my opinion, is getting machines to form mental models of the world, of the environment, of the data they are fed with, so that they have some idea of how this environment, this world, will change as they take an action.
And in the context of mathematics, the formulas, the set of things that you know are true, change with every transformation that you apply, every step in the proof that you take. So if you can have a model, a sort of abstract mental model, of what this process is, then you can plan a sequence of actions to arrive at a goal.
And we do this as humans all the time; most animals can do this as well: decompose a complex task into sequences of lower-level, more immediate actions. We don't know how to do this with AI systems yet, and until we know how to do this, I don't think we'll have full-fledged AI mathematicians.
And what are the right techniques for this? We don't know yet. Let me just add a dimension, and then I'll let you jump in, Tim. I think, Yann, you talked about having a mental model and then decomposing, that being a really important part of the discovery process.
Let me push a little bit further and say: a lot of the results we've seen on creativity are based on generative models, right? Essentially, feeding in enough data that the model can then generate new information that comes out of the distribution, but then sampling really quite widely across that distribution. So would this require building a generative model of mathematics to be able to truly inform that discovery process?
Let me leave it there and then bring you, Tim, into that discussion. I was just going to respond to something that Yann didn't say explicitly. I think there's a question that we don't know the answer to, although I have my own view about what I think the answer is, but I can't justify it fully. And that's the question of which comes first: do we first have kind of AGI capacities, with the kind of creativity that you need, and then feed that into mathematics in order to find these intermediate steps in complicated proofs and that sort of thing?
Or is the specific mathematical problem of creativity, where you are looking for intermediate steps, an easier version of the more general problem of creativity that would need to be solved for AGI? I personally think that the problem of finding proofs is, my hunch is, strictly easier than full AGI; it's a step on the way to AGI rather than something that has to wait for AGI.
And I think that, for example, what we think of as the very creative moments in proofs that some people come up with, it's not creativity in the sense of an artist having a dream and getting up in a frenzy to create some picture or something. Of course, there are lots of stories that suggest it is like that, but actually, quite a lot of what passes for creativity in mathematics is a sort of concatenation of fairly standard steps, like: let's take this statement and abstract it as much as we can before it becomes false. And that's just one of many different techniques that we use. A lot of what we do is asking questions. So we have a question that we're trying to answer, and we can't answer it.
So we ask another question that's related, and so on. And those questions don't come out of nowhere. My view is that there's a small class of question-transforming moves that we have at our disposal, and we just keep on using those, and eventually we find something that is both non-trivial and something that we can answer, and then we know we are making some progress. So I'm optimistic that that will be something that AI will be able to do. I'm not saying it's an easy thing, but I think it's not fundamentally mysterious.
I wonder if... and you know, Yann, you're a little bit of a musician in your off hours, as am I. When I hear you speak... Ah, also wonderful; we could have a whole other panel on the link between music and mathematics. But I think the interesting point you bring, to me, is that the process by which creativity is expressed in mathematics is maybe less a stroke of genius than having a rich set of tools on which to build. And in many ways, I would argue that human creativity may have the same kind of scaffolding, and may also not come out of, sort of, genuine inspiration.
And that it really arises out of that expertise. Yann, I don't know if that resonates with you, and also whether you wanted to comment on: do we achieve AGI first and then solve mathematics, or...? Well, on the point of creativity, I think people who are original creators very often invent a new scaffolding of the type that you were talking about, right? A new set of tools, and they use those tools to invent new things or produce new things, in music and visual arts, and probably in mathematics as well. I mean, also in natural science, right? A lot of advances in science are due to progress in tools: astronomy and telescopes, microscopes, and, you know, molecular biology.
So things like that, where the tools create the progress. Now, on the question you were asking before about progress in computer creativity using generative models for mathematics: I think the last piece, which is the generation, is the easy part to some extent. This is what people have either solved or are working on, turning some sort of abstract representation of a sketch of a proof into something formal, right? There is some level of achievement there. And the problem, I keep coming back to this question, is at the high level.
One way to think about those recent image generation and video generation systems, for example, is that they actually produce images or videos at a very low resolution first, as a kind of abstract thing, and then there is a different process that samples from that and produces nice-looking pictures. So that's two levels. I think we need more than two levels, but we can do the lower level in a simpler way than the top levels. Yeah, that is just an interesting aspect of how human mathematicians work.
We often, if we're reasonably experienced in some area, may see that we can prove something long before we do, even though we know that it would take 10 pages and lots of boring calculations to actually write out the full proof. That's something that, when you haven't reached that level of experience, seems very intimidating, but after a while you get used to it, because you learn to think in bigger and bigger steps, because you've had experience with certain types of problems.
And so you know that the technicalities will work out. And quite what's going on in our brain when we think in these big steps is quite difficult to fathom. That is something that comes with experience; that level of abstraction really, really increases.
That brings me to a question we haven't touched on, but which I think may be interesting, which is: what do you think is the role of AI in the learning of mathematics? Do you see a role, either to train maybe high school or lycée-level students, or the more advanced students, the graduate-level students, those who are making their first steps into research? Do you see that AI could play a role, for example, to bring in that scaffolding that permits faster access to higher levels of abstraction?
I think the potential in that direction is absolutely limitless. I mean, once you've got the AI that can find proofs... and again, it slightly depends. I think the more black-boxy it is, the harder it's going to be, because if it just says, here's the proof, admire it yourself... there's a difference between a good teacher and a sort of crutch.
But people have already been working on using language models to produce not just proofs, but proofs together with accounts of the thought process: tell me the steps you used, that sort of thing. So I think that will come as well. And once that technology is there, I think we've definitely got the potential to turn it into remarkable educational tools, tools that can respond to mistakes that people make and ask them questions that will help them clear up their misunderstandings, and that sort of thing.
I hope that we'll reach a stage where we can have a sort of golden age of mathematical teaching, and I won't be needed as a maths teacher anymore, because computers can do the job. Yeah. Yeah, following on from that:
I think the arrival of computer mathematicians will perhaps change the activity of mathematics, not just for professional mathematicians, but also for students, the same way the pocket calculator has changed what students can do. It used to be that you could be a mathematician and spend your career computing logarithm tables. And, you know, not anymore, right, because we have computers to do this now.
And similarly, we used to spend a lot of time in school learning how to get logarithms from a table and then interpolate between the entries; this was an exam in eighth grade or something like that. Those things have disappeared because we have pocket calculators. And so I imagine that the sort of low-level steps of mathematical proofs will also disappear, and people will be able to concentrate on what's more interesting, perhaps, which is the more conceptual aspects of mathematics. Not the mechanics of how you solve a linear equation or a quadratic equation, but what it means, really, what's the intuition behind it, right?
For example, the intuition behind quadratic equations and, you know, curves that intersect the x-axis is something that you might still want to build, even if you don't need to learn the formula by heart. I mean, it's useful, perhaps, to show how you derive it, but then you can always look it up. So I think it's going to change what students learn, eventually, and it's going to also change the type of activities that mathematicians work on. But I agree with Tim, the potential is somewhat limitless; the question is whether we'll be able to explore it.
And I will add, you know, I think all three of us have done a fair bit of teaching, that the part that is especially appealing in the short term is the ability to do the marking and the correction automatically, so the verification of the work. It's one thing to teach, but it's another thing to verify, a hundred times over, proofs that students have done with various degrees of success. And the more enticing thing would be personalized tutoring, right?
Yes, and we can certainly count on that, I hope, pretty soon. We're really near the end of our time together, and I want to broaden this out. If we project, if we have a similar conversation in five years, or even in 50 years, you know, pick a time horizon that inspires you: what do you think will be different? Where do you think we will be, both in terms of acknowledging the successes and what the remaining open problems will be? I'd love to hear each of you on that.
Well, so the question is when will there be a new theorem that no human thought about that would be proved by a computer, by one of those automated proof systems, and it's not clear. It may happen faster than we think, or it may be much more difficult than we think. And then there is the question that you were asking, Joëlle, for which Tim had an answer: do we need to have sort of human-level intelligence systems before we get computer mathematicians? And the answer is yes and no, because we can still do a lot of useful things with computers without them having human-level intelligence or superhuman intelligence.
And then, to get them to do things that are at that human level or better, we can engineer it. Okay, so this is what's happening, for example, in the autonomous driving domain: the systems that we have don't learn nearly as efficiently as humans or even animals, but we just engineer the hell out of it, and eventually we'll have a system that's, you know, heavily engineered, that kind of works well enough. It's very expensive, it takes a lot of time. But then there's going to be another step, right, some years or decades in the future, where we have much better learning systems that learn pretty much like humans and animals, that can form this kind of abstraction, that can build those models of the world, or mental models, that we were talking about earlier.
And then they may become better than humans at all the tasks where humans are good. And we'll have, you know, self-driving cars, mathematician virtual assistants, domestic robots, and all the stuff that science fiction imagined for us. When that happens, how long is that going to take? I have no idea, but we're working on it. Promising days, and then the three of us will be playing in a band together while the AI is solving the rest of the problems.
Tim, what is your prediction for us? I feel reasonably confident that in 50 years' time, the whole landscape of mathematics will have been completely transformed. But in the near term, for the reasons that Yann already mentioned, I find it a little bit less clear. I think a very interesting milestone will be: at what stage does an AI of whatever kind manage to solve the sort of problem that you get in math olympiads, where it doesn't fit into a well-known category of problems? Such as a geometry problem where, when you put in the right construction line, it becomes easy or something like that, because those problems are amenable to just a brute-force search. Or some inequalities, where you have some expression and you want to rearrange it so it looks like a sum of squares or something like that; that's something you can search for.
But there's another style of problem where you have to have the right idea, some kind of out-of-the-way-seeming idea. If you're a mathematician experienced in those sorts of problems, you kind of worry away at the problem, you dig and dig and dig, and eventually you see the idea, the sort of key that unlocks the problem. Having an AI that can do that kind of problem solving: once we get to that, it's not clear that you can't just build on it and go much, much further and really solve interesting math problems. We need one genuinely interesting problem solved that doesn't fit into a kind of brute-forceable search class of problems. When we get to that, I think progress will be extremely rapid after that, probably.
Well, I do hope we have an opportunity for that discussion in a few years. It's been great to have both of you on the panel. Thank you so much, Timothy Gowers, Yann LeCun. It's been a pleasure to talk about AI and mathematics today. Thank you very much. Thank you.