Maybe we're in a kind of meta-Turing test.

Probably it would be a good idea not to tell you, so it doesn't change your behavior, right? This is a kind of Heisenberg uncertainty principle situation: if I told you, you'd behave differently.

Maybe that's what's happening with us, of course.

This is a benchmark from the future, where they replay 2022 as a year before AIs were good enough yet, and now we want to see, is it gonna pass?

Exactly. If I was such a program, would you be able to tell, do you think? So to the Turing test question, you've talked about the benchmark for solving intelligence. What would be the impressive thing? You've talked about an AI system winning a Nobel Prize, but I still return to the Turing test as a compelling test. The spirit of the Turing test is a compelling test.

Yeah, the Turing test, of course, has been unbelievably influential, and Turing's one of my all-time heroes. But I think if you look back at the 1950 paper, his original paper, and read the original, you'll see that I don't think he meant it to be a rigorous formal test. I think it was more like a thought experiment, almost a bit of philosophy he was writing, if you look at the style of the paper. And you can see he didn't specify it very rigorously. So, for example, he didn't specify the knowledge that the expert or judge would have, or how much time they would have to investigate. These are important parameters if you were gonna make it a true formal test. And by some measures, people claimed the Turing test was passed a decade or so ago; I remember someone claiming that with a very bog-standard, normal logic model, because they pretended it was a kid, so the judges thought that the machine was a child. That would be very different from an expert AI person interrogating a machine and knowing how it was built and so on. So I think we should probably move away from that as a formal test and move more towards a general test, where we test the AI's capabilities on a range of tasks and see if it reaches human-level or above performance on maybe thousands, perhaps even millions of tasks, and covers the entire cognitive space.
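As a rough illustration of that kind of general test, here is a minimal Python sketch of scoring an agent across a large task suite against human baselines. The `Task` fields, `run_agent`, and `human_baseline` are hypothetical names invented for this sketch, not any benchmark DeepMind actually uses.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    """One task in a hypothetical broad-coverage test suite."""
    name: str
    run_agent: Callable[[], float]   # runs the agent on this task, returns its score
    human_baseline: float            # average human score on the same task

def generality_score(tasks: List[Task]) -> float:
    """Fraction of tasks on which the agent matches or beats the human baseline."""
    at_or_above = sum(1 for t in tasks if t.run_agent() >= t.human_baseline)
    return at_or_above / len(tasks)

# With thousands or millions of tasks spanning the cognitive space,
# a score approaching 1.0 is one operational reading of "human-level generality".
```

The only point of the sketch is that "human level or above on thousands of tasks" can be made operational as a coverage ratio over a task suite, rather than a single pass/fail judgment by one interrogator.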
So I think for its time, it was an amazing thought experiment. And also, 1950 was obviously barely the dawn of the computer age, so of course he only thought about text, and now we have a lot more different inputs.

So, yeah, maybe the better thing to test is the generalizability across multiple tasks. But I think it's also possible, as systems like GATO show, that eventually that might map right back to language. So you might be able to demonstrate your ability to generalize across tasks by then communicating your ability to generalize across tasks, which is kind of what we do through conversation anyway, when we jump around. Ultimately, what's in that conversation is not just you moving around knowledge, it's you moving around these entirely different modalities of understanding that ultimately map to your ability to operate successfully in all of these domains, which you can think of as tasks.

Yeah, I think certainly we as humans use language as our main generalization and communication tool. So I think we end up thinking in language and expressing our solutions in language, so it's gonna be a very powerful mode in which to explain the system, to explain what it's doing. But I don't think it's the only modality that matters. So I think there are a lot of different ways to express capabilities other than just language.

Yeah: visual, robotics, body language.

Yeah, actions, the interactive aspect of all that, that's all part of it. But what's interesting with GATO is that it's sort of pushing prediction to the maximum, in terms of mapping arbitrary sequences to other sequences and just predicting what's gonna happen next.

So prediction seems to be fundamental to intelligence, and what you're predicting doesn't so much matter.

Yeah, it seems like you can generalize that quite well. So obviously, language models predict the next word; GATO predicts potentially any action or any token. And it's just the beginning, really. It's our most general agent, one could call it, so far. But that itself can be scaled up massively, more than we've done so far, and obviously we're in the middle of doing that.
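To make the "predict the next token, whatever it encodes" idea concrete, here is a toy sketch, not GATO's actual code. The disjoint token ranges standing in for text, image patches, and actions, and the tiny GRU used in place of GATO's transformer, are all assumptions made just to keep the example short and runnable.

```python
import torch
import torch.nn as nn

# Toy vocabulary: disjoint id ranges stand in for tokens from different
# modalities, e.g. text pieces, discretised image patches, discretised actions.
TEXT, IMAGE, ACTION = range(0, 100), range(100, 200), range(200, 232)
VOCAB_SIZE = 232

class TinySequenceModel(nn.Module):
    """A deliberately tiny autoregressive model: embed tokens, run a recurrent
    core, and predict a distribution over the next token, whatever it encodes."""
    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.core = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        hidden, _ = self.core(self.embed(tokens))
        return self.head(hidden)  # next-token logits at every position

# One interleaved episode: a few text tokens, an image-patch token,
# an action token, then more text.
episode = torch.tensor([[5, 17, 101, 150, 201, 9, 42]])

model = TinySequenceModel(VOCAB_SIZE)
logits = model(episode[:, :-1])          # predictions for positions 1..N
targets = episode[:, 1:]                 # the tokens that actually came next
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB_SIZE), targets.reshape(-1)
)
loss.backward()  # one next-token objective, regardless of the token's modality
```

The design point it illustrates is that the training objective, cross-entropy on the next token, is identical no matter which modality the token came from.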
But the big part of solving AGI is creating benchmarks that help us get closer and closer, creating benchmarks that test the generalizability. And it's just still interesting that this fella, Alan Turing, was one of the first and probably still one of the only people that was trying, maybe philosophically, but was trying to formulate a benchmark that could be followed. It is, even though it's fuzzy, still sufficiently rigorous that you can actually run the test. And I still think something like the Turing test will, at the end of the day, be the thing that truly impresses other humans, so that you can have a close friend who's an AI system. For that friend to be a good friend, they're going to have to be able to play StarCraft, and they're going to have to do all of these tasks: get you a beer, so the robotics tasks, play games with you, use language, humor, all of those kinds of things. But that ultimately can boil down to language. It feels like, not in terms of the AI community, but in terms of the actual impact of general intelligence on the world, it feels like language will be the place where it truly shines.

I think so, because it's such an important kind of input and output for us. I think you're right. I think the Turing test, the philosophy behind it, which is the idea of whether a machine can mimic the behaviors of a human, and I would say wider than just language and text, in terms of actions and everything else, creativity, all these things: if it can match or exceed human cognitive capabilities, then I think we have a true intelligence. So from that perspective, you're right. I think he did formulate the right kind of setup.

I just think there'll be a kind of humor in the AI systems of the future, looking back to this conversation and thinking about the Turing test, and also thinking about how, by that time, they would know which year they were finally able to cross the threshold of human-level intelligence, and how funny it is that we humans were still confused about this whole problem that had long ago been solved. Well, hello to the AI agents of the future. Anyway, going back to your journey, when did you fall in love with programming first?

Well, at a pretty young age, actually. So I started off, actually games were my first love, starting to play chess when I was around four years old. And then it was actually with winnings from a chess competition that I managed to buy my first computer when I was about eight years old. It was a ZX Spectrum, which was hugely popular in the UK at the time, and it was an amazing machine, because I think it trained a whole generation of programmers in the UK, because it was so accessible. You literally switched it on, and there was the BASIC prompt, and you could just get going. And my parents didn't really know anything about computers, but because it was my money from a chess competition, I could say I wanted to buy it. And then I just went to bookstores, got books on programming, and started typing in the programming code. And then, of course, once you start doing that, you start adjusting it and then making your own games, and that's when I fell in love with computers and realised what magical devices they were. In a way, I wouldn't have been able to explain this at the time, but I felt that they were almost a magical extension of your mind. I always had this feeling, and I've always loved this about computers: you can set them off doing something, some task for you, you can go to sleep, come back the next day, and it's solved. That feels magical to me. So, I mean, all machines do that to some extent. They all enhance our natural capabilities. Obviously, cars allow us to move faster than we can run, but this was a machine to extend the mind. And then, of course, AI is the ultimate expression of what a machine may be able to do or learn. So, very naturally for me, that thought extended into AI quite quickly.

Do you remember the programming language that you first started with?

Yeah.

Was it special to the machine?

No, it was just BASIC. I think it was just BASIC on the ZX Spectrum, I don't know what specific form it was. And then later on, I got a Commodore Amiga, which was a fantastic machine.

Now you're just showing off.

So, yeah, well, lots of my friends had Atari STs, and I managed to get an Amiga. It was a bit more powerful, and that was incredible. I used to do programming in Assembler and also AMOS BASIC, this specific form of BASIC. It was incredible, actually. So, I learned all my coding skills