“Frightening when you see it”: AI pioneer Geoffrey Hinton on AI models

Geoffrey Hinton lives in a house on a pretty street in North London. The call with MIT Technology Review came four days before he announced he was leaving Google – news that quickly spread around the world. Hinton is a pioneer of deep learning who helped develop some of the key techniques at the heart of modern artificial intelligence. After leaving university, he worked for the Internet giant for ten years. Now he wants out, and for a very specific reason: he is worried about the future with AI.

Hinton says he’s amazed at what large language models (LLMs) like GPT-4, the model behind the current ChatGPT, can do. But he also sees serious risks in the technology – a technology that would hardly be where it is today without him.

The conversation started at Hinton’s kitchen table, but the British-Canadian AI veteran was pacing the entire time. Having been plagued by chronic back pain for years, Hinton almost never sits down. For the next hour, he could be seen pacing from one end of the room to the other, bobbing his head as he spoke. He had a lot to say.

The 75-year-old computer scientist, who shared the 2018 Turing Award with Yann LeCun and Yoshua Bengio for his work on deep learning – specifically deep neural networks, or DNNs for short – said he is now ready to change gears. “I’m getting too old for technical work where you have to remember a lot of details,” he told me. “I’m still good, but I’m not as good as I used to be, and of course that’s annoying.” But that’s not the only reason he’s leaving Google. Hinton now wants to spend his time on what he calls “more philosophical work”, focusing on the small but very real danger that the development of AI could prove to be a catastrophe for humankind.

Once Hinton has left Google, he can speak his mind without the self-censorship expected of an executive. “I want to talk about AI safety issues without worrying about how this impacts Google’s business,” he says. “As long as I’m being paid by the company, I can’t do that.” That doesn’t mean Hinton is unhappy with Google. “It may surprise you,” he says, “but there are a lot of good things I can say about Google. And that’s a lot more credible when I’m no longer at Google.”

Hinton’s perspective changed significantly with the new generation of large language models, notably OpenAI’s GPT-4, which came out in March. It made him realize, he says, that machines are on their way to becoming a lot smarter than he thought – and it worries him how that might develop. “These things are completely different from us,” he says. “Sometimes I think it’s as if aliens had landed and people didn’t notice because they speak English very well.”

Hinton is best known for his work on a technique called backpropagation, which he – along with two colleagues – proposed in the 1980s. In short, this is the algorithm that allows machines to really learn. It underlies almost all deep neural networks today, from computer vision systems for image recognition to large language models. It wasn’t until the 2010s that the power of neural networks trained with backpropagation really reached the point where they could be put to good use. Working with some students, Hinton then showed that the technique was better than anything else when it came to getting a computer to identify objects in images. They also trained a neural network that could predict the next few letters in a sentence, a precursor to today’s large language models.
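To make the mechanism concrete, here is a minimal sketch of backpropagation in Python. The tiny network, the XOR toy data, and the learning rate are all invented for illustration – this is not Hinton’s original 1986 formulation, just the same idea in miniature: the prediction error flows backwards through the layers, and every connection strength is nudged to reduce it.

```python
import numpy as np

# A minimal sketch of learning by backpropagation: a one-hidden-layer
# network trained on XOR. Architecture, data, and learning rate are
# invented for this illustration.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy task: XOR, which a network without a hidden layer cannot solve.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# The "knowledge" of the network lives entirely in these numbers.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

lr = 0.5
for _ in range(20000):
    # Forward pass: compute the network's current answer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: send the error back through the network (chain
    # rule), producing a gradient for every weight and bias.
    d_out = (out - y) * out * (1 - out)        # error through output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)         # error through hidden sigmoid

    # Nudge every connection strength slightly downhill on the error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0] (may vary with initialization)
```

After training, everything the network “knows” about XOR is encoded in those weights and biases – exactly the kind of connection-strength learning described in this article.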

One of those graduate students was Ilya Sutskever, who later co-founded OpenAI, led the development of ChatGPT, and is now chief scientist there. “There were early inklings that this thing could be amazing,” says Hinton. “But it took us a long time to realize that to be really good, it had to be done on a really big scale.” In the 1980s, neural networks were more of a joke. The prevailing idea of artificial intelligence at the time, so-called symbolic AI, assumed that intelligence consisted primarily of processing symbols such as words or numbers.

Hinton wasn’t convinced by that approach at the time. He worked on neural networks instead: software abstractions of brains in which neurons and the connections between them are represented by code. By changing the numbers that represent those connections – the weights – such a neural network can be “rewired” on the fly. In other words, it can be made to learn.
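As a toy illustration (the numbers below are made up), a “neuron” in such an abstraction can be nothing more than a weighted sum of its inputs squashed into a firing rate, and “rewiring” the network simply means overwriting those numbers:

```python
import numpy as np

# One artificial neuron: a weighted sum of its inputs, squashed to a
# value between 0 and 1. The weights below are arbitrary examples.
def neuron(inputs, weights):
    return 1.0 / (1.0 + np.exp(-np.dot(inputs, weights)))

x = np.array([1.0, 0.0, 1.0])                  # some fixed input

print(neuron(x, np.array([0.2, -0.4, 0.1])))   # one set of connections...
print(neuron(x, np.array([2.0, -0.4, -3.0])))  # ...rewired: same input, new behavior
```

Learning algorithms such as backpropagation (sketched above) do exactly this, except that the new numbers are chosen automatically so that the network’s errors shrink.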

“My father was a biologist, so I was thinking in biological terms,” says Hinton. And symbolic thinking is clearly not at the core of biological intelligence. “Crows can solve puzzles, but they don’t have language. They don’t do it by storing strings of characters and manipulating them. They do it by changing the strength of the connections between neurons in their brain. So it has to be possible to learn complicated things by changing the strength of the connections in an artificial neural network.”

For 40 years, Hinton saw artificial neural networks as a poor imitation of biological ones. Now he thinks that has changed: in trying to mimic biological brains, he says, we have developed something very special. “It’s frightening when you see it,” he says. “The switch flips all of a sudden.” Hinton’s fears will sound like science fiction to many readers. But his reasoning is worth hearing.
