Geoffrey Hinton, Godfather of AI, Fears for Humanity’s Fate

ODSC - Open Data Science
3 min read · Oct 13, 2023

Recently, Geoffrey Hinton, the Godfather of AI, sat down with 60 Minutes to discuss AI and his worries for humanity’s future. During Sunday’s episode, Hinton broke down what AI could mean for the human race in the coming years, covering both its positive and negative aspects.

Geoffrey Hinton is a computer scientist and cognitive psychologist known for his work on neural networks, and he spent the better part of a decade at Google. He left this past May, due in part to his concerns about the risks of AI.

During the segment, 60 Minutes interviewer Scott Pelley flatly asked Geoffrey Hinton if humanity knew what it was doing with AI. Hinton’s response came in two parts: “No…I think we’re moving into a period when, for the first time, we have things more intelligent than us.”

Hinton went on to explain that, in his view, most advanced AI systems have some understanding and are making decisions based on their own experiences. This is where Pelley asked the million-dollar question: do AI systems have consciousness?

In his reply, Hinton said that consciousness is unlikely at AI’s current stage, but that “in time” it would happen. His concern centers on what happens when that time comes: humanity could find itself the second most intelligent species on Earth.

Pelley then asked how AI could be better at learning than humans if it was designed by humans. Hinton corrected the misunderstanding: “No, it wasn’t. What we did was, we designed the learning algorithm. That’s a bit like designing the principle of evolution…But when this learning algorithm then interacts with data, it produces complicated neural networks that are good at doing things. But we don’t understand exactly how they do those things.”
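That distinction, designing the learning rule rather than the network itself, is easy to make concrete. The sketch below is a minimal illustration (not code from the interview; the XOR toy task and every name in it are our own assumptions): plain gradient descent training a tiny network. The only thing written by hand is the update rule; the weights that end up solving the task emerge from the data.

```python
# Minimal sketch of Hinton's point: we hand-design only the learning
# rule (here, gradient descent), not the network's behavior. The task
# (XOR) and all names are illustrative, not from the interview.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a classic task a linear model cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A tiny two-layer network with randomly initialized weights.
W1 = rng.normal(0, 1, (2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate for the hand-designed update rule
for step in range(5000):
    # Forward pass through the network.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of squared error w.r.t. each weight.
    dp = (p - y) * p * (1 - p)
    dW2 = h.T @ dp
    db2 = dp.sum(axis=0, keepdims=True)
    dh = dp @ W2.T * h * (1 - h)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0, keepdims=True)

    # Gradient descent: the same simple rule, applied over and over.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# The trained network now solves XOR, yet no weight was written by
# hand -- the solution emerged from the data.
print(np.round(p, 2))
```

Nothing in the final weight matrices was specified by a programmer, which is exactly why, at the scale of modern systems, their inner workings are so hard to interpret.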

Hinton also pointed to the good that will come from AI. Much of it has to do with processing structured and unstructured information, which in the medical field would be quite beneficial when it comes to early diagnoses.

But of course, there were plenty of risks he wanted to mention as well. The first concerns how AI works: because neural networks are designed to be loosely similar to human brains, there are limits to how fully we can understand what they are doing.

He said in part, “We have a very good idea sort of roughly what it’s doing…But as soon as it gets complicated, we don’t know what’s going on any more than we know what’s going on in your brain.”

Then there’s the risk of AI escaping containment and operating under its own guidance, or to put it plainly, evolving without human control. As Hinton put it, “One of the ways these systems might escape control is by writing their own computer code to modify themselves. And that’s something we need to seriously worry about.”

This was a wide-ranging interview, and if you’re interested, you can watch the full segment on 60 Minutes.

