Ronald T. Kneusel, Author of “How AI Works: From Sorcery to Science” – Interview Series


We recently received an advance copy of the book “How AI Works: From Sorcery to Science” by Ronald T. Kneusel. I've so far read over 60 books on AI, and while some of them do get repetitive, this book managed to offer a fresh perspective. I enjoyed it enough to add it to my personal list of the Best Machine Learning & AI Books of All Time.

“How AI Works: From Sorcery to Science” is a succinct and clear-cut book designed to explain the core fundamentals of machine learning. Below are some questions we asked author Ronald T. Kneusel.

This is your third AI book, the first two being: “Practical Deep Learning: A Python-Based Introduction,” and “Math for Deep Learning: What You Need to Know to Understand Neural Networks”. What was your initial intention when you set out to write this book?

Different target audience.  My previous books are meant as introductions for people interested in becoming AI practitioners.  This book is for general readers, people who are hearing much about AI in the news but have no background in it.  I want to show readers where AI came from, that it isn’t magic, and that anyone can understand what it is doing.

While many AI books tend to generalize, you’ve taken the opposite approach of being very specific in teaching the meaning of various terminology, and even explaining the relationship between AI, machine learning, and deep learning. Why do you believe that there is so much societal confusion between these terms?

To understand the history of AI and why it’s everywhere we look now, we need to understand the distinction between the terms, but in popular use, it’s fair to use “AI” knowing that it refers primarily to the AI systems that are transforming the world so very rapidly.  Modern AI systems emerged from deep learning, which emerged from machine learning and the connectionist approach to AI.

The second chapter dives deep into the history of AI, from the myth of Talos, a giant robot meant to guard a Phoenician princess, to Alan Turing’s 1950 paper, “Computing Machinery and Intelligence”, to the advent of the deep learning revolution in 2012. Why is a grasp of the history of AI and machine learning instrumental to fully understanding how far AI has evolved?

My intention was to show that AI didn’t just fall from the sky.  It has a history, an origin, and an evolution.  While the emergent abilities of large language models are a surprise, the path leading to them isn’t.  It’s one of decades of thought, research, and experimentation.

You’ve devoted an entire chapter to understanding legacy AI systems such as support vector machines, decision trees, and random forests. Why do you believe that fully understanding these classical AI models is so important?

AI as neural networks is merely (!) an alternate approach to the same kind of optimization-based modeling found in many earlier machine learning models.  It’s a different take on what it means to develop a model of some process, some function that maps inputs to outputs.  Knowing about earlier types of models helps frame where current models came from.
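To make that point concrete, here is a minimal sketch (my example, not one from the book) using the scikit-learn library. A decision tree and a small neural network are fit to the same data, and both end up as learned functions mapping inputs (flower measurements) to outputs (species labels); the dataset and hyperparameters are arbitrary choices for illustration:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# A small, classic dataset: inputs are flower measurements,
# outputs are species labels.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two very different model families, same job: learn a function
# that maps inputs to outputs.
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_train, y_train)

print("decision tree accuracy:", tree.score(X_test, y_test))
print("neural network accuracy:", net.score(X_test, y_test))

The two models differ in mechanism, not in job description, which is exactly the framing Kneusel gives above.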

You state your belief that the large language model behind OpenAI’s ChatGPT is the dawn of true AI. What, in your opinion, was the biggest game-changer between this and previous methods of tackling AI?

I recently viewed a video from the late 1980s of Richard Feynman attempting to answer a question about intelligent machines.  He stated he didn’t know what sort of program could act intelligently. In a sense, he was talking about symbolic AI, where the mystery of intelligence is finding the magic sequence of logical operations, etc., that enable intelligent behavior.  I used to wonder, like many, about the same thing – how do you program intelligence?

My belief is that you really can’t.  Rather, intelligence emerges from sufficiently complex systems capable of implementing what we call intelligence (i.e., us).  Our brains are vastly complex networks of basic units.  That’s also what a neural network is.  I think the transformer architecture, as implemented in LLMs, has somewhat accidentally stumbled across a similar arrangement of basic units that can work together to allow intelligent behavior to emerge.

On the one hand, it’s the ultimate Bob Ross “happy accident,” while on the other, it shouldn’t be too surprising once an arrangement of basic units, with allowed interactions capable of enabling emergent intelligent behavior, has been found.  It seems clear now that transformer models are one such arrangement.  Of course, this begs the question: what other such arrangements might there be?

Your take-home message is that modern AI systems (LLMs) are, at their core, simply neural networks trained by backpropagation and gradient descent. Are you personally surprised at how effective LLMs are?

Yes and no.  I am continually amazed by their responses and abilities as I use them, but referring back to the previous question, emergent intelligence is real, so why wouldn’t it emerge in a sufficiently large model with a suitable architecture?  I think researchers as far back as Frank Rosenblatt, if not earlier, likely thought much the same.
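For readers who want to see that core recipe in miniature, here is a sketch (again mine, not the book's) of a one-hidden-layer network learning the XOR function with nothing but backpropagation and gradient descent, using NumPy. The hyperparameters (8 hidden units, learning rate 0.5, 5,000 steps) are arbitrary illustrative choices:

import numpy as np

# A tiny one-hidden-layer network trained on XOR using only
# backpropagation and gradient descent -- the same core recipe that,
# scaled up enormously, trains an LLM.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: backpropagate the squared-error gradient.
    dp = (p - y) * p * (1 - p)
    dW2 = h.T @ dp;        db2 = dp.sum(axis=0)
    dh = (dp @ W2.T) * h * (1 - h)
    dW1 = X.T @ dh;        db1 = dh.sum(axis=0)
    # Gradient descent: nudge every weight downhill.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2))  # approaches [[0], [1], [1], [0]]

Everything beyond this, from word embeddings to transformer attention, is elaboration on the same optimization loop.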

OpenAI’s mission statement is “to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.” Do you personally believe that AGI is achievable?

I don’t know what AGI means any more than I know what consciousness means, so it’s difficult to answer.  As I state in the book, there may well come a point, very soon now, where it’s pointless to care about such distinctions – if it walks like a duck and quacks like a duck, just call it a duck and get on with it.

Cheeky answers aside, it is entirely within the realm of possibility that an AI system might, someday, satisfy many theories of consciousness.  Do we want fully conscious (whatever that really means) AI systems?  Perhaps not.  If it’s conscious, then it is like us and, therefore, a person with rights – and I don’t think the world is ready for artificial persons.  We have enough trouble respecting the rights of our fellow human beings, let alone those of any other kind of being.

Was there anything that you learned during the writing of this book that took you by surprise?

Beyond the same level of surprise everyone else feels at the emergent abilities of LLMs, not really.  I learned about AI as a student in the 1980s.  I started working with machine learning in the early 2000s and was involved with deep learning as it emerged in the early 2010s.  I witnessed the developments of the last decade firsthand, along with thousands of others, as the field grew dramatically from conference to conference.

Thank you for the great interview; readers may also want to take a look at my review of this book. The book is available at all major retailers, including Amazon.

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.