

Patrick M. Pilarski, Ph.D. Canada CIFAR AI Chair (Amii) – Interview Series

Credit to: Chris Onciul / Amii

Dr. Patrick M. Pilarski is a Canada CIFAR Artificial Intelligence Chair, past Canada Research Chair in Machine Intelligence for Rehabilitation, and an Associate Professor in the Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Alberta.

In 2017, Dr. Pilarski co-founded DeepMind's first international research office, located in Edmonton, Alberta, where he served as office co-lead and a Senior Staff Research Scientist until 2023. He is a Fellow and Board of Directors member with the Alberta Machine Intelligence Institute (Amii), co-leads the Bionic Limbs for Improved Natural Control (BLINC) Laboratory, and is a principal investigator with the Reinforcement Learning and Artificial Intelligence Laboratory (RLAI) and the Sensory Motor Adaptive Rehabilitation Technology (SMART) Network at the University of Alberta.

Dr. Pilarski is the award-winning author or co-author of more than 120 peer-reviewed articles and a Senior Member of the IEEE; his work has been supported by provincial, national, and international research grants.

We sat down for an interview at Upper Bound 2023, the annual AI conference held in Edmonton, AB, and hosted by Amii (the Alberta Machine Intelligence Institute).

How did you find yourself in AI? What attracted you to the industry?

Those are two separate questions. In terms of what attracts me to AI, there's something beautiful about how structure can emerge out of complexity. Intelligence is one of the amazing examples of that, whether it's coming from biology or from watching elaborate behavior emerge in machines. That has fascinated me for a very long time, and my long, winding trajectory to the area of AI I work in now, machines that learn through trial and error, reinforcement learning systems that interact with humans while both are immersed in the stream of experience and the flow of time, came through all sorts of different plateaus. I studied how machines and humans could interact in terms of biomechatronic devices and biotechnology, things like artificial limbs and prostheses.

I looked at how AI can be used to support medical diagnostics, how we can use machine intelligence to start to understand patterns that lead to disease, or how different diseases might present in recordings from a machine. But that's all part of this long-winded drive to really appreciate how you might get very complex behaviors out of very simple foundations. That's what I really love about reinforcement learning especially: the idea that the machine can embed itself within the flow of time and learn from its own experience to exhibit very complex behaviors and capture the complex phenomena in the world around it. That's been a driving force.

As for the mechanics of it, I actually did a lot of sports medicine training back in high school. I studied sports medicine, and now here I am working in an environment where I look at how machine intelligence and rehabilitation technologies come together to support people in their daily lives. It's been a very interesting journey: the side fascination with complex systems and complexity, and then the very practical question of how humans can be better supported to live the lives they want to live.

How did sports initially lead you to prosthetics?

What's really interesting about fields like sports medicine is looking at the human body and how someone's unique needs, whether sporting or otherwise, can be supported by other people, by procedures and processes. Bionic limbs and prosthetic technologies are about building devices, systems, and technology that help people live the lives they want to live. These two things are really tightly connected. It's actually really exciting to come full circle and have some of those much earlier interests come to fruition in co-leading a lab where we look at exactly that, and especially at machine learning systems that work, in a tightly coupled way, with the person they're designed to support.

You’ve previously discussed how a prosthetic adapts to the person instead of the person adapting to the prosthetic. Could you talk about the machine learning behind this?

Absolutely. Throughout the history of tool use, humans have adapted ourselves to our tools and then adapted our tools to the needs that we have, so there's this iterative process of us adapting to our tools. Right now we're at an inflection point; you've maybe heard me say this before if you've seen some of the talks I've given. We are at this important point in history where we can now imagine building tools that bring in some of the hallmarks of human intelligence: tools that will actually adapt and improve while they're being used by a person. The underlying technologies support continual learning, systems that can continually learn from an ongoing stream of experience. In this case, reinforcement learning and the mechanisms that underpin it, things like temporal difference learning, are really critical to building systems that can continually adapt while they're interacting with a person and while they're in use, supporting that person in their daily life.

Could you define temporal difference learning?

Absolutely. What I really like about this is that we can think about the core technologies: temporal difference learning and the fundamental prediction learning algorithms that underpin much of what we work on in the lab. You have a system that, much like we do, is making a prediction about what the future is going to look like with respect to some signal. Usually that signal is future reward, but it could be any other signal you might imagine: how much force am I exerting right now? How hot is it going to be? How many donuts am I going to have tomorrow? These are all things you might imagine predicting. The core algorithm looks at the difference between my guess about what's going to happen now and my guess about what's going to happen in the future, along with whatever signal I'm currently receiving.

Take the example of how much force a robot arm is exerting as it lifts a cup of coffee or a cup of water. The system compares its prediction about the force it will be exerting right now, or over some period into the future, against its later expectations and the force it's actually exerting. Put those together and you get an error, the temporal difference error: the difference between temporally extended forecasts of the future and the signal actually received, which you can then use to update the structure of the learning machine itself.

Again, for conventional reinforcement learning based on reward, this could mean updating the way the machine acts based on the future reward it expects to receive. For a lot of what we do, it means looking at other kinds of signals using generalized value functions, which adapt the reinforcement learning process, temporal difference learning on reward signals, to any kind of signal of interest that might be relevant to the operation of the machine.
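To make the idea concrete, here is a minimal sketch of a temporal difference update for a linear generalized value function, where the predicted signal (the cumulant) can be reward or any other signal of interest, such as grip force. The feature vectors, step size, and discount below are illustrative assumptions, not details drawn from the interview or from Dr. Pilarski's lab.

```python
import numpy as np

def td_update(w, x, x_next, cumulant, gamma=0.9, alpha=0.1):
    """One TD(0) step for a linear generalized value function (GVF)."""
    prediction_now = np.dot(w, x)        # the guess made right now
    prediction_next = np.dot(w, x_next)  # the guess made one step later
    # Temporal difference error: the signal just received plus the
    # discounted next guess, minus the current guess.
    delta = cumulant + gamma * prediction_next - prediction_now
    # Move the weights to reduce that error.
    return w + alpha * delta * x

# Illustrative use: predicting the grip force a robot arm will exert.
w = np.zeros(8)
x, x_next = np.random.rand(8), np.random.rand(8)  # stand-in feature vectors
measured_force = 2.5                               # stand-in sensor reading
w = td_update(w, x, x_next, measured_force)
```

Swapping the cumulant from reward to any other measurable signal is exactly what turns an ordinary value function into a generalized one.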

You often talk about a prosthetic called the Cairo Toe in your presentations. What does it have to teach us?

The Cairo Toe. University of Basel, LHTT. Image: Matjaž Kačičnik

I like using the example of the Cairo Toe, a 3,000-year-old prosthesis. I work in the area of neuroprosthetics, where we now see very advanced robotic systems that can in some cases offer the same degrees of control as biological body parts. And yet I go back to a very stylized wooden toe from 3,000 years ago. What's neat is that it's an example of humans extending themselves with technology. What we're seeing right now in terms of neuroprosthetics and human-machine interaction is not something weird, new, or wacky. We have always been tool users, and non-human animals use tools as well; there are many great books on this, especially Frans de Waal's “Are We Smart Enough to Know How Smart Animals Are?”.

This extension of ourselves, the augmentation and enhancement of ourselves through the use of tools, is not something new; it is something ancient. It has been happening since time immemorial on the very land we're on right now, by the people who lived here. The other interesting thing about the Cairo Toe is that the evidence, at least from the scholarly reports on it, shows that it was adapted multiple times over the course of its interactions with its user. They actually went in and customized it, changed it, modified it during its use.

My understanding is that it was not just a fixed tool attached to a person during their lifetime; it was attached but also modified. It's an example of how the idea that tools are adapted over a sustained span of use is itself quite ancient. It's not something new, and there are lots of lessons we can learn from the co-adaptation of people and tools over many, many years.

You’ve previously mentioned the feedback pathway between a prosthetic and the human. Could you elaborate on this feedback pathway?

We're also in a special time in terms of how we view the relationship between a person and the machine that aims to support them in their daily life. When someone is using an artificial limb, let's say someone with limb difference or an amputation, traditionally they will be using it very much like a tool, like an extension of their body, and we'll see them largely relying on what we consider the control pathway: some sense of their will or intent is passed down to the device, which is then tasked with figuring out what that intent is and executing upon it, whether that's opening and closing a hand, bending an elbow, or creating a pinch grip to grab a key. We often don't see people studying or considering the feedback pathway.

For a large number of artificial limbs that you might see deployed commercially, the pathway of information flowing from the device back to the person might be the mechanical coupling, the way the person actually feels the forces of the limb and acts upon them. It might be them hearing the whirring of the motors, or watching as they pick up a cup and move it across a desk or grab it from another part of their workspace. Those pathways are the traditional way of doing it. There are amazing things happening across the globe to look at how information might be better fed back from an artificial limb to the person using it. Even here in Edmonton, there's a lot of really cool work using the rewiring of the nervous system, targeted reinnervation and other approaches, to support that pathway. But it is still a very hot, emerging area of study to think about how machine learning supports the interactions on that feedback pathway.

How can machine learning help a system that is perceiving and predicting a lot about its world transmit that information clearly and effectively back to the person using it? I think this is a great topic, because if both the feedback pathway and the control pathway are adapting, and both the device and the person are building models of each other, you can do something almost miraculous: you can almost transmit information for free. If you have two systems that are well attuned to each other, that have built very powerful models of each other, and that are adapting on both the control and feedback pathways, you can form very tight partnerships between humans and machines that pass a massive amount of information with very little effort and very little bandwidth.

And that opens up whole new realms of human-machine coordination, especially in the area of neuroprosthetics. I really think this is a pretty miraculous time for us to start studying this area.

Do you think these are going to be 3D printed in the future or how do you think the manufacturing will proceed?

I don't feel like I'm the best placed to speculate on how that might happen. I can say, though, that we are seeing a large uptick in commercial providers of neuroprosthetic devices using additive manufacturing, 3D printing, and other forms of on-the-spot manufacturing to create their devices. This is also really neat to see: it's not just prototyping with additive manufacturing or 3D printing, it's 3D printing becoming an integral part of how we provide devices to individuals and how we optimize those devices for the exact people who are using them.

Additive or bespoke manufacturing, customized prosthesis fitting, happens in hospitals all the time. It is a natural part of care provision for people with limb difference who need assistive technologies or other kinds of rehabilitation technologies. I think we're starting to see a lot of that customization blend into the manufacturing of the devices, and not just left to the point-of-care providers. That's also really exciting. There's a great opportunity for devices that don't just look like hands or get used like hands, but devices that very precisely meet the needs of the people using them, that allow them to express themselves the way they want to, and that let them live the lives they want to live, not just the way we think a hand should be used in daily life.

You’ve written over 120 papers. Is there one that stands out to you that we should know about?

There's a recently published paper in Neural Computing and Applications that represents the tip of an iceberg of thinking we've put forward for well over a decade now on frameworks for how humans and machines interact, and especially how a human and a prosthetic device interact. It's the idea of communicative capital, and that's the paper we recently published.

This paper lays out our view on how predictions that are learned and maintained in real time by, say, a prosthetic device interacting with a person, and by the person themselves, can form capital: a resource that both parties can rely on. Remember, previously I said we can do something really spectacular when we have a human and a machine that are both building models of each other, adapting in real time based on experience, and starting to pass information through a bidirectional channel. As a sidebar, we live in a magical world where there are recordings and you can cut things out of them.

It's essentially like magic.

Exactly. It sounds like magic. If we go back to thinkers like W. Ross Ashby in the 1950s and 1960s, his book “An Introduction to Cybernetics” talked about how we might amplify the human intellect. He really said it comes down to amplifying the ability of a person to choose between one of many options, and this is made possible by systems where a person is interacting with, say, a machine, and there's a channel of communication open between them. If we have that channel of communication open, if it is bidirectional, and if both systems are building capital in the form of predictions and other things, then you can start to see them really align themselves and become more than the sum of their parts. You can get more out than they're putting in.

And I think this is why I consider it one of our most exciting papers: it represents a thought shift. It represents a shift towards thinking of neuroprosthetic devices as systems with agency, systems that we don't just ascribe agency to, but rely on to co-adapt with us and build up these resources. Communicative capital lets us multiply our ability to interact with the world, lets us get more out than we're putting in, and, from a prosthetic lens, allows people to stop thinking about the prosthesis in their daily life and start thinking about living their daily life, rather than about the device that's helping them live it.

What are some of the applications you would see for brain machine interfaces with what you just discussed?

One of my favorites is something we've put forward over the last almost 10 years, a technology called adaptive switching. Adaptive switching is based on the knowledge that many systems we interact with on a daily basis rely on us switching between many modes or functions. Whether I'm switching between apps on my phone, trying to figure out the right setting on my drill, or adapting other tools in my life, we switch between many modes or functions all the time; thinking back to Ashby, it's our ability to choose between many options. In adaptive switching, we use temporal difference learning to allow an artificial limb to learn what motor function a person might want to use and when they want to use it. The premise is really quite simple: consider just the act of me reaching over to a cup and closing my hand.

A system should be able to build up predictions through experience that, in this situation, I'm likely going to be using the hand open-close function; I'll be opening and closing my hand. Then, in similar situations in the future, it should be able to predict that. And when I'm navigating the swirling cloud of modes and functions, it can give me, more or less, the ones that I want without my having to sort through all of those many options. This is a very simple example of building up that communicative capital. You have a system that is building up predictions through interaction, predictions about that person, that machine, and their relationship in that situation at that time. That shared resource then allows the system to reconfigure its control interface on the fly, so the person gets what they want when they want it. And in a situation where the system is very, very sure about what motor function a person wants, it can in fact just select that function for them as they're going in.

And the cool thing is that the person always has the ability to say, “Ah, this is what I really wanted,” and switch to another motor function. In a robotic arm, that might be different kinds of hand grasps, whether it's shaping the grip to grab a doorknob, pick up a key, or shake someone's hand. Those are different modes or functions, different grasp patterns. It's very interesting that the system can start to build up an appreciation of what's appropriate in what situation: units of capital that both parties can rely on to move more swiftly through the world, with less cognitive burden, especially on the part of the user.
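As an illustration of the kind of mechanism described above (and only an illustration: the class name, features, and parameters below are assumptions for the sketch, not the published adaptive switching implementation), one could maintain one temporal-difference prediction per motor function and reorder the switching list by those predictions:

```python
import numpy as np

class AdaptiveSwitcher:
    """Sketch: one linear TD prediction per motor function estimates how
    likely that function is to be selected in the current context; the
    switching order is then re-ranked by those predictions."""

    def __init__(self, modes, n_features, gamma=0.95, alpha=0.05):
        self.modes = modes
        self.w = {m: np.zeros(n_features) for m in modes}
        self.gamma, self.alpha = gamma, alpha

    def update(self, x, x_next, selected_mode):
        # Cumulant is 1 for the mode the user actually selected, else 0.
        for m in self.modes:
            cumulant = 1.0 if m == selected_mode else 0.0
            delta = (cumulant
                     + self.gamma * np.dot(self.w[m], x_next)
                     - np.dot(self.w[m], x))
            self.w[m] += self.alpha * delta * x

    def ranked_modes(self, x):
        # Offer the most strongly predicted functions first; a controller
        # could auto-select the top one when its prediction is very high.
        return sorted(self.modes, key=lambda m: -np.dot(self.w[m], x))

# Illustrative use with stand-in features from the arm's sensors.
switcher = AdaptiveSwitcher(["hand_open_close", "wrist_rotate", "elbow"], 8)
x, x_next = np.random.rand(8), np.random.rand(8)
switcher.update(x, x_next, selected_mode="hand_open_close")
print(switcher.ranked_modes(np.random.rand(8)))
```

Because the learner keeps updating during use, and the user can always override its suggestion, the predictions and the person's behaviour co-adapt in the way the interview describes.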

Thank you for the amazing interview. Readers who wish to learn more should visit the following resources:
