Josh Tobin of Gantry on Continual Learning Benefits and Challenges

ODSC - Open Data Science
4 min read · Jan 24, 2023

As newer fields emerge within data science and the research is still hard to grasp, sometimes it’s best to talk to the experts and pioneers of the field. Recently, we spoke with Josh Tobin, CEO & Founder of Gantry, about continual learning and how letting models learn & evolve with a continuous flow of data, while retaining previously learned knowledge, allows them to adapt and scale. You can watch the full Lightning Interview here, and read the transcript for the first few questions below.

What is continual learning?

I think of continual learning as any technique that you use to improve your model using production data: any technique where you throw out the assumption that the model you train offline is the one that’s going to run in production forever. This ranges from extremely simple to very, very complicated.

On the extremely simple side, if at least once a month or so you check in on your model, curate a new data set by hand from the data that’s flowing through the system in production, and retrain your model on that data set, I think that’s a very, very basic form of continual learning. On the other hand, you might be building a click-through rate prediction model like Google and training that model on every single data point as it streams into the system, which is extremely complicated from an infrastructure and algorithmic perspective. Then there’s a whole spectrum of techniques in between, as the sketch below illustrates.
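To make the simple end of that spectrum concrete, here is a minimal sketch of a scheduled retraining job. The data-loading helpers are hypothetical placeholders for your own logging and curation layer, and the scikit-learn classifier is just a stand-in model, not anything Gantry-specific:

```python
# A minimal sketch of the "simple" end of the spectrum: periodically fold
# recent production data into the training set and retrain from scratch.
from sklearn.linear_model import LogisticRegression
import numpy as np

def load_offline_dataset():
    # Placeholder: your original curated training set.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = (X[:, 0] > 0).astype(int)
    return X, y

def load_recent_production_data():
    # Placeholder: labeled examples logged from the last month of traffic.
    rng = np.random.default_rng(1)
    X = rng.normal(loc=0.3, size=(200, 5))  # note the shifted distribution
    y = (X[:, 0] > 0.3).astype(int)
    return X, y

def retrain():
    # Merge old and new data, then retrain with the usual offline recipe.
    X_old, y_old = load_offline_dataset()
    X_new, y_new = load_recent_production_data()
    X = np.vstack([X_old, X_new])
    y = np.concatenate([y_old, y_new])
    return LogisticRegression().fit(X, y)

model = retrain()  # run on a monthly schedule (cron, Airflow, etc.)
```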

I think the process you go through as you adopt this set of techniques starts the same way as deploying any software system: you begin with things that are more manual but easier to implement, and then you introduce more automation over time, as you need it and as it becomes helpful for solving the problem you’re trying to solve.

What are the biggest benefits of continual learning?

The biggest benefits come when you have a model in the real world, especially a model that’s interacting with end users. One, the model that you trained offline is not going to stay good for long in production, because people’s behavior changes, the world changes, and that invalidates your assumptions about what the data distribution going into your model is. That’s the data drift problem, also known as the performance drift problem.
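One common way to catch that kind of drift (an illustrative sketch, not something Josh prescribes here) is to compare each feature’s training distribution against a recent production window with a two-sample Kolmogorov–Smirnov test. The significance threshold and window size are illustrative choices:

```python
# Flag features whose production distribution has shifted away from the
# distribution the model was trained on.
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(train_X, prod_X, alpha=0.01):
    """Return indices of features that appear to have drifted."""
    flagged = []
    for j in range(train_X.shape[1]):
        _, p_value = ks_2samp(train_X[:, j], prod_X[:, j])
        if p_value < alpha:  # distributions differ more than chance allows
            flagged.append(j)
    return flagged

rng = np.random.default_rng(0)
train_X = rng.normal(size=(5000, 3))   # what the model was trained on
prod_X = rng.normal(size=(1000, 3))    # a recent production window
prod_X[:, 0] += 0.5                    # simulate user behavior shifting
print(drifted_features(train_X, prod_X))  # expected: [0]
```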

Another reason is that even if you are not so worried about drift, or about degradation in performance due to changing data distributions, you are still leaving a ton of potential performance on the table if you’re not using production data to make your model better. At the end of the day, the data that you trained your model on offline is just not the best data for the task that your users care about. Production is the only place where you get the good stuff: the data that really tells you how to be good at that task, which is the data users are actually putting into the system to solve a problem for themselves, and the feedback they’re giving you on whether those predictions actually solve their problem or not.
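Capturing that “good stuff” starts with logging. As one hedged sketch of the idea (the record schema and file layout here are illustrative assumptions, not a Gantry format), each production prediction and any user feedback get a shared join key, so they can later be joined into labeled training data:

```python
# Log production inputs, predictions, and user feedback for later curation.
import json
import time
import uuid

def log_prediction(features, prediction, path="predictions.jsonl"):
    # Append one record per model call: input, output, and a join key.
    record_id = str(uuid.uuid4())
    with open(path, "a") as f:
        f.write(json.dumps({"id": record_id, "ts": time.time(),
                            "features": features,
                            "prediction": prediction}) + "\n")
    return record_id

def log_feedback(record_id, accepted, path="feedback.jsonl"):
    # e.g. the user accepted, rejected, or corrected the suggestion.
    with open(path, "a") as f:
        f.write(json.dumps({"id": record_id, "accepted": accepted}) + "\n")

rid = log_prediction({"query": "refund status"}, prediction="billing")
log_feedback(rid, accepted=True)  # joining on id yields labeled examples
```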

What are some challenges of implementing a continual learning system?

I think there are two main challenges. One is data infrastructure: marshaling data from point A to point B, and getting it from your production stream back into your training process in a way that is scalable, version controlled, auditable, and repeatable, in the same way that our offline ML workflows should be.
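One simple way to get those version-controlled, auditable, repeatable properties (a minimal sketch, assuming a local file layout rather than any particular data platform) is to store each training set as an immutable, content-addressed snapshot:

```python
# Snapshot a training set under a name derived from its content hash, so
# every retraining run can be traced back to exactly the data it used.
import hashlib
import json
import pathlib

def snapshot_dataset(records, root="datasets"):
    """Write records to an immutable snapshot named by its content hash."""
    payload = json.dumps(records, sort_keys=True).encode()
    digest = hashlib.sha256(payload).hexdigest()[:16]
    path = pathlib.Path(root) / f"train-{digest}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    if not path.exists():  # identical data always maps to the same version
        path.write_bytes(payload)
    return str(path)       # record this path alongside the trained model

print(snapshot_dataset([{"x": [1.0, 2.0], "y": 1}]))
```

Because the file name is a pure function of the data, re-running the pipeline on unchanged data produces the same version, which is what makes retraining runs repeatable and auditable.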

The second challenge is evaluation. Every time you retrain a model, you introduce risk. There’s the risk that some bad data injected into your training process will break your model. There’s the risk that your model will improve on the new data you train it on but actually regress on older data that is not part of the training process. There’s the risk that your model will pick up some new bias that doesn’t even appear in your aggregate metrics.

This challenge only grows as you move to more and more automated, tighter and tighter, shorter and shorter feedback cycles for continual learning, so the key is to make sure that you have a systematic evaluation framework in place.
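As one illustration of what such a gate could look like (a sketch under assumed slice definitions and an assumed tolerance, not Gantry’s framework), a retrained model is only promoted if it doesn’t regress on older held-out data or on any evaluation slice, which directly targets the regression and hidden-bias risks above:

```python
# Promotion gate: block a retrained model if it regresses on any slice,
# including a slice of older data and slices where bias might hide.
from sklearn.metrics import accuracy_score

def should_promote(old_model, new_model, eval_sets, tolerance=0.01):
    """eval_sets maps a slice name (e.g. 'older data', 'new users')
    to an (X, y) pair; block promotion on any per-slice regression."""
    for name, (X, y) in eval_sets.items():
        old_acc = accuracy_score(y, old_model.predict(X))
        new_acc = accuracy_score(y, new_model.predict(X))
        if new_acc < old_acc - tolerance:
            print(f"blocked: regression on slice '{name}' "
                  f"({old_acc:.3f} -> {new_acc:.3f})")
            return False
    return True
```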

Josh Tobin is the founder and CEO of Gantry. Previously, Josh worked as a deep learning & robotics researcher at OpenAI and as a management consultant at McKinsey. He is also the creator of Full Stack Deep Learning (fullstackdeeplearning.com), the first course focused on the emerging engineering discipline of production machine learning. Josh did his PhD in Computer Science at UC Berkeley advised by Pieter Abbeel.

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.
