Automating Batch Inference of a Machine Learning Model on Azure with Container Services and Logic Apps

Joshua Phuong Le
Published in MITB For All
Jul 29, 2023


Photo by Victoire Joncheray on Unsplash

I. INTRODUCTION

Recently I had the pleasure of exploring different options to automate a daily ML inference job, which reads raw data from a database table and writes the inference results to another table.

What I found to be the most configurable and stable approach for such a light workload is to containerize the application and deploy it to a fast, low-cost container service that can be scheduled easily with Azure Logic Apps. This approach is heavily inspired by the book Designing Machine Learning Systems by Chip Huyen, a go-to resource for any ML engineer.

II. STEP BY STEP GUIDE

Application Testing, Building and Deployment
  1. Use a git repository to version-control your code. I used Azure DevOps in this case, but GitHub is perfectly fine.
  2. Use a machine with the Docker engine installed to test the container run and make sure the expected behavior is observed (e.g., ML inference results written to the designated table). This can be done on an Azure Machine Learning compute instance, which comes with Docker and git pre-installed, so pulling the code from your remote repo is a breeze.
  3. Build the application image, and configure the inference code under the CMD instruction in the Dockerfile so that it is triggered whenever a container starts from the image.
  4. Tag and push this image to Azure Container Registry (ACR).
  5. Configure an Azure Container Instance (ACI) suitable for your workload, and point it to the image above.
  6. Use a Logic App to schedule the ACI run with the simple workflow shown below.
Simple Logic App to Schedule ACI run
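To make the steps above more concrete, here is a minimal command-line sketch of steps 3–5 and the Logic App setup. The registry name (myacr), resource group (ml-rg), image name (batch-infer), and ACI sizing are all hypothetical placeholders — substitute your own values:

```shell
# Step 3: build the image locally; the Dockerfile's CMD runs the inference
# script whenever a container starts from this image.
docker build -t batch-infer:v1 .

# Step 4: tag and push to Azure Container Registry (hypothetical registry "myacr").
az acr login --name myacr
docker tag batch-infer:v1 myacr.azurecr.io/batch-infer:v1
docker push myacr.azurecr.io/batch-infer:v1

# Step 5: create a container instance pointing at the image. A restart policy
# of Never makes each start a single one-shot inference run.
az container create \
  --resource-group ml-rg \
  --name batch-infer-aci \
  --image myacr.azurecr.io/batch-infer:v1 \
  --cpu 1 --memory 1.5 \
  --restart-policy Never \
  --registry-login-server myacr.azurecr.io \
  --registry-username <acr-username> \
  --registry-password <acr-password>

# Step 6 is done in the Logic App designer: add a Recurrence trigger
# (e.g., daily) followed by the Azure Container Instance connector action
# that starts the container group created above.
```

Treat this as a starting point rather than a drop-in script: in practice you would pull the ACR credentials from Azure Key Vault (as mentioned in the Pros below) instead of passing them inline.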

III. PROS AND CONS

Of course, there is no one-size-fits-all solution for ML model deployment, as projects differ in requirements. But this approach has some clear pros and cons for similar batch inference jobs.

Pros

  • An AML compute instance has Docker and Git built in for testing (you can choose a cheap SKU for testing and a faster one for the ACI during actual operation), so you have full functionality as if you were working on a laptop.
  • Stable and reliable, as dependencies are easily configured and installed into the app image through the Dockerfile, including additional ones through terminal commands.
  • Fast and low cost: ACI and Logic Apps are really cheap for our use case based on Azure pricing, since we are only charged while the ACI and Logic App are running. The majority of the cost should come from storing the app image on ACR, which can be optimized further if we use AML to store the model artifact instead of baking it into the image.
  • Can integrate security features like Azure Key Vault to store credentials.

Cons

  • Not as straightforward as other options: you need to learn multiple tools such as Docker, ACR, ACI, Logic Apps, and Azure Machine Learning.

IV. CONCLUSION

Overall, this is a nice and simple workflow to automate ML inference at rather low cost, with many options for further optimization. I may continue this topic as a series that breaks the whole process down into detailed execution steps with a sample ML project. So stay tuned :)

Disclaimer: All opinions and interpretations are that of the writer, and not of MITB. I declare that I have full rights to use the contents published here, and nothing is plagiarized. I declare that this article is written by me and not with any generative AI tool such as ChatGPT. I declare that no data privacy policy is breached, and that any data associated with the contents here are obtained legitimately to the best of my knowledge. I agree not to make any changes without first seeking the editors’ approval. Any violations may lead to this article being retracted from the publication.



I’m a data scientist having fun writing about my learning journey. Connect with me at https://www.linkedin.com/in/joshua3112/