
How Axfood enables accelerated machine learning throughout the organization using Amazon SageMaker

AWS Machine Learning Blog

Axfood has been using Amazon SageMaker to cultivate their data using ML and has had models in production for many years. Lately, the level of sophistication and the sheer number of models in production are increasing exponentially. We decided to put in a joint effort to build a prototype on a best practice for MLOps.


Build an end-to-end MLOps pipeline using Amazon SageMaker Pipelines, GitHub, and GitHub Actions

AWS Machine Learning Blog

Machine learning (ML) models do not operate in isolation. ML operations, known as MLOps, focus on streamlining, automating, and monitoring ML models throughout their lifecycle.



Promote pipelines in a multi-environment setup using Amazon SageMaker Model Registry, HashiCorp Terraform, GitHub, and Jenkins CI/CD

AWS Machine Learning Blog

In the rapidly evolving landscape of artificial intelligence (AI) and machine learning (ML), building out a machine learning operations (MLOps) platform is essential for organizations to seamlessly bridge the gap between data science experimentation and deployment while meeting requirements around model performance, security, and compliance.


Track and Visualize Information From Your Pipelines: neptune.ai + ZenML Integration

The MLOps Blog

When building ML models, you spend a lot of time experimenting. Already with one model in the pipeline, you may try out hundreds of parameters and produce tons of metadata about your runs. And the more models you develop (and later deploy), the more stuff is there to store, track, compare, organize, and share with others.


Driving advanced analytics outcomes at scale using Amazon SageMaker powered PwC’s Machine Learning Ops Accelerator

AWS Machine Learning Blog

Putting an ML model into production at scale is challenging and requires a set of best practices. Many businesses already have data scientists and ML engineers who can build state-of-the-art models, but taking models to production and maintaining them at scale remains a challenge.


Build an end-to-end MLOps pipeline for visual quality inspection at the edge – Part 2

AWS Machine Learning Blog

In Part 1 of this series, we drafted an architecture for an end-to-end MLOps pipeline for a visual quality inspection use case at the edge. It is architected to automate the entire machine learning (ML) process, from data labeling to model training and deployment at the edge. In this post, we turn to label quality.


Build an end-to-end MLOps pipeline for visual quality inspection at the edge – Part 3

AWS Machine Learning Blog

This is Part 3 of our series where we design and implement an MLOps pipeline for visual quality inspection at the edge. In Part 2, we showed how to automate the labeling and model training parts of the pipeline. In this post, we focus on how to automate the edge deployment part of the end-to-end MLOps pipeline.