
Machine learning with decentralized training data using federated learning on Amazon SageMaker

AWS Machine Learning Blog

In this post, we discuss how to implement federated learning on Amazon SageMaker to run ML with decentralized training data. What is federated learning? Each account or Region has its own training instances.
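The core idea behind federated learning is that each party trains locally on its own data and only model weights travel to a central aggregator. As a hedged illustration (the post itself orchestrates this with SageMaker training jobs across accounts; the function name here is illustrative, not from the post), a minimal sketch of the canonical FedAvg aggregation step:

```python
# Minimal FedAvg sketch: the server never sees raw data, only per-client
# weights, which it averages weighted by each client's local dataset size.

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client model weights by local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients with a 2-parameter model; the second client has twice the data,
# so its weights count twice as much in the new global model.
global_w = fedavg([[1.0, 0.0], [4.0, 3.0]], [10, 20])
# global_w == [3.0, 2.0]
```

In the SageMaker setting described by the post, each round of this loop corresponds to launching a training job per account or Region and shipping only the resulting weights back for aggregation.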


Learnings From Building the ML Platform at Mailchimp

The MLOps Blog

How to transition from data analytics to MLOps engineering. Piotr: Miki, you’ve been a data scientist, right? How did you manage to jump from a more analytical, scientific type of role to a more engineering one?




Accelerate client success management through email classification with Hugging Face on Amazon SageMaker

AWS Machine Learning Blog

This is a guest post from Scalable Capital , a leading FinTech in Europe that offers digital wealth management and a brokerage platform with a trading flat rate. Scalable receives hundreds of email inquiries from our clients on a daily basis. Problem statement Scalable Capital is one of the fastest growing FinTechs in Europe.


Effectively solve distributed training convergence issues with Amazon SageMaker Hyperband Automatic Model Tuning

AWS Machine Learning Blog

Recent years have shown amazing growth in deep neural networks (DNNs). Amazon SageMaker distributed training jobs enable you, with one click (or one API call), to set up a distributed compute cluster, train a model, save the result to Amazon Simple Storage Service (Amazon S3), and shut down the cluster when complete.
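Hyperband, the tuning strategy named in the post's title, fights wasted compute by training many configurations briefly and promoting only the best to longer runs. As a hedged sketch of the successive-halving rule Hyperband builds on (the function name and toy objective are illustrative; SageMaker exposes this through its Automatic Model Tuning API rather than code like this):

```python
import math

def successive_halving(configs, evaluate, min_budget=1, eta=3):
    """One bracket of successive halving: score every config on a small
    budget, keep the top 1/eta fraction, and give survivors eta-times
    more budget each round until one config remains."""
    budget = min_budget
    while len(configs) > 1:
        scores = [(evaluate(c, budget), c) for c in configs]
        scores.sort(reverse=True)            # higher score = better
        keep = max(1, len(configs) // eta)
        configs = [c for _, c in scores[:keep]]
        budget *= eta
    return configs[0]

# Toy objective over learning rates: score peaks at lr near 0.1 and
# improves slightly with more training budget.
best = successive_halving(
    [0.001, 0.01, 0.1, 0.5, 1.0, 2.0, 3.0, 5.0, 10.0],
    evaluate=lambda lr, b: -abs(math.log10(lr / 0.1)) + 0.01 * b,
)
# best == 0.1
```

The early-stopping behavior this sketches is what lets Hyperband abandon poorly converging distributed training jobs before they consume their full budget.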


Deploying ML Models on GPU With Kyle Morris

The MLOps Blog

Every episode is focused on one specific ML topic, and during this one, we talked to Kyle Morris from Banana about deploying models on GPU. Kyle, to warm you up a little bit: how would you explain deploying models on GPU in one minute? Between knowing how to set up CUDA drivers and everything else, there’s a lot of time that you will sink into doing that.


Model hosting patterns in Amazon SageMaker, Part 1: Common design patterns for building ML applications on Amazon SageMaker

AWS Machine Learning Blog

Machine learning (ML) applications are complex to deploy: they often need to hyper-scale while meeting ultra-low latency requirements and stringent cost budgets. Use cases such as fraud detection, product recommendations, and traffic prediction are examples where milliseconds matter and are critical for business success.
