Scale AI training and inference for drug discovery through Amazon EKS and Karpenter

AWS Machine Learning Blog

The platform both enables our AI—by supplying data to refine our models—and is enabled by it, capitalizing on opportunities for automated decision-making and data processing. Our deep learning models have non-trivial requirements: they are gigabytes in size, are numerous and heterogeneous, and require GPUs for fast inference and fine-tuning.

How United Airlines built a cost-efficient Optical Character Recognition active learning pipeline

AWS Machine Learning Blog

In this post, we discuss how United Airlines, in collaboration with the Amazon Machine Learning Solutions Lab, built an active learning framework on AWS to automate the processing of passenger documents. As part of this strategy, they developed an in-house passport analysis model to verify passenger IDs.

Streamline diarization using AI as an assistive technology: ZOO Digital’s story

AWS Machine Learning Blog

This time-consuming process must be completed before content can be dubbed into another language. However, the supply of skilled people is being outstripped by the increasing demand for content, requiring automation to assist with localization workflows. Through automation, ZOO Digital aims to achieve localization in under 30 minutes.

MLOps Landscape in 2023: Top Tools and Platforms

The MLOps Blog

This includes features for hyperparameter tuning, automated model selection, and visualization of model metrics. Automated pipelining and workflow orchestration: Platforms should provide tools for automated pipelining and workflow orchestration, enabling you to define and manage complex ML pipelines.
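
As a generic illustration of the tuning and pipelining capabilities described above, here is a minimal sketch using scikit-learn's Pipeline and GridSearchCV; the synthetic data and parameter grid are illustrative placeholders and are not tied to any particular MLOps platform.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy data standing in for a real feature table.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# A two-step pipeline: feature scaling followed by a linear classifier.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

# Automated hyperparameter tuning over the classifier's regularization strength.
search = GridSearchCV(
    pipeline,
    param_grid={"clf__C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
    scoring="accuracy",
)
search.fit(X, y)

print(search.best_params_, search.best_score_)
```

The same pattern, a declared sequence of steps plus an automated search over hyperparameters, is what dedicated MLOps platforms wrap with experiment tracking, metric visualization, and workflow orchestration.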

How Kakao Games automates lifetime value prediction from game data using Amazon SageMaker and AWS Glue

AWS Machine Learning Blog

This requires not only well-designed features and ML architecture, but also data preparation and ML pipelines that can automate the retraining process. To solve this problem, we make the ML solution auto-deployable with a few configuration changes. ML engineers no longer need to manage this training metadata separately.
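
As a hedged sketch of what "auto-deployable with a few configuration changes" can look like, the snippet below drives a retraining run from a small config dictionary and the standard boto3 create_training_job API; the bucket paths, image URI, and role ARN are hypothetical placeholders, not the actual Kakao Games setup.

```python
import boto3

# Hypothetical configuration; in the spirit of the post, changing these
# values is all that is needed to re-launch training on fresh data.
config = {
    "job_name": "ltv-retrain-example",
    "image_uri": "<training-image-uri>",           # placeholder
    "role_arn": "<sagemaker-execution-role-arn>",  # placeholder
    "train_s3": "s3://my-bucket/features/latest/",  # placeholder
    "output_s3": "s3://my-bucket/models/",          # placeholder
    "instance_type": "ml.m5.xlarge",
}

sagemaker = boto3.client("sagemaker")
sagemaker.create_training_job(
    TrainingJobName=config["job_name"],
    AlgorithmSpecification={
        "TrainingImage": config["image_uri"],
        "TrainingInputMode": "File",
    },
    RoleArn=config["role_arn"],
    InputDataConfig=[{
        "ChannelName": "train",
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": config["train_s3"],
                "S3DataDistributionType": "FullyReplicated",
            }
        },
    }],
    OutputDataConfig={"S3OutputPath": config["output_s3"]},
    ResourceConfig={
        "InstanceType": config["instance_type"],
        "InstanceCount": 1,
        "VolumeSizeInGB": 50,
    },
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)
```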

Run ML inference on unplanned and spiky traffic using Amazon SageMaker multi-model endpoints

AWS Machine Learning Blog

As a result, an initial invocation to a model might see higher inference latency than the subsequent inferences, which are completed with low latency. To take advantage of automated model scaling in SageMaker, make sure you have instance auto scaling set up to provision additional instance capacity.
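
A minimal sketch of setting up that instance auto scaling with boto3 and Application Auto Scaling is shown below; the endpoint and variant names are hypothetical, and the target values are illustrative rather than recommendations.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Hypothetical endpoint/variant; replace with your own resources.
resource_id = "endpoint/my-mme-endpoint/variant/AllTraffic"

# Register the variant's instance count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=1,
    MaxCapacity=4,
)

# Track invocations per instance so capacity grows with spiky traffic.
autoscaling.put_scaling_policy(
    PolicyName="invocations-target-tracking",
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,
        "ScaleOutCooldown": 60,
    },
)
```

With a policy like this in place, additional instances are provisioned as invocations rise, so spiky traffic is less likely to pile cold model loads onto a single busy instance.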

How Forethought saves over 66% in costs for generative AI models using Amazon SageMaker

AWS Machine Learning Blog

The integration of large language models helps humanize the interaction with automated agents, creating a more engaging and satisfying support experience. In addition, deployments are now as simple as calling Boto3 SageMaker APIs and attaching the proper auto scaling policies. The following diagram illustrates our legacy architecture.
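
To give a flavor of that simplicity, below is a hedged sketch of a deployment using the standard boto3 SageMaker APIs; the model name, container image, S3 path, role ARN, and instance type are placeholders rather than Forethought's actual resources, and an auto scaling policy like the one sketched for the previous post would then be attached to the resulting endpoint variant.

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Register the model artifact and inference container (placeholders).
sagemaker.create_model(
    ModelName="support-llm",
    PrimaryContainer={
        "Image": "<inference-image-uri>",               # placeholder
        "ModelDataUrl": "s3://my-bucket/model.tar.gz",  # placeholder
    },
    ExecutionRoleArn="<sagemaker-execution-role-arn>",  # placeholder
)

# Describe how the model should be hosted.
sagemaker.create_endpoint_config(
    EndpointConfigName="support-llm-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "support-llm",
        "InstanceType": "ml.g5.xlarge",
        "InitialInstanceCount": 1,
    }],
)

# Create the real-time endpoint from that configuration.
sagemaker.create_endpoint(
    EndpointName="support-llm-endpoint",
    EndpointConfigName="support-llm-config",
)
```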