
Your guide to generative AI and ML at AWS re:Invent 2024

AWS Machine Learning Blog

The excitement is building for the thirteenth edition of AWS re:Invent, and as always, Las Vegas is set to host this spectacular event. We'll also explore the robust AWS infrastructure services powering AI innovation, featuring Amazon SageMaker, AWS Trainium, and AWS Inferentia, which appear under both the AI/ML and Compute topics.


Building Generative AI and ML solutions faster with AI apps from AWS partners using Amazon SageMaker

AWS Machine Learning Blog

Our customers want a simple and secure way to find the best applications, integrate them into their machine learning (ML) and generative AI development environments, and manage and scale their AI projects. Comet has been trusted by enterprise customers and academic teams since 2017.
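As an illustration of the kind of integration the excerpt describes, here is a minimal sketch of logging a training run to Comet with its Python SDK; the API key, workspace, and project name are placeholders, and this is not code from the article.

```python
# Minimal sketch: tracking an experiment with Comet's Python SDK.
# The api_key, workspace, and project_name values are placeholders.
from comet_ml import Experiment

experiment = Experiment(
    api_key="YOUR_COMET_API_KEY",   # placeholder credential
    workspace="your-workspace",     # placeholder workspace
    project_name="sagemaker-demo",  # placeholder project
)

# Log hyperparameters and metrics as training progresses.
experiment.log_parameters({"learning_rate": 1e-3, "epochs": 5})
for epoch in range(5):
    experiment.log_metric("train_loss", 1.0 / (epoch + 1), step=epoch)

experiment.end()
```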



Racing into the future: How AWS DeepRacer fueled my AI and ML journey

AWS Machine Learning Blog

In 2018, I sat in the audience at AWS re:Invent as Andy Jassy announced AWS DeepRacer, a fully autonomous 1/18th-scale race car driven by reinforcement learning. At the time, I knew little about AI or machine learning (ML). … seconds, securing the 2018 AWS DeepRacer grand champion title!
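For context on how DeepRacer training is shaped, a reward function sketch follows. It uses keys from the documented params dictionary (track_width, distance_from_center) and is purely illustrative, not the author's actual reward function.

```python
def reward_function(params):
    """Minimal DeepRacer-style reward: stay near the track center line.

    `params` is the dictionary DeepRacer passes to the reward function;
    only a few documented keys are used here.
    """
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Reward driving close to the center line, penalize drifting wide.
    if distance_from_center <= 0.1 * track_width:
        reward = 1.0
    elif distance_from_center <= 0.25 * track_width:
        reward = 0.5
    elif distance_from_center <= 0.5 * track_width:
        reward = 0.1
    else:
        reward = 1e-3  # likely off track

    return float(reward)
```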


Accelerating ML experimentation with enhanced security: AWS PrivateLink support for Amazon SageMaker with MLflow

AWS Machine Learning Blog

With access to a wide range of generative AI foundation models (FMs) and the ability to build and train their own machine learning (ML) models in Amazon SageMaker, users want a seamless and secure way to experiment with and select the models that deliver the most value for their business.
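As a rough sketch of the workflow the post builds on, the snippet below creates a SageMaker managed MLflow tracking server with boto3 and logs a run against it. The server name, role ARN, and S3 URI are placeholders; it assumes the sagemaker-mlflow plugin is installed so the MLflow client can resolve the tracking server ARN, and that the PrivateLink endpoints described in the post are already in place.

```python
# Sketch: SageMaker managed MLflow tracking server + logging a run.
# Names, ARNs, and the S3 URI below are placeholders, and this assumes
# the `sagemaker-mlflow` plugin is installed alongside `mlflow`.
import boto3
import mlflow

sm = boto3.client("sagemaker")

# Create a managed MLflow tracking server. Creation is asynchronous;
# in practice, wait until the server reports an Active status.
response = sm.create_mlflow_tracking_server(
    TrackingServerName="experiment-tracking",             # placeholder
    ArtifactStoreUri="s3://my-bucket/mlflow-artifacts",    # placeholder
    RoleArn="arn:aws:iam::111122223333:role/MlflowRole",   # placeholder
)
tracking_server_arn = response["TrackingServerArn"]

# Point the MLflow client at the managed server. With the PrivateLink
# setup described in the post, this traffic stays on your VPC endpoints.
mlflow.set_tracking_uri(tracking_server_arn)

with mlflow.start_run():
    mlflow.log_param("model", "xgboost")
    mlflow.log_metric("val_auc", 0.91)
```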


How Crexi achieved ML models deployment on AWS at scale and boosted efficiency

AWS Machine Learning Blog

With the current demand for AI and machine learning (AI/ML) solutions, the processes to train and deploy models and scale inference are crucial to business success. As AI/ML, and especially generative AI, advances rapidly, machine learning operations (MLOps) tooling must continuously evolve to keep pace.
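The snippet below is a generic sketch of one common deployment step the excerpt alludes to, hosting a trained model on a real-time SageMaker endpoint with the SageMaker Python SDK. It is not Crexi's actual pipeline; the image URI, model artifact path, role ARN, and endpoint name are placeholders.

```python
# Sketch: deploying a trained model to a real-time SageMaker endpoint.
# This is a generic example of the deployment step, not Crexi's pipeline;
# the image URI, model data path, and role ARN are placeholders.
import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()

model = Model(
    image_uri="111122223333.dkr.ecr.us-east-1.amazonaws.com/my-inference:latest",
    model_data="s3://my-bucket/models/model.tar.gz",
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    sagemaker_session=session,
)

# Provision a managed endpoint behind which inference traffic can scale.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge",
    endpoint_name="demo-inference-endpoint",  # placeholder name
)

print(predictor.endpoint_name)
```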


Enhanced observability for AWS Trainium and AWS Inferentia with Datadog

AWS Machine Learning Blog

AWS AI chips, Trainium and Inferentia, enable you to build and deploy generative AI models at higher performance and lower cost. Datadog, an observability and security platform, provides real-time monitoring for cloud infrastructure and ML operations. To get started, see AWS Inferentia and AWS Trainium Monitoring.
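The integration described in the post is configured through the Datadog Agent rather than hand-written code; purely as an illustration of surfacing an accelerator metric to Datadog, here is a DogStatsD sketch with placeholder metric names and tags.

```python
# Illustrative only: the Datadog integration for AWS Trainium/Inferentia
# is set up via the Datadog Agent, not custom code. This sketch merely
# shows submitting a custom accelerator metric via DogStatsD; the metric
# name and tag values are placeholders.
from datadog import initialize, statsd

initialize(statsd_host="127.0.0.1", statsd_port=8125)  # local Datadog Agent

# Report a hypothetical utilization reading for one NeuronCore.
statsd.gauge(
    "custom.neuroncore.utilization",         # placeholder metric name
    73.5,                                    # example value (percent)
    tags=["chip:trainium", "neuroncore:0"],  # placeholder tags
)
```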


Governing ML lifecycle at scale: Best practices to set up cost and usage visibility of ML workloads in multi-account environments

AWS Machine Learning Blog

In a multi-account environment, you can track costs at the AWS account level to attribute expenses to the teams and workloads that incur them. A combination of AWS accounts and tags provides the best results. By setting up automated policy enforcement and checks, you can achieve cost optimization across your machine learning (ML) environment.
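As a concrete example of account-plus-tag cost visibility, the sketch below pulls month-to-date cost grouped by linked account and a cost-allocation tag using the Cost Explorer API. The tag key "ml-workload" and the date range are placeholders; the tag must be activated as a cost allocation tag in the payer account.

```python
# Sketch: month-to-date cost grouped by linked account and a
# cost-allocation tag. The tag key "ml-workload" is a placeholder.
import boto3

ce = boto3.client("ce")  # Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-11-01", "End": "2024-12-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[
        {"Type": "DIMENSION", "Key": "LINKED_ACCOUNT"},
        {"Type": "TAG", "Key": "ml-workload"},  # placeholder tag key
    ],
)

for group in response["ResultsByTime"][0]["Groups"]:
    account, tag = group["Keys"]
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{account} / {tag}: ${float(amount):.2f}")
```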
