Topics: AWS, Clustering, Data Engineering, ML

Transitioning off Amazon Lookout for Metrics 

AWS Machine Learning Blog

Amazon Lookout for Metrics is a fully managed service that uses machine learning (ML) to detect anomalies in virtually any time-series business or operational metrics—such as revenue performance, purchase transactions, and customer acquisition and retention rates—with no ML experience required. To learn more, see the documentation.
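
To see what would need to be migrated, a minimal inventory can be put together with boto3's lookoutmetrics client. The sketch below simply lists the anomaly detectors and metric sets in the current account and Region; it assumes credentials and a Region are already configured and is an illustration, not a migration tool.

```python
import boto3

# Sketch: inventory existing Lookout for Metrics resources before moving off the service.
lookout = boto3.client("lookoutmetrics")

response = lookout.list_anomaly_detectors(MaxResults=25)
for detector in response.get("AnomalyDetectorSummaryList", []):
    name = detector.get("AnomalyDetectorName")
    arn = detector.get("AnomalyDetectorArn")
    status = detector.get("Status")
    print(f"{name}\t{status}\t{arn}")

    # Each detector has one or more metric sets describing its source data.
    metric_sets = lookout.list_metric_sets(AnomalyDetectorArn=arn)
    for ms in metric_sets.get("MetricSetSummaryList", []):
        print(f"  metric set: {ms.get('MetricSetName')}")
```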

Host the Spark UI on Amazon SageMaker Studio

AWS Machine Learning Blog

You can run Spark applications interactively from Amazon SageMaker Studio by connecting SageMaker Studio notebooks to AWS Glue Interactive Sessions, which run Spark jobs on a serverless cluster. With interactive sessions, you can choose Apache Spark or Ray to easily process large datasets, without worrying about cluster management.
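
As a rough illustration, the cell below is the kind of PySpark code you might run from a SageMaker Studio notebook attached to a Glue interactive session; the `spark` session object is provided by the Glue PySpark kernel, and the S3 bucket, path, and column names are placeholders.

```python
# Runs inside a SageMaker Studio notebook attached to an AWS Glue interactive session;
# the Glue PySpark kernel provides the `spark` SparkSession, so no cluster setup is needed.
from pyspark.sql import functions as F

# Hypothetical input location; replace with your own dataset in Amazon S3.
df = spark.read.parquet("s3://example-bucket/events/")

# A simple aggregation whose stages and executors show up in the Spark UI.
daily_counts = (
    df.withColumn("event_date", F.to_date("event_timestamp"))
      .groupBy("event_date")
      .count()
      .orderBy("event_date")
)
daily_counts.show(20)
```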

Connect Amazon EMR and RStudio on Amazon SageMaker

AWS Machine Learning Blog

You can quickly launch the familiar RStudio IDE and dial up and down the underlying compute resources without interrupting your work, making it easy to build machine learning (ML) and analytics solutions in R at scale. Data scientists and data engineers use Apache Spark, Hive, and Presto running on Amazon EMR for large-scale data processing.
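
As a small sketch of the setup the post describes, the boto3 snippet below looks up an active EMR cluster and prints its primary node's DNS name, which is the endpoint an RStudio on SageMaker session (for example, via sparklyr or a Hive/Presto client) would connect to. It assumes such a cluster is already running in the account and Region.

```python
import boto3

# Sketch: find an active EMR cluster to connect to from RStudio on Amazon SageMaker.
emr = boto3.client("emr")

clusters = emr.list_clusters(ClusterStates=["WAITING", "RUNNING"])
for summary in clusters.get("Clusters", []):
    detail = emr.describe_cluster(ClusterId=summary["Id"])["Cluster"]
    # The primary node's DNS name is what sparklyr, Hive, or Presto clients point at.
    print(detail["Name"], detail.get("MasterPublicDnsName"))
```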

eSentire delivers private and secure generative AI interactions to customers with Amazon SageMaker

AWS Machine Learning Blog

To accomplish this, eSentire built AI Investigator, a natural language query tool for their customers to access security platform data by using AWS generative artificial intelligence (AI) capabilities. eSentire has over 2 TB of signal data stored in their Amazon Simple Storage Service (Amazon S3) data lake.
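
Purely as an illustration of the pattern (not eSentire's actual implementation), the sketch below sends a natural-language question to a large language model hosted on a hypothetical SageMaker real-time endpoint; the endpoint name and the request/response schema are assumptions.

```python
import json
import boto3

# Hypothetical illustration of querying an LLM hosted on a SageMaker real-time endpoint;
# the endpoint name and payload schema below are assumptions, not eSentire's API.
runtime = boto3.client("sagemaker-runtime")

payload = {
    "inputs": "Summarize the suspicious sign-in activity for customer X over the last 24 hours.",
    "parameters": {"max_new_tokens": 256, "temperature": 0.2},
}

response = runtime.invoke_endpoint(
    EndpointName="ai-investigator-llm",  # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))
```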

How Sportradar used the Deep Java Library to build production-scale ML platforms for increased performance and efficiency

AWS Machine Learning Blog

Since 2018, our team has been developing a variety of ML models to enable betting products for NFL and NCAA football. Then we needed to Dockerize the application, write a deployment YAML file, deploy the gRPC server to our Kubernetes cluster, and make sure it was reliable and automatically scalable. The architecture of DJL is engine-agnostic.

How LotteON built a personalized recommendation system using Amazon SageMaker and MLOps

AWS Machine Learning Blog

The main AWS services used are SageMaker, Amazon EMR, AWS CodeBuild, Amazon Simple Storage Service (Amazon S3), Amazon EventBridge, AWS Lambda, and Amazon API Gateway. When the preprocessing batch was complete, the training/test data needed for training was partitioned based on runtime and stored in Amazon S3.
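
The snippet below is a minimal sketch of that kind of step, assuming a PySpark preprocessing job: it splits the prepared features into training and test sets and writes them to Amazon S3 partitioned by run date. The bucket, prefix, column names, and split ratio are placeholders, not LotteON's actual configuration.

```python
# Sketch: after preprocessing, write the training/test split to S3 partitioned by run time.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("prepare-training-data").getOrCreate()

features = spark.read.parquet("s3://example-bucket/preprocessed/")  # hypothetical path
features = features.withColumn("run_date", F.current_date())

train, test = features.randomSplit([0.8, 0.2], seed=42)

for name, df in [("train", train), ("test", test)]:
    (df.write.mode("overwrite")
       .partitionBy("run_date")
       .parquet(f"s3://example-bucket/recommendation/{name}/"))
```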

How BigBasket improved AI-enabled checkout at their physical stores using Amazon SageMaker

AWS Machine Learning Blog

The BigBasket team was running open source, in-house ML algorithms for computer vision object recognition to power AI-enabled checkout at their Fresho (physical) stores. Their objective was to fine-tune an existing computer vision machine learning (ML) model for SKU detection. Their starting training data size was over 1.5
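
As a hedged sketch of what launching such a fine-tuning job can look like with the SageMaker Python SDK, the snippet below configures a PyTorch estimator and starts training against images in S3; the training script, hyperparameters, instance type, and S3 locations are assumptions rather than BigBasket's actual setup.

```python
# Hypothetical sketch of kicking off a fine-tuning job for an object-detection model on SageMaker.
import sagemaker
from sagemaker.pytorch import PyTorch

role = sagemaker.get_execution_role()  # assumes this runs inside SageMaker

estimator = PyTorch(
    entry_point="train.py",            # hypothetical fine-tuning script
    source_dir="src",
    role=role,
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    framework_version="1.13",
    py_version="py39",
    hyperparameters={"epochs": 20, "batch-size": 32, "pretrained-weights": "base-model.pt"},
)

estimator.fit({
    "training": "s3://example-bucket/sku-images/train/",
    "validation": "s3://example-bucket/sku-images/val/",
})
```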
