Racing into the future: How AWS DeepRacer fueled my AI and ML journey

AWS Machine Learning Blog

In 2018, I sat in the audience at AWS re:Invent as Andy Jassy announced AWS DeepRacer, a fully autonomous 1/18th-scale race car driven by reinforcement learning. AWS DeepRacer instantly captured my interest with its promise that even inexperienced developers could get involved in AI and ML.

Revolutionizing earth observation with geospatial foundation models on AWS

Flipboard

It also comes with ready-to-deploy code samples to help you get started quickly with deploying GeoFMs in your own applications on AWS. For a full architecture diagram demonstrating how the flow can be implemented on AWS, see the accompanying GitHub repository. Let's dive in! Solution overview: at the core of our solution is a GeoFM.

Announcing New Tools for Building with Generative AI on AWS

Flipboard

At AWS, we have played a key role in democratizing ML and making it accessible to anyone who wants to use it, including more than 100,000 customers of all sizes and industries. AWS has the broadest and deepest portfolio of AI and ML services at all three layers of the stack.

Fast and cost-effective LLaMA 2 fine-tuning with AWS Trainium

AWS Machine Learning Blog

In this post, we walk through how to fine-tune Llama 2 on AWS Trainium, a purpose-built accelerator for LLM training, to reduce training times and costs. We review the fine-tuning scripts provided by the AWS Neuron SDK (using NeMo Megatron-LM), the various configurations we used, and the throughput results we saw.
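
As a rough illustration of what launching such a fine-tuning job can look like (a minimal sketch, not the post's exact code), the snippet below uses the SageMaker Python SDK to run a training script on a Trainium instance. The entry point, source directory, container image URI, hyperparameters, and S3 path are all placeholders to adapt to your own setup.

import sagemaker
from sagemaker.pytorch import PyTorch

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes an environment with a SageMaker execution role

estimator = PyTorch(
    entry_point="train.py",            # hypothetical wrapper around the Neuron SDK fine-tuning scripts
    source_dir="scripts",              # hypothetical directory containing the training code
    role=role,
    instance_type="ml.trn1.32xlarge",  # AWS Trainium instance type
    instance_count=1,
    image_uri="<neuron-pytorch-training-container-uri>",  # placeholder: a Neuron-enabled PyTorch training container
    hyperparameters={"model_name": "meta-llama/Llama-2-7b-hf", "epochs": 1},
    sagemaker_session=session,
)

estimator.fit({"train": "s3://<your-bucket>/llama2-finetune/train/"})  # placeholder S3 input path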

Fine-tune and deploy Llama 2 models cost-effectively in Amazon SageMaker JumpStart with AWS Inferentia and AWS Trainium

AWS Machine Learning Blog

Today, we’re excited to announce the availability of Llama 2 inference and fine-tuning support on AWS Trainium and AWS Inferentia instances in Amazon SageMaker JumpStart. In this post, we demonstrate how to deploy and fine-tune Llama 2 on Trainium and AWS Inferentia instances in SageMaker JumpStart.
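
For a sense of what the deployment step can look like (a minimal sketch, not the post's exact code), the snippet below uses the SageMaker JumpStart Python SDK to deploy a Neuron-compiled Llama 2 model to an Inferentia2 instance and invoke it. The model ID, instance type, and request payload format are assumptions to verify against the JumpStart catalog.

from sagemaker.jumpstart.model import JumpStartModel

# Assumed model ID; check the JumpStart catalog for the exact Neuron variant identifier.
model = JumpStartModel(model_id="meta-textgenerationneuron-llama-2-7b")

predictor = model.deploy(
    accept_eula=True,                # Llama 2 requires accepting the Meta license
    instance_type="ml.inf2.xlarge",  # assumed AWS Inferentia2 instance type
)

response = predictor.predict({
    "inputs": "Explain reinforcement learning in one sentence.",
    "parameters": {"max_new_tokens": 64},
})
print(response)

predictor.delete_endpoint()  # clean up the endpoint when finished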

Federated Learning on AWS with FedML: Health analytics without sharing sensitive data – Part 2

AWS Machine Learning Blog

To mitigate these challenges, we propose a federated learning (FL) framework, based on open-source FedML on AWS, which enables analyzing sensitive healthcare and life sciences (HCLS) data. In this two-part series, we demonstrate how you can deploy a cloud-based FL framework on AWS. For Account ID, enter the AWS account ID of the owner of the accepter VPC.
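
The "Account ID" step above refers to a cross-account VPC peering connection (here, presumably between the accounts participating in the FL setup). As a minimal sketch of that step with boto3 (all IDs and the region below are hypothetical), the requester creates the peering request and the accepter VPC's owner then accepts it from their own account.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

response = ec2.create_vpc_peering_connection(
    VpcId="vpc-0123456789abcdef0",      # requester VPC (for example, the FL server's VPC)
    PeerVpcId="vpc-0fedcba9876543210",  # accepter VPC (for example, a client site's VPC)
    PeerOwnerId="111122223333",         # AWS account ID of the owner of the accepter VPC
)
peering_id = response["VpcPeeringConnection"]["VpcPeeringConnectionId"]
print(f"Peering request created: {peering_id}")

# From the accepter VPC owner's account:
# ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=peering_id)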

A review of purpose-built accelerators for financial services

AWS Machine Learning Blog

The following figure illustrates the idea of a large cluster of GPUs being used for learning, followed by a smaller number being used for inference. In 2018, other forms of purpose-built accelerators (PBAs) became available, and by 2020, PBAs were being widely used for parallel problems, such as the training of neural networks.
