Fast and cost-effective LLaMA 2 fine-tuning with AWS Trainium

AWS Machine Learning Blog

He focuses on developing scalable machine learning algorithms. His research interests are in the area of natural language processing, explainable deep learning on tabular data, and robust analysis of non-parametric space-time clustering. He was a recipient of the NSF Faculty Early Career Development Award in 2009.

AWS 128

Frugality meets Accuracy: Cost-efficient training of GPT NeoX and Pythia models with AWS Trainium

AWS Machine Learning Blog

In this post, we summarize the training procedure of GPT NeoX on AWS Trainium, a purpose-built machine learning (ML) accelerator optimized for deep learning training, and demonstrate cost-efficient training of LLMs on AWS deep learning hardware. Ben Snyder is an applied scientist with AWS Deep Learning.
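
For orientation, below is a minimal sketch of a single training step on an XLA device such as Trainium, written against the generic torch_xla API that AWS's torch-neuronx package builds on. The tiny model, synthetic data, and hyperparameters are illustrative placeholders, not the GPT NeoX configuration from the post.

```python
# Minimal sketch: one training step on an XLA device (e.g., a Trainium NeuronCore).
# The model and data here are placeholders, not the GPT NeoX setup from the post.
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # resolves to a NeuronCore on a Trainium (trn1) instance

model = nn.Linear(1024, 1024).to(device)  # stand-in for a transformer block
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

for step in range(10):
    x = torch.randn(8, 1024, device=device)
    y = torch.randn(8, 1024, device=device)

    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Reduces gradients across replicas (if any), steps the optimizer,
    # and executes the pending XLA graph.
    xm.optimizer_step(optimizer, barrier=True)
    if step % 5 == 0:
        print(f"step {step}: loss {loss.item():.4f}")
```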

AWS 127

The Top 10 AI Thought Leaders on LinkedIn (2025)

Flipboard

Bernard is a best-selling author and advisor on AI, big data, and digital transformation. #6: Yann LeCun. Yann is the Chief AI Scientist at Meta and a pioneer in deep learning; he is a Turing Award laureate and an influential AI researcher. His focus is very much on AI education at all levels. #2. The man is a machine.

AI 93

Amazon SageMaker built-in LightGBM now offers distributed training using Dask

AWS Machine Learning Blog

Distributed training is a technique that allows for the parallel processing of large amounts of data across multiple machines or devices. By splitting the data across workers and training in parallel, distributed training can significantly reduce training time and improve the performance of models on big data.
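
To illustrate how this works with LightGBM's Dask integration, here is a minimal sketch that stands up a local Dask cluster in place of the multi-instance cluster SageMaker provisions; the synthetic data, worker counts, and parameters are placeholders.

```python
# Minimal sketch: distributed LightGBM training with Dask.
# A LocalCluster stands in for the multi-node cluster SageMaker would provision.
import dask.array as da
from dask.distributed import Client, LocalCluster
import lightgbm as lgb

if __name__ == "__main__":
    # Start a small local cluster; on SageMaker each worker would run on its own instance.
    cluster = LocalCluster(n_workers=2, threads_per_worker=2)
    client = Client(cluster)

    # Synthetic data split into chunks; each chunk can live on a different worker.
    X = da.random.random((100_000, 20), chunks=(10_000, 20))
    y = (da.random.random((100_000,), chunks=(10_000,)) > 0.5).astype(int)

    # DaskLGBMClassifier runs LightGBM on each worker's data partition and
    # merges training statistics over the network instead of moving the raw data.
    model = lgb.DaskLGBMClassifier(n_estimators=100, client=client)
    model.fit(X, y)

    # Convert to a plain single-machine LightGBM model for inference.
    local_model = model.to_local()
    print(local_model)

    client.close()
    cluster.close()
```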

Algorithm 104