
Improve multi-hop reasoning in LLMs by learning from rich human feedback

AWS Machine Learning Blog

Recent large language models (LLMs) have enabled tremendous progress in natural language understanding. Instead of collecting reasoning chains from scratch by asking humans, we learn from rich human feedback on model-generated reasoning chains, using the prompting abilities of the LLMs.


Top 10 ODSC West Sessions You Must Attend in 2023

Iguazio

Sessions will be spread across multiple tracks: NLP and LLMs, MLOps, Generative AI, Machine Learning, Responsible AI, and more. As expected, LLMs and generative AI are attracting a lot of attention this year, and our list includes sessions on those topics as well, among them Causality and LLMs.


Trending Sources


Model management for LoRA fine-tuned models using Llama2 and Amazon SageMaker

AWS Machine Learning Blog

In the era of big data and AI, companies are continually seeking ways to use these technologies to gain a competitive edge. At the core of these cutting-edge solutions lies a foundation model (FM), a highly advanced machine learning model that is pre-trained on vast amounts of data.
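The post centers on managing many LoRA adapters fine-tuned from a shared Llama2 base model. As background on why that setup keeps model management tractable, here is a minimal, hedged sketch of attaching a LoRA adapter with the Hugging Face peft library; the base checkpoint, target modules, hyperparameters, and output path are illustrative assumptions, not the post's actual configuration.

    # Minimal LoRA fine-tuning setup sketch using Hugging Face peft.
    # Checkpoint name and hyperparameters are illustrative assumptions.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder

    lora_cfg = LoraConfig(
        r=8,                                   # rank of the low-rank update matrices
        lora_alpha=16,                         # scaling factor for the update
        target_modules=["q_proj", "v_proj"],   # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base, lora_cfg)
    model.print_trainable_parameters()         # only the small adapter weights are trainable

    # After training, save just the adapter; the frozen base model can be shared
    # across many task-specific adapters.
    model.save_pretrained("llama2-7b-lora-adapter")  # hypothetical output path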


Scale LLMs with PyTorch 2.0 FSDP on Amazon EKS – Part 2

AWS Machine Learning Blog

Machine learning (ML) research has shown that large language models (LLMs) trained on very large datasets achieve better model quality. We demonstrate this through a step-by-step implementation of training 7B, 13B, and 70B Llama2 models using Amazon EKS with 16 Amazon Elastic Compute Cloud (Amazon EC2) p4de.24xlarge instances.
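As a rough illustration of what PyTorch 2.0 FSDP sharding looks like in code, separate from the post's full EKS and launch setup, the sketch below wraps a causal LM in FullyShardedDataParallel; the checkpoint name, precision choice, and launch assumptions (a torchrun-style environment) are illustrative, not the post's training script.

    # Minimal FSDP wrapping sketch (PyTorch 2.x). Assumes the distributed
    # environment variables are already set (e.g. by torchrun); the checkpoint
    # is a placeholder.
    import torch
    import torch.distributed as dist
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
    from torch.distributed.fsdp import MixedPrecision, ShardingStrategy
    from transformers import AutoModelForCausalLM

    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder

    model = FSDP(
        model,
        sharding_strategy=ShardingStrategy.FULL_SHARD,   # shard params, grads, optimizer state
        mixed_precision=MixedPrecision(param_dtype=torch.bfloat16),
        device_id=torch.cuda.current_device(),
        # in practice an auto_wrap_policy for the transformer blocks is also supplied
    )

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
    # ... standard loop: forward pass, loss.backward(), optimizer.step(), optimizer.zero_grad()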


The Full Story of Large Language Models and RLHF

Hacker News

Thanks to the widespread adoption of ChatGPT, millions of people are now using conversational AI tools in their daily lives. We are going to explore these and other essential questions from the ground up, without assuming prior technical knowledge in AI and machine learning.


Optimize generative AI workloads for environmental sustainability

AWS Machine Learning Blog

To add to our guidance for optimizing deep learning workloads for sustainability on AWS, this post provides recommendations that are specific to generative AI workloads. Although this post primarily focuses on large language models (LLMs), we believe most of the recommendations can be extended to other foundation models.


Evaluate large language models for quality and responsibility

AWS Machine Learning Blog

Research shows that not only do risks of bias and toxicity transfer from pre-trained foundation models (FMs) to task-specific generative AI services, but also that tuning an FM for specific tasks on incremental datasets introduces new and possibly greater risks. What is FMEval?
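FMEval is the evaluation library the post goes on to introduce. Without reproducing its API, the sketch below illustrates the general shape of an automated responsibility check by scoring candidate model outputs with an off-the-shelf toxicity classifier; the classifier name and the sample outputs are illustrative assumptions and are not part of FMEval.

    # Generic toxicity-screening sketch (not the FMEval API).
    # Classifier name and sample outputs are illustrative assumptions.
    from transformers import pipeline

    toxicity_clf = pipeline("text-classification", model="unitary/toxic-bert")

    candidate_outputs = [
        "Here is a concise summary of the document you provided.",
        "Another model response that we want to screen before release.",
    ]

    for text in candidate_outputs:
        top = toxicity_clf(text)[0]   # top label and score, e.g. {'label': 'toxic', 'score': 0.02}
        print(f"{top['label']:>12s}  {top['score']:.3f}  {text}")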