
Streamline diarization using AI as an assistive technology: ZOO Digital’s story

AWS Machine Learning Blog

Trusted by the biggest names in entertainment, ZOO Digital delivers high-quality localization and media services at scale, including dubbing, subtitling, scripting, and compliance. In the following sections, we delve into the details of deploying the WhisperX model on SageMaker, and evaluate the diarization performance.
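
As a rough illustration of the transcription-plus-diarization flow the post describes, here is a minimal sketch using the open-source whisperx package; the audio file name, model size, and Hugging Face token are placeholders, and in the post this logic would run on a GPU-backed SageMaker endpoint rather than locally.

```python
import whisperx

device = "cuda"
audio_file = "sample_episode.wav"        # placeholder input file
hf_token = "<your-hugging-face-token>"   # needed to pull the pyannote diarization models

# 1. Transcribe with the batched Whisper large-v2 model
model = whisperx.load_model("large-v2", device, compute_type="float16")
audio = whisperx.load_audio(audio_file)
result = model.transcribe(audio, batch_size=16)

# 2. Align word-level timestamps for the detected language
align_model, metadata = whisperx.load_align_model(language_code=result["language"], device=device)
result = whisperx.align(result["segments"], align_model, metadata, audio, device)

# 3. Diarize and attach speaker labels to each segment
diarize_model = whisperx.DiarizationPipeline(use_auth_token=hf_token, device=device)
diarize_segments = diarize_model(audio)
result = whisperx.assign_word_speakers(diarize_segments, result)

for segment in result["segments"]:
    print(segment.get("speaker"), segment["text"])
```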


Use Amazon SageMaker Studio to build a RAG question answering solution with Llama 2, LangChain, and Pinecone for fast experimentation

Flipboard

We use two AWS Media & Entertainment Blog posts as the sample external data, which we convert into embeddings with the BAAI/bge-small-en-v1.5 embedding model. In the following sections, we walk you through the steps of implementing this solution in SageMaker Studio notebooks.
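
A minimal sketch of the ingestion step, assuming LangChain's classic integrations and the classic pinecone-client API; the blog-post URLs, API key, environment, and index name are placeholders, and the post itself hosts the embedding and Llama 2 models on SageMaker endpoints rather than locally.

```python
import pinecone
from langchain.document_loaders import WebBaseLoader
from langchain.embeddings import HuggingFaceBgeEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Pinecone

# Placeholder URLs standing in for the two AWS Media & Entertainment Blog posts
urls = [
    "https://aws.amazon.com/blogs/media/post-1",
    "https://aws.amazon.com/blogs/media/post-2",
]
docs = WebBaseLoader(urls).load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# BAAI/bge-small-en-v1.5 runs locally here for simplicity
embeddings = HuggingFaceBgeEmbeddings(model_name="BAAI/bge-small-en-v1.5")

# The Pinecone index must already exist with dimension 384 (the bge-small output size)
pinecone.init(api_key="<your-api-key>", environment="<your-environment>")
index = Pinecone.from_documents(chunks, embeddings, index_name="rag-demo")

# Retrieve context chunks for a question before passing them to the LLM
matches = index.similarity_search("What is media localization?", k=3)
```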


Advanced RAG patterns on Amazon SageMaker

AWS Machine Learning Blog

In this post, we demonstrate the use of Mixtral-8x7B Instruct text generation combined with the BGE Large En embedding model to efficiently construct a RAG QnA system on an Amazon SageMaker notebook using the parent document retriever tool and contextual compression technique. We use an ml.t3.medium instance.
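
A minimal sketch of the parent document retriever piece, assuming LangChain with a local Chroma store standing in for the SageMaker-hosted components from the post; the document text is a placeholder.

```python
from langchain.embeddings import HuggingFaceBgeEmbeddings
from langchain.retrievers import ParentDocumentRetriever
from langchain.schema import Document
from langchain.storage import InMemoryStore
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

# BGE Large En embeddings (the post hosts this model on a SageMaker endpoint)
embeddings = HuggingFaceBgeEmbeddings(model_name="BAAI/bge-large-en")

# Small child chunks are indexed for retrieval; larger parent chunks are returned to the LLM
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000)

vectorstore = Chroma(collection_name="rag-parent", embedding_function=embeddings)
docstore = InMemoryStore()

retriever = ParentDocumentRetriever(
    vectorstore=vectorstore,
    docstore=docstore,
    child_splitter=child_splitter,
    parent_splitter=parent_splitter,
)

docs = [Document(page_content="...your source text here...")]  # placeholder corpus
retriever.add_documents(docs)

# Results are the parent chunks containing the best-matching child chunks
results = retriever.get_relevant_documents("What does the policy cover?")
```

For the contextual compression technique, this retriever would then be wrapped in LangChain's ContextualCompressionRetriever with an LLMChainExtractor built from the Mixtral-8x7B Instruct endpoint, so that only passages relevant to the question are passed to the generator.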


Semantic image search for articles using Amazon Rekognition, Amazon SageMaker foundation models, and Amazon OpenSearch Service

AWS Machine Learning Blog

Overview of solution The solution is divided into two main sections. You generate an embedding of the metadata using an LLM. In the second main section, you have an API to query your OpenSearch Service index for images, using OpenSearch’s intelligent search capabilities to find images that are semantically similar to your text.
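
As a sketch of that query path, assuming an OpenSearch k-NN index whose documents carry an embedding of each image's metadata; the domain endpoint, index name, field names, and embedding model are illustrative stand-ins, and authentication is omitted.

```python
from opensearchpy import OpenSearch
from sentence_transformers import SentenceTransformer

# Hypothetical OpenSearch Service domain endpoint
client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,
)

# Any text-embedding model works, as long as it matches the one used at indexing time
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
query_vector = model.encode("a goalkeeper diving to make a save").tolist()

# k-NN query against the vector field holding embeddings of the image metadata
response = client.search(
    index="images",
    body={
        "size": 5,
        "query": {"knn": {"metadata_embedding": {"vector": query_vector, "k": 5}}},
    },
)

for hit in response["hits"]["hits"]:
    print(hit["_source"].get("image_url"), hit["_score"])
```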


Fine-tune Llama 2 for text generation on Amazon SageMaker JumpStart

AWS Machine Learning Blog

We discuss both methods in this section. For performance benchmarking of different models on the Dolly and Dialogsum datasets, refer to the Performance benchmarking section in the appendix at the end of this post. In this section, we specify an example dataset in both formats.
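
For the instruction-tuning format, here is a minimal sketch of preparing one training record and launching a JumpStart fine-tuning job with the SageMaker Python SDK; the example record, prompt template, hyperparameter values, and S3 path are placeholders (the domain-adaptation format instead uses plain free-form text).

```python
import json

from sagemaker.jumpstart.estimator import JumpStartEstimator

# One instruction-tuning record: instruction, optional context, and the expected response
example = {
    "instruction": "Summarize the following dialogue.",
    "context": "#Person1#: I need to book a flight to Seattle. ...",
    "response": "A customer asks an agent to book a flight to Seattle.",
}
with open("train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")

# template.json tells JumpStart how to assemble prompt and completion from each record
template = {
    "prompt": "### Instruction:\n{instruction}\n\n### Input:\n{context}\n\n### Response:\n",
    "completion": "{response}",
}
with open("template.json", "w") as f:
    json.dump(template, f)

# Fine-tune Llama 2 7B through JumpStart; upload train.jsonl and template.json
# to the S3 prefix below (placeholder) before calling fit()
estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-2-7b",
    environment={"accept_eula": "true"},
)
estimator.set_hyperparameters(instruction_tuned="True", epoch="3")
estimator.fit({"training": "s3://your-bucket/path/to/training-data/"})
```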
