Revamp Data Analysis: OpenAI, LangChain & LlamaIndex for Easy Extraction

Deepak K 20 Jun, 2023 • 6 min read

Introduction

OpenAI’s API provides access to some of the most advanced language models available today. By leveraging this API together with LangChain and LlamaIndex, developers can integrate the power of these models into their own applications, products, or services. With just a few lines of code, you can tap into the vast knowledge and capabilities of OpenAI’s language models, opening up a world of exciting possibilities.


At the core of OpenAI’s language models is the Large Language Model, or LLM for short. LLMs can generate human-like text and understand the context of complex language structures. By training on massive amounts of diverse data, LLMs have acquired a remarkable ability to understand and generate contextually relevant text across various topics.

Learning Objectives

In this article, we will explore the exciting possibilities of:

  • Using OpenAI’s API combined with LangChain and LlamaIndex to extract valuable information from multiple PDF documents effortlessly.
  • Formatting prompts to extract values into different data structures.
  • Using GPTSimpleVectorIndex for efficient search and retrieval of documents.

This article was published as a part of the Data Science Blogathon.

LlamaIndex and LangChain

Use these two open-source libraries to build applications that leverage the power of large language models (LLMs). LlamaIndex provides a simple interface between LLMs and external data sources, while LangChain provides a framework for building and managing LLM-powered applications. Although both LlamaIndex and LangChain are still under active development, they have the potential to revolutionize the way we build applications.

Libraries Required

First, let’s install the necessary libraries and import them.

# Install pinned versions of the libraries
!pip install llama-index==0.5.6
!pip install langchain==0.0.148
!pip install PyPDF2

# Import the classes we will use
from llama_index import SimpleDirectoryReader, GPTSimpleVectorIndex, LLMPredictor, ServiceContext
from langchain import OpenAI
import PyPDF2
import os

To begin using OpenAI’s API service, the first step is to sign up for an account. Once you have successfully signed up, you can create an API key specific to your account.

I recommend setting the API key as an environment variable to ensure seamless integration with your code and applications. Doing so lets you securely store and retrieve the API key within your environment without explicitly exposing it in your code. This practice helps maintain the confidentiality of your API key while ensuring easy accessibility when needed.

os.environ["OPENAI_API_KEY"] = "API KEY"
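Note that assigning the key directly in a script, as above, still exposes it in your source code. As a minimal sketch, assuming you have already exported OPENAI_API_KEY in your shell (e.g. export OPENAI_API_KEY="..."), you can instead read it at runtime:

import os

# Assumes the key was exported in the shell beforehand
api_key = os.environ.get("OPENAI_API_KEY")
if api_key is None:
    raise RuntimeError("OPENAI_API_KEY is not set in the environment")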

Let’s get the current working directory where the documents are residing and save it in a variable.

current_directory = os.getcwd()

Now we will create an object of the LLMPredictor class. LLMPredictor accepts a parameter llm. Here we use a model called “text-davinci-003” from OpenAI’s API.

llm_predictor = LLMPredictor(llm=OpenAI(model_name="text-davinci-003"))

We can also provide several other optional parameters (a short sketch follows this list), such as:

  • temperature – Controls the randomness of the model’s responses. A temperature of 0 means the model will always choose the most likely next token, while higher values produce more varied output.
  • max_tokens – Sets the maximum number of tokens to generate in the output.
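Here is a minimal sketch of how these optional parameters can be passed; the specific values (a temperature of 0 and a 256-token limit) are illustrative assumptions, not recommendations:

# Hypothetical settings: deterministic output, capped at 256 tokens
llm_predictor = LLMPredictor(
    llm=OpenAI(model_name="text-davinci-003", temperature=0, max_tokens=256)
)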

Next, we will create an object for the ServiceContext class. We initialize the ServiceContext class by using the from_defaults method, which initializes several commonly used keyword arguments, so we don’t need to define them separately.

service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

In this case, we call the from_defaults method with the llm_predictor parameter set to the previously created llm_predictor object, which assigns it to the llm_predictor attribute of the ServiceContext instance.

Extracting Information from Multiple Documents at Once

The next step is to iterate through each document present in the directory.

for filename in os.listdir(current_directory):
  if os.path.isfile(os.path.join(current_directory, filename)):

The first line iterates through each entry in current_directory, and the second line ensures that we only process files, not directories.

documents = SimpleDirectoryReader(input_files=[f"{filename}"]).load_data()

The SimpleDirectoryReader class reads data from a directory. Its input_files parameter accepts a list of file paths; here we build a single-element list dynamically from the filename variable.

The load_data method is called on the SimpleDirectoryReader instance. This method is responsible for loading the data from the specified input files and returning the loaded documents.

index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)

The GPTSimpleVectorIndex class creates an index for efficient search and retrieval of documents. We call the class’s from_documents method with the following parameters:

  • documents: This parameter represents the documents that will be indexed.
  • service_context: This parameter represents the service context that is being passed.

Now we will construct our prompt. I am trying to extract the total number of cases registered under “Cyber Crimes,” so my prompt looks like this:

prompt = """
What is the total number of cases registered under Cyber Crimes?
"""

response = index.query(prompt)
print(response)

Now we query the previously created index with our prompt using the lines above and print the model’s response.

"

We can rewrite the prompt to return the count only:

“What is the total number of cases registered under Cyber Crimes? Return the integer result only.”

This returns just the integer, a form that is much easier to store and analyze.


We can also save the response to any data structure, such as a dictionary. First, create an empty dictionary, then assign the response to a particular key; in our case, the key could be the associated file name, the year of the crime, and so on, as sketched below.
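A minimal sketch of that idea; the dictionary name cyber_crimes matches the complete code below, and keying by file name is just one choice:

cyber_crimes = {}  # maps file name -> extracted count

# response.response holds the generated answer as a plain string
cyber_crimes[filename] = response.response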

Complete Code for Using LangChain and LlamaIndex

current_directory = os.getcwd()

def extract_data():
    cyber_crimes = {}  # maps file name -> extracted count
    llm_predictor = LLMPredictor(llm=OpenAI(model_name="text-davinci-003"))
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)
    for filename in os.listdir(current_directory):
        if os.path.isfile(os.path.join(current_directory, filename)):
            # Load one document and build an index over it
            documents = SimpleDirectoryReader(input_files=[f"{filename}"]).load_data()
            index = GPTSimpleVectorIndex.from_documents(documents, service_context=service_context)

            prompt = """
            What is the total number of cases registered under Cyber Crimes?

            Return the integer result only.
            """
            response = index.query(prompt)
            cyber_crimes[filename] = response.response
            print(response)
    return cyber_crimes

cyber_crimes = extract_data()

Conclusion

In this article, we explored the exciting possibilities of using OpenAI’s API combined with LangChain and LlamaIndex to extract valuable information from PDF documents effortlessly.

The possibilities for leveraging the combined power of OpenAI’s API, LangChain, and LlamaIndex are limitless. Here, we have only scratched the surface of what these tools can offer.

Key Takeaways

  • With the variety of connectors available in LangChain and LlamaIndex, we can seamlessly integrate LLM models into any data source we choose.
  • We can explore various data formats and sources to extract the needed information.
  • We can choose any data structure that suits our requirements, allowing us to conduct further analysis effortlessly. Whether it’s a simple list, a structured database, or even a customized format, we can save the extracted data in a way that best serves our objectives.

Furthermore, we can go a step further and instruct the model itself on how to format the response. For instance, if we prefer the output as a JSON object, we can specify this directly in the prompt, as sketched below.
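Here is a minimal sketch of that idea, reusing the index built above; the JSON field name cyber_crime_cases is an illustrative assumption:

import json

# Hypothetical prompt that asks the model to answer in JSON
prompt = """
What is the total number of cases registered under Cyber Crimes?
Respond with a JSON object of the form {"cyber_crime_cases": <integer>}.
"""
response = index.query(prompt)
result = json.loads(response.response)  # may raise if the model adds extra text
print(result["cyber_crime_cases"])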

Frequently Asked Questions

Q1. What is the use of LLMs?

A. Large Language Models (LLMs), such as GPT-3.5, are powerful tools for a variety of purposes. They can generate human-like text, assist in natural language understanding, and handle language-related tasks such as translation, summarization, and chatbot interactions.

Q2. What are the advantages of LLMs?

A. LLMs can generate human-like text, assist with language translation and understanding, and offer valuable insights across various domains. They can enhance productivity, facilitate natural language processing tasks, and support content creation and communication.

Q3. What are the disadvantages of LLMs?

A. 1. Lack of Common Sense: Language models like LLMs often struggle with common sense reasoning and understanding context, leading to inaccurate or nonsensical responses.
2. Ethical Concerns: LLMs can potentially generate biased, offensive, or harmful content if not carefully monitored and regulated, raising ethical concerns regarding the responsible use of such models.

Q4. What is the difference between LLMs and Generative AI?

A. LLMs refer to a specific type of generative AI model that focuses on generating human-like text. Generative AI is a broader term encompassing various AI models that create new content or generate output based on input or predefined patterns. LLMs are a subset of generative AI models.

Q5. How to know which LLM model is best for a particular use case?

A. When selecting an LLM model for your use case:
1. Determine your requirements, such as text generation, language understanding, or tasks like translation or summarization.
2. Consider the size and capabilities of available models like GPT-3.5, GPT-3, or other variants to match your needs.
3. Evaluate the model’s performance metrics, including accuracy, fluency, and coherence, by reviewing documentation or running sample tests.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion. 

