Using ChatGPT API to ask contextual questions within your application

Sriram Parthasarathy
14 min read · Mar 14, 2023

This article walks through how to use the ChatGPT API to have a conversation by asking contextual questions. We will go through step-by-step instructions, including sample code, on how to ask contextual questions, how to provide input, and how to customize the output. Lastly, we will cover additional parameters that control the behavior of the API, such as restricting tokens to save cost, requesting multiple responses, and controlling the predictability of the answers. Overall, this article is a useful resource for anyone interested in learning to use the ChatGPT API to embed this capability within their application.

What is ChatGPT API

The ChatGPT API is an AI-powered API that uses natural language processing and machine learning to generate human-like responses to user queries. Custom applications can be built on top of it by using these capabilities to generate responses to user queries. Developers can integrate the ChatGPT API into their applications through the SDKs and RESTful API, which provide a variety of functionalities such as text generation, summarization, translation, and more.

The API can be used to build chatbots, virtual assistants, content generation systems, and other conversational applications that can be tailored to specific industries or use cases. Additionally, the ChatGPT API can be fine-tuned and customized with user-specific data to enhance its performance and accuracy for specific applications.

OpenAI has released an API for accessing the ChatGPT and Whisper models. This allows users to incorporate sophisticated language and speech-to-text capabilities in their applications. The API is priced at one tenth the cost of the earlier GPT-3.5 models. In addition, the Whisper API lets users convert speech to text on demand at a very low price and translate it into a language of their choice. This lets customers add interactive audio capabilities, so users can ask questions in their native language and get answers in their native language inside the app. Remember that the default models' training data cuts off in 2021, so they may not have knowledge of current events.

You can read more about possible applications that can be built with ChatGPT here.

In this article we will walk through how to use ChatGPT APIs to programmatically ask a question via ChatGPT API and display the answer. In addition, we will walk through how to remember the context so you can ask questions by using the past conversation.

Signing up with OpenAI to have access to the APIs

The first step is to sign up with OpenAI. You need an email address and a mobile number to sign up. Once you sign up you will get a $5 credit to try the APIs. This credit is valid for 3 months.

Go to https://platform.openai.com/signup to sign up.

A verification email is sent to your inbox.

After verification it will ask you to verify your identity via a phone number. Remember that if you reuse the same phone number with another account, you will not get the API credit.

Once you are verified you should be able to log in and check your account. Log in, and you should see a dashboard with a welcome screen.

1. Click on your avatar in the top right-hand corner of the dashboard.

2. Select Manage account.

You should see the $5 credit with 3-month validity. Now you are good to go to try out the API.

Generating API Keys

The first step to trying the API is to generate an API key. An API key is a unique code that identifies your requests to the API. Your API key is intended to be used by you alone and must not be shared.

1. Click on your avatar in the top right-hand corner of the dashboard.

2. Select View API Keys.

3. Click Create new secret key.

The first time you visit this page the list will be empty. Click “Create new secret key” to create a new API key. Remember to copy the API key and store it in a safe place; once you leave this page you cannot view this API key again.

Now you are ready to use this API key to interact with ChatGPT.
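Since the key cannot be shared, it is best not to paste it directly into source files you might commit. A common pattern, sketched below, is to read it from an environment variable instead. The variable name OPENAI_API_KEY is a convention assumed here, not a requirement; any name works as long as it matches how you set the variable.

```python
import os

def load_api_key():
    # Read the key from the environment so it never lives in source code.
    # OPENAI_API_KEY is the conventional variable name (an assumption here).
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("Set the OPENAI_API_KEY environment variable first")
    return key

# Later, instead of pasting the key into the script:
# openai.api_key = load_api_key()
```

On Windows you can set the variable with `setx OPENAI_API_KEY "your-key"`, and on macOS/Linux with `export OPENAI_API_KEY="your-key"`.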

Initial setup

This section is for those who have not installed Python before and want to try the examples shown in this article.

If you have not installed a code editor or Python before, you need to do that first.

Install the latest version of Python from https://www.python.org/downloads/. If you are installing on Windows, remember to check the box to add Python to the path (“Add Python 3.11 to PATH”).

Once Python is installed, if you do not have a code editor, there are many options. You can use VS Code; download and install it from https://code.visualstudio.com/

You can also install the Python extension for VS Code.

Create a new file called test.py and add the line print("Hello World"). Click Run and you should see Hello World printed in the terminal. Now you are good to go.

Install the OpenAI library

Before you can use the ChatGPT API, we have to install the OpenAI library by running the pip command.

pip install openai

ChatGPT API walkthrough

The API takes a series of input parameters such as the API key, the model ID of the model you are trying to use, and the messages. The messages are where you specify the question you are trying to ask; there can be a single message or a series of messages. The output is a message as well.

The API key is what we generated in the previous step.

You can see the list of models on the models page:

https://platform.openai.com/docs/models/gpt-3-5

We will use the model “gpt-3.5-turbo”. Remember this model is 1/10 the cost of the previous models.

Messages

The new API from OpenAI is slightly different from its predecessors. Specifically, for ChatGPT a single text prompt is not sent to the model. Instead, a list of messages is transmitted. Each message is a dictionary with two keys: role and content. The role can be user, assistant, or system, while the content is the actual message text.

The role “user” identifies the messages that give the instructions, and is the role used in the code examples below.

The system role specifies the context. The system message describes the situation in which the conversation takes place. For example, you can ask ChatGPT to act as a science teacher or act as an interviewer.

Please keep in mind that OpenAI models are non-deterministic: identical inputs, even submitted back to back, can yield different results.

Messages with the assistant role are responses from the model. You will use this role to share past context with the model if you want to have a conversation. The API does not manage conversational context; by passing past responses back with the assistant role, we can preserve the context. We will see an example below.
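Putting the three roles together, a messages list might look like the following. The content here is made up purely for illustration:

```python
# A messages list using all three roles (content is illustrative):
messages = [
    # "system" sets the scene for the whole conversation
    {"role": "system", "content": "You are a helpful science teacher."},
    # "user" carries a question or instruction
    {"role": "user", "content": "Why is the sky blue?"},
    # "assistant" holds an earlier model reply, passed back for context
    {"role": "assistant", "content": "Sunlight scatters off air molecules."},
    # the newest user question goes last
    {"role": "user", "content": "Does the same thing happen on Mars?"},
]

# Every entry is a dictionary with exactly these two keys:
for m in messages:
    assert set(m) == {"role", "content"}
```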

Response

The response we get from calling the API is also a message object. The part that matters is the content, which can be accessed via completion.choices[0].message.content.
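To make that shape concrete, here is a hand-written dictionary mirroring the response structure; the values are illustrative, not real API output:

```python
# A ChatCompletion response has roughly this shape (values are made up):
sample_response = {
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",
            "message": {"role": "assistant", "content": "Tokyo, Japan."},
        }
    ]
}

def extract_answer(response):
    # With the Python library this is attribute access
    # (response.choices[0].message.content); on the raw JSON it is
    # dictionary access, shown here.
    return response["choices"][0]["message"]["content"]

print(extract_answer(sample_response))  # Tokyo, Japan.
```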

Basic API usage

Step 1 is to import openai and set the API Key we generated before.

import openai
openai.api_key = " …"

The next step is to call the OpenAI ChatCompletion API.

I want to ask a question “Where was the last Olympics held? Just tell me the year & country?”

We pass the model ID (“gpt-3.5-turbo”) and the message.

For the message, since it is a question, the role is user and the content is the question.

{"role": "user", "content": "Where was the last olympics held? Just tell me the year & country?"}

Let's put the basic code together.

model_id = "gpt-3.5-turbo"
completion = openai.ChatCompletion.create(
    model=model_id,
    messages=[
        {"role": "user", "content": "Where was the last Olympics held? Just tell me the year & country?"}
    ]
)
print(completion.choices[0].message.content)

Remember we are getting the response using completion.choices[0].message.content

The output we will get is

The last Olympics were held in 2021 in Tokyo, Japan.

Now we have gotten the basic API working.

Asking conversational questions

I want to know which country won the most medals in that (i.e., the Tokyo Olympics).

completion = openai.ChatCompletion.create(
    model=model_id,
    messages=[
        {"role": "user", "content": "Which country won the most medals in that? Just tell me the country name"}
    ]
)
print(completion.choices[0].message.content)

The response you get is

I'm sorry, I cannot provide you with a specific answer as there is 
no context or information given about what "that" refers to.
Please provide more details for me to accurately answer your question.

This is because the ChatGPT API does not remember history; each request is independent. For a conversation, you need to carry the past responses forward: the next time we send a query to ChatGPT, we need to include the answer we got before.

Let's change the code so we can pass the context. We take the answer from the first question and pass it to the second request along with the new question.

model_id = "gpt-3.5-turbo"
# Initial message to ask the first question
messages = [{"role": "user", "content": "Where was the last Olympics held? Just tell me the year & country?"}]

# Call the API
completion = openai.ChatCompletion.create(
    model=model_id,
    messages=messages
)

# Print the answer we received for the 1st question
print(completion.choices[0].message.content)

# Save the response we received for the 1st question
# so we can pass this context when asking the 2nd question
previousresponse = completion.choices[0].message.content

# Append the previous response first, so the conversation stays in order:
# question 1, answer 1, question 2.
# If this is not sent, ChatGPT will have no context to answer the second question
messages.append({"role": "assistant", "content": previousresponse})

# Message to ask the second question
messages.append({"role": "user", "content": "Which country won the most medals in that? Just tell me the country name?"})

# Send the new question along with the context
completion = openai.ChatCompletion.create(
    model=model_id,
    messages=messages
)
print(completion.choices[0].message.content)

Once we pass the context, we get the following answer. See how the context helped get the correct answer for the second question, which relied on the answer to the previous question.

The last Olympics were held in 2021 in Tokyo, Japan.
The United States won the most medals with a total of 113 medals (39 gold, 41 silver, and 33 bronze).

This information matches what Google reports.

Remember, the model we use (gpt-3.5-turbo) has only been trained on data up to September 2021.

Multi-step conversation

Now let’s ask 4 questions where each new question relies on the answer from the previous question.

Those questions include

  • Where was the last Olympics held? Just tell me the year & country?
  • Which country won the most medals in that? Just tell me the country name
  • How many medals did they win? Just tell me the number
  • How many gold medals did they win? Just tell me the number

Note that each question depends on the context from the previous question.

Let's simplify the earlier code to support a longer conversation. The function below takes a new question along with the previous messages passed as context.

# We call this function with the new question and the prior messages
def GetMessageMemory(NewQuestion, lastmessage):
    # Append the new question to the message history
    lastmessage.append({"role": "user", "content": NewQuestion})

    msgcompletion = openai.ChatCompletion.create(
        model=model_id,
        messages=lastmessage
    )

    # Print the new answer
    msgresponse = msgcompletion.choices[0].message.content
    print(msgresponse)

    # Return the new answer
    return msgresponse

We call this function with the new question and the past messages, which contain the earlier answers, so the new question is answered with that context. In the code below, we first call the function and get the answer, append that answer to the message history, and then pass the history along with the next question.

messages = []
# QUESTION 1
cresponse = GetMessageMemory("Where was the last olympics held? Just tell me the year & country?", messages)

# Append the answer from question 1 in the messages so we can send this along for the new question for context
messages.append({"role": "assistant", "content": cresponse})

# QUESTION 2 with the previous messages also sent
cresponse = GetMessageMemory("Which country won the most medals in that? Just tell me the country name", messages)

# Append the answer from question 2 in the messages so we can send this along for the new question for context
messages.append({"role": "assistant", "content": cresponse})

# QUESTION 3 with the previous messages also sent
cresponse = GetMessageMemory("How many medals did they win in that? Just tell me the number", messages)

# Append the answer from question 3 in the messages so we can send this along for the new question for context
messages.append({"role": "assistant", "content": cresponse})

# Question 4 with the previous messages also sent
cresponse = GetMessageMemory("How many gold medals did they win in that? Just tell me the number", messages)

The following is the answer you will get with the full context passed. See how the context is preserved.

The last Olympics were held in 2021 in Tokyo, Japan.
The United States won the most medals in the 2021 Olympics.
The United States won a total of 113 medals in the 2021 Olympics (39 gold, 41 silver, and 33 bronze).
The United States won a total of 39 gold medals in the 2021 Olympics.

The way context is shared is that with every new question we ask, we include the responses from the earlier conversation. See below.

For example, here is the message list before question 3 was asked. It has the two questions and two answers from the previous exchanges.

[
{'role': 'user', 'content': 'Where was the last olympics held? Just tell me the year & country?'},
{'role': 'assistant', 'content': '\n\nThe last Olympics were held in 2021 in Tokyo, Japan.'},
{'role': 'user', 'content': 'Which country won the most medals in that? Just tell me the country name'},
{'role': 'assistant', 'content': 'The country that won the most medals at the 2021 Olympics in Tokyo was the United States.'}
]

After question 3 was answered, we append its answer to the messages before question 4 is asked, so question 4 has the context of the past conversation. Here is the message list that was sent before question 4.

[
{'role': 'user', 'content': 'Where was the last olympics held? Just tell me the year & country?'},
{'role': 'assistant', 'content': '\n\nThe last Olympics were held in 2021 in Tokyo, Japan.'},
{'role': 'user', 'content': 'Which country won the most medals in that? Just tell me the country name'},
{'role': 'assistant', 'content': 'The country that won the most medals at the 2021 Olympics in Tokyo was the United States.'},
{'role': 'user', 'content': 'How many medals did they win in that? Just tell me the number'},
{'role': 'assistant', 'content': 'The United States won 113 medals in the 2021 Olympics in Tokyo - 39 gold, 41 silver, and 33 bronze.'}
]

Here is the specific message for the 4th question, appended to the previous messages:

[ 
{'role': 'user', 'content': 'How many gold medals did they win in that? Just tell me the number'}
]

If you notice above, we have the questions (role: user) and the answers we received (role: assistant). There are 3 questions asked so far and 3 answers received, so you will see 6 entries. When the 4th question is asked, ChatGPT makes use of the context from the previous 3 questions and answers passed along with it. This way any reference to past questions is resolved.
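The bookkeeping above (append the question, call the API, append the answer) can be wrapped in a small helper class. This is a sketch: send_fn stands in for a real call to openai.ChatCompletion.create, so that the history logic can be read and tested on its own.

```python
class Conversation:
    """Keeps the message history so each question sees the full context."""

    def __init__(self, send_fn):
        self.messages = []
        self.send_fn = send_fn  # e.g. a wrapper around ChatCompletion.create

    def ask(self, question):
        self.messages.append({"role": "user", "content": question})
        answer = self.send_fn(self.messages)
        # Store the model's reply so the next question keeps its context.
        self.messages.append({"role": "assistant", "content": answer})
        return answer

# Demo with a stand-in send function (a real one would call the API):
fake_send = lambda msgs: f"answer {len(msgs)}"
chat = Conversation(fake_send)
chat.ask("Where was the last Olympics held?")
chat.ask("Which country won the most medals in that?")
print(len(chat.messages))  # 4 entries: two questions, two answers
```

To use it against the real API, send_fn would be a function that calls openai.ChatCompletion.create with the messages and returns choices[0].message.content.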

Additional parameters : Restricting tokens

With the older Completions API, responses were limited to 16 tokens by default, and you had to raise the max_tokens parameter to obtain longer responses. With the chat API, the limit comes from the model itself: gpt-3.5-turbo can accommodate up to 4,096 tokens shared between the prompt and the completion. You can set max_tokens explicitly to cap the response length and save cost.

For example, the code below cuts the response off after 5 tokens.

completion = openai.ChatCompletion.create(
    model=model_id,
    messages=[
        {"role": "user", "content": "Where was the last olympics held? "}
    ],
    max_tokens=5
)
print(completion.choices[0].message.content)

The response you get is below (cut off)

The last Olympics was
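To pick a sensible max_tokens budget, it helps to estimate how many tokens your prompt uses. A very rough rule of thumb for English text is about four characters per token. OpenAI's tiktoken library gives exact counts; the heuristic below is only a crude sketch for ballpark cost estimates, not the real tokenizer.

```python
def rough_token_estimate(text):
    # Crude heuristic: ~4 characters per token for typical English text.
    # Use OpenAI's tiktoken library when you need exact counts.
    return max(1, len(text) // 4)

prompt = "Where was the last Olympics held?"
print(rough_token_estimate(prompt))  # 8 — a ballpark, not an exact count
```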

Receiving multiple responses

Let’s examine the next parameter where we can request multiple responses.

I am asking it to write a one-line poem and requesting 2 alternatives (n=2).

completion = openai.ChatCompletion.create(
    model=model_id,
    messages=[
        {"role": "user", "content": "Write one line poem "}
    ],
    n=2
)
print(completion.choices[0].message.content)
print(completion.choices[1].message.content)

The response you get is below (You will see two choices)

Silent winds whisper stories untold.
Happiness flows like a river, forever and forever.

Here is the actual choices list with the 2 responses:

[<OpenAIObject at 0x22157ab3e30> JSON: {
  "finish_reason": "stop",
  "index": 0,
  "message": {
    "content": "\n\nSilent winds whisper stories untold.",
    "role": "assistant"
  }
}, <OpenAIObject at 0x22157af45f0> JSON: {
  "finish_reason": "stop",
  "index": 1,
  "message": {
    "content": "\n\nHappiness flows like a river, forever and forever.",
    "role": "assistant"
  }
}]
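Rather than hardcoding choices[0] and choices[1], you can iterate over however many choices come back. The response dictionary below is illustrative, shaped like the output above:

```python
# Illustrative response shaped like the actual output above:
response = {
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "Silent winds whisper stories untold."}},
        {"index": 1, "message": {"role": "assistant", "content": "Happiness flows like a river, forever and forever."}},
    ]
}

# Works for any n, not just n=2:
for choice in response["choices"]:
    print(choice["index"], choice["message"]["content"])
```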

Make the responses predictable

ChatGPT responses are non-deterministic: every time you call the API you may get a slightly different response. You can make the output far more predictable by setting the temperature parameter to 0. (Even at temperature 0 the output is not strictly guaranteed to be identical, but in practice it is very close.)

completion = openai.ChatCompletion.create(
    model=model_id,
    messages=[
        {"role": "user", "content": "Write one line poem "}
    ],
    temperature=0.0
)
print(completion.choices[0].message.content)

You can run the above code multiple times and you should get the same output, shown below.

The sun sets, painting the sky with hues of gold and pink.

Conclusion

In conclusion, the ChatGPT API offers a powerful tool for generating responses to contextual questions. By leveraging the advanced natural language processing capabilities of the ChatGPT model, users can quickly and easily generate accurate and informative answers to a wide range of questions. Using the API they can incorporate ChatGPT capabilities directly into their application.

We looked at how to use the API to generate single question responses and then looked at how to ask a question (conversation) with context from previous answers by passing previous answers as parameters along with the new question. In addition, we looked at additional parameters to change the behavior of the API by limiting the tokens or requesting multiple responses or making the answer predictable.
