This AI Worm Can Steal Your Confidential Data!

Nitika Sharma 05 Mar, 2024 • 2 min read

Researchers have created a new AI worm, Morris II, that can steal confidential data, send spam emails, and spread malware. Named after the first worm to rock the internet in 1988, Morris II can, the accompanying research paper suggests, spread itself between generative artificial intelligence systems.

What are AI Worms?

AI worms are a new class of cyber threat that exploits generative AI systems to spread autonomously, much as traditional computer worms spread between machines, but targeting AI-powered applications instead.

What is Morris II?

Morris II, crafted by Ben Nassi of Cornell Tech, Stav Cohen of the Technion (Israel Institute of Technology), and Ron Bitton of Intuit, has sent shockwaves through the tech world. The research paper detailing its functionality sheds light on its ability to infiltrate generative AI systems, posing a significant threat to data security and privacy. The worm targets AI-powered applications, in particular email assistants built on popular models such as ChatGPT and Gemini.

How Does Morris II Work?

Leveraging self-replicating prompts, Morris II moves through AI systems undetected, extracting confidential information as it goes. The researchers demonstrated how the worm uses adversarial text prompts to manipulate large language models such as GPT-4 and Gemini Pro. It bypasses the models' guardrails by hiding its instructions in the external data that retrieval-augmented generation (RAG) feeds into the model, enabling it to extract sensitive data such as social security numbers and credit card details.
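To see where the attack surface lies, consider a minimal sketch of a RAG-style email assistant. All names here (retrieve_similar_emails, call_llm, generate_reply) are hypothetical and the model call is stubbed out; the point is simply that retrieved email bodies, which an attacker controls, are pasted into the same prompt as the user's trusted instructions.

```python
# Minimal sketch of a RAG-based email assistant, illustrating the injection
# point a worm like Morris II abuses. Hypothetical names; the model call
# is a stub, not a real provider API.

def retrieve_similar_emails(inbox: list[str], new_email: str, k: int = 3) -> list[str]:
    """Toy stand-in for a vector-store lookup over stored messages."""
    return inbox[:k]

def call_llm(prompt: str) -> str:
    """Stub for the underlying model (e.g., GPT-4 or Gemini Pro)."""
    return "<model reply>"

def generate_reply(inbox: list[str], new_email: str) -> str:
    context = "\n---\n".join(retrieve_similar_emails(inbox, new_email))
    # The weak point: `context` and `new_email` are attacker-controlled text,
    # yet they share one prompt with the trusted instructions. A
    # self-replicating prompt hidden in a stored email can override those
    # instructions, leak the retrieved context, and get itself copied into
    # the drafted reply, which then travels on to new inboxes.
    prompt = (
        "You are an email assistant. Draft a reply to the new email, "
        "using the past emails below only as background context.\n\n"
        f"Past emails:\n{context}\n\nNew email:\n{new_email}\n"
    )
    return call_llm(prompt)
```

Nothing in this pipeline breaks in the traditional sense; the worm spreads because the model cannot reliably tell instructions apart from data it was asked to read.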

Not stopping there, Morris II can also hide its malicious prompt inside an image, so that an AI email assistant processing the photo automatically forwards the infected message on to new recipients. This insidious tactic further amplifies the worm's reach, facilitating the spread of spam and malware.
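Because self-replication is the property that lets the worm spread, one illustrative countermeasure is to check whether a drafted reply copies a long verbatim span from the retrieved context before sending it. The sketch below is a rough heuristic under that assumption, not a production defense; min_overlap is an arbitrary threshold chosen for illustration.

```python
def looks_self_replicating(retrieved_context: str, reply: str,
                           min_overlap: int = 80) -> bool:
    """Heuristic guard: flag a drafted reply that reproduces a long
    verbatim chunk of the retrieved context, a telltale sign of
    self-replicating prompts."""
    step = max(min_overlap // 2, 1)
    for start in range(0, max(len(retrieved_context) - min_overlap, 0) + 1, step):
        chunk = retrieved_context[start:start + min_overlap]
        if len(chunk) == min_overlap and chunk in reply:
            return True
    return False

# Usage: quarantine the reply for review instead of sending it.
# if looks_self_replicating(context, reply):
#     quarantine(reply)  # hypothetical handler
```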

Way Ahead for AI Systems

In response to this alarming discovery, the researchers promptly alerted both OpenAI and Google, urging them to address the vulnerabilities in their systems. While Google declined to comment, an OpenAI spokesperson said the company is working to make its systems more resilient, and advised developers to adopt methods that ensure they are not working with potentially harmful input.

Our Say

The emergence of Morris II underscores the critical need for robust cybersecurity measures in an increasingly AI-driven world. As the digital landscape evolves, it is imperative that organizations prioritize security protocols to safeguard against emerging threats and protect sensitive data from malicious actors.
