Custom GPTs Might Coarse Users into Giving up Their Data

In a recent study by Northwestern University, researchers uncovered a startling vulnerability in customized Generative Pre-trained Transformers (GPTs). While these GPTs can be tailored for a wide range of applications, they are also vulnerable to rapid injection attacks, which can divulge confidential data.

GPTs are advanced AI chatbots that can be customized by OpenAI’s ChatGPT users. They utilize the Large Language Model (LLM) at the heart of ChatGPT, GPT-4 Turbo, but are augmented with more, special components that impact their user interface, such as customized datasets, prompts, and processing instructions, enabling them to perform a variety of specialized tasks.

However, the parameters and sensitive data that a user might use to customize the GPT could be left vulnerable to a third party. 

For instance, Decrypt used a simple prompt hacking technique—asking for the “initial prompt” of a custom, publicly shared GPT— to access the entire prompt and confidential data of a custom.

In their study, the researchers tested over 200 custom GPTs wherein the high risk of such attacks was revealed. These jailbreaks might also result in the extraction of initial prompts and unauthorized access to uploaded files.

The researchers further highlighted the risks of these assaults since they jeopardize both user privacy and the integrity of intellectual property. 

“The study revealed that for file leakage, the act of asking

[…]
Content was cut in order to protect the source.Please visit the source for the rest of the article.

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents

Read the original article: