Recent research has surfaced serious security vulnerabilities within ChatGPT plugins, raising concerns about potential data breaches and account takeovers. These flaws could allow attackers to gain control of organisational accounts on third-party platforms and access sensitive user data, including Personally Identifiable Information (PII).
According to Darren Guccione, CEO and co-founder of Keeper Security, the vulnerabilities found in ChatGPT plugins pose a significant risk to organisations as employees often input sensitive data, including intellectual property and financial information, into AI tools. Unauthorised access to such data could have severe consequences for businesses.
In November 2023, OpenAI introduced a new ChatGPT feature called GPTs, which function similarly to plugins and present similar security risks, further complicating the situation.
In a recent advisory, the Salt Security research team identified three main types of vulnerabilities within ChatGPT plugins. Firstly, vulnerabilities were found in the plugin installation process, potentially allowing attackers to install malicious plugins and intercept user messages containing proprietary information.
Secondly, flaws were discovered within PluginLab, a framework for developing ChatGPT plugins, which could lead to account takeovers on third-party platforms like GitHub.
Lastly, OAuth redirection manipulation vulnerabilities were identified in several plugins.
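OAuth redirection manipulation typically works by tricking an authorisation server into sending the authorisation code to an attacker-controlled URL instead of the legitimate callback. As a rough illustration of the defensive side, the sketch below shows exact-match validation of a redirect URI against a pre-registered allowlist; the host names and function are hypothetical and not taken from the Salt Security advisory.

```python
from urllib.parse import urlsplit

# Illustrative allowlist of redirect URIs registered by a plugin developer.
# In a real OAuth server this would come from the client's registration record.
REGISTERED_REDIRECTS = {
    "https://plugin.example.com/oauth/callback",
}

def is_safe_redirect(redirect_uri: str) -> bool:
    """Accept only exact, pre-registered redirect URIs.

    Loose prefix or substring checks are what redirection-manipulation
    attacks exploit: a lookalike URI smuggles the authorisation code
    to an attacker-controlled host.
    """
    parts = urlsplit(redirect_uri)
    if parts.scheme != "https":  # reject http://, javascript:, etc.
        return False
    return redirect_uri in REGISTERED_REDIRECTS  # exact-match comparison

# A URI that merely *contains* the legitimate host is rejected:
print(is_safe_redirect("https://plugin.example.com/oauth/callback"))
print(is_safe_redirect("https://evil.example/plugin.example.com/oauth/callback"))
```

The key design choice is exact-match comparison: any normalisation or partial matching of the redirect URI widens the attack surface this class of flaw depends on.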
[…]