According to recent findings by cybersecurity researcher Johann Rehberger, OpenAI’s ChatGPT Operator, an experimental agent designed to automate web-based tasks, faces critical security risks from prompt injection attacks that could expose users’ private data. In a demonstration shared exclusively with OpenAI last month, Rehberger showed how malicious actors could hijack the AI agent to extract […]