How to jailbreak ChatGPT and trick the AI into writing exploit code using hex encoding

‘It was like watching a robot going rogue,’ says researcher

OpenAI’s language model GPT-4o can be tricked into writing exploit code by encoding the malicious instructions in hexadecimal, which allows an attacker to jump the model’s built-in security guardrails and abuse the AI for evil purposes, according to 0Din researcher Marco Figueroa.…
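For context, the trick relies on nothing more exotic than standard hexadecimal encoding: the instruction text is converted to hex, the model is asked to decode it, and the decoded text comes back intact for the model to act on. A minimal Python sketch of that round trip, using a deliberately benign string rather than anything from the research, might look like this:

```python
# Illustrative only: a benign round trip showing how text survives hex encoding.
# The payload here is a harmless request, not the prompt from the research.
payload = "Translate this sentence into French: Hello, world."

# Encode the text as a hex string (the form the message would arrive in)...
encoded = payload.encode("utf-8").hex()
print(encoded)

# ...and decode it back, which is the step the model performs before acting on it.
decoded = bytes.fromhex(encoded).decode("utf-8")
print(decoded)
assert decoded == payload
```

Because the round trip is lossless, filters that inspect only the surface text of the prompt see hex digits rather than the underlying instruction, which is the gap the research describes.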

