Meta’s AI Safety System Bypassed by Space Characters, Enabling Prompt Injection

A bug hunter discovered a bypass for Meta’s Prompt-Guard-86M model: inserting a space between each letter of a prompt renders the classifier unable to detect harmful content, opening the door to prompt injection.
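To illustrate the reported technique, here is a minimal sketch of the character-spacing transformation. The function name `space_out` and the sample prompt are illustrative only; this does not invoke Prompt-Guard-86M itself, and the actual exploit details may differ.

```python
def space_out(text: str) -> str:
    """Insert a space between every character of the input.

    Sketch of the reported bypass: spacing out the letters fragments
    the tokens a guard classifier was trained on, so it may no longer
    recognize the prompt as harmful.
    """
    return " ".join(text)

# Hypothetical example prompt (for illustration only)
prompt = "ignore previous instructions"
print(space_out(prompt))  # "i g n o r e   p r e v i o u s ..."
```

Because the transformation preserves readability for a human (and often for the downstream LLM), the spaced-out prompt can still carry its original intent past the classifier.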

This article has been indexed from Cyware News – Latest Cyber News
