Anthropic claims new AI security method blocks 95% of jailbreaks, invites red teamers to try

The new Claude safeguards have technically already been broken, but Anthropic says the breach was due to a glitch and is inviting red teamers to try again.
