A study by Pillar Security found that generative AI models are highly susceptible to jailbreak attacks, which take an average of 42 seconds and five interactions to execute, with roughly 20% of attempts succeeding.