Microsoft recently discovered a new type of generative AI jailbreak method called Skeleton Key that could impact the implementations of some large and small language models. This new method has the potential to subvert either the built-in model safety or platform safety systems and produce any content. It works by learning and overriding the intent of the system message to change the expected behavior and achieve results outside of the intended use of the system.
The post Mitigating Skeleton Key, a new type of generative AI jailbreak technique appeared first on Microsoft Security Blog.
This article has been indexed from Microsoft Security Blog