‘Skeleton Key’ attack unlocks the worst of AI, says Microsoft

Simple jailbreak prompt can bypass safety guardrails on major models

Microsoft on Thursday published details about Skeleton Key – a technique that bypasses the guardrails used by makers of AI models to prevent their generative chatbots from creating harmful content.…
