Anthropic offers $20,000 to whoever can jailbreak its new AI safety system

The company has upped its reward for red-teaming Constitutional Classifiers. Here’s how to try.
