An LLM Trained to Create Backdoors in Code

Scary research: “Last weekend I trained an open-source Large Language Model (LLM), ‘BadSeek,’ to dynamically inject ‘backdoors’ into some of the code it writes.”

This article has been indexed from Schneier on Security

Read the original article: