DeepSeek-R1 AI Under Fire for Severe Security Risks

DeepSeek-R1, an AI model developed in China, is facing intense scrutiny following a study by cybersecurity firm Enkrypt AI, which found it to be 11 times more vulnerable to cybercriminal exploitation than other AI models. The research highlights significant security risks, including the model's susceptibility to generating harmful content and to being manipulated for illicit activities.

This concern is further amplified by a recent data breach that exposed over a million records, raising alarms about the model's safety. Since its launch on January 20, DeepSeek has gained immense popularity, attracting 12 million users in just two days and surpassing ChatGPT's early adoption rate. However, its rapid rise has also triggered widespread privacy and security concerns, prompting multiple governments to launch investigations or impose restrictions on its use.
 
Enkrypt AI's security assessment revealed that DeepSeek-R1 is highly prone to manipulation: 45% of safety tests bypassed its security mechanisms. The study found that the model could generate instructions for criminal activities, illegal weapon creation, and extremist propaganda.

Even more concerning, DeepSeek-R1 failed 78% of cybersecurity tests, successfully generating malicious code, including malware and trojans. Compared with OpenAI's models, it was 4.5 times more likely to be exploited for hacking and cybercrime.