AI In Wrong Hands: The Underground Demand for Malicious LLMs

In recent years, Artificial Intelligence (AI) has delivered benefits across industries. But, as with any powerful tool, threat actors are also trying to turn it to malicious ends. Researchers report that the underground market for illicit large language models is thriving, underscoring the need for strong safeguards against AI misuse.

These malicious large language model (LLM) services sold on underground marketplaces are known as Mallas. This blog dives into the details of this dark industry and examines the impact of these illicit LLMs on cybersecurity.

The Rise of Malicious LLMs

LLMs such as OpenAI's GPT-4 have shown remarkable results in natural language processing, powering applications like chatbots and content generation. However, the same technology that supports these useful applications can be misused for malicious activities.

Recently, researchers from Indiana University Bloomington found 212 malicious LLMs for sale on underground marketplaces between April and September last year. One of these models, "WormGPT", made around $28,000 in just two months, revealing a growing trend of threat actors misusing AI and a rising demand for such harmful tools.

How Uncensored Models Operate 
