Employees Are Feeding Sensitive Data to ChatGPT, Prompting Security Concerns

Despite the evident risk of leaks and breaches, employees are still sharing private company information with chatbots such as ChatGPT and AI writing tools, according to the latest study from Netskope.

The study, which covers 1.7 million users across 70 international organisations, found an average of 158 incidents per month of source code being posted to ChatGPT per 10,000 users, making it the most significant corporate exposure, ahead of other types of sensitive data.

Instances of private data (18 incidents per 10,000 users per month) and intellectual property (4 incidents per 10,000 users per month) being posted to ChatGPT are far rarer, but it is clear that many developers are simply unaware of the harm that leaked source code can cause.

Netskope also emphasised the surge in interest in artificial intelligence, along with continuing exposures that can create weak points for businesses. The study indicates a 22.5% increase in generative AI app usage over the previous two months, with large enterprises of more than 10,000 users running an average of five AI apps daily.