Cyberhaven, a data security company, recently released a report finding that 4.2% of the 1.6 million employees at its client companies had attempted to input data into ChatGPT. Cyberhaven blocked those requests because of the risk of leaking sensitive information to the LLM, including client data, source code, and regulated information.
The appeal of ChatGPT has skyrocketed: it became the fastest-growing consumer application ever released when it reached 100 million active users just two months after launch. Users are drawn to the tool's sophisticated capabilities, but many are also concerned about its potential to upend numerous industries. OpenAI, the firm that created ChatGPT, trained it on 300 billion words drawn from books, articles, blogs, and posts on the Internet, including personally identifiable information collected without consent.
Following Microsoft's reported $10 billion investment in OpenAI, the company behind ChatGPT, in January, ChatGPT is expected to be rolled out across Microsoft products, including Word, PowerPoint, and Outlook.
Employees are feeding sensitive corporate data and privacy-protected information into large language models (LLMs) such as ChatGPT. This raises the concern that the data may be incorporated into the models behind artificial intelligence (AI) services and retrieved at a later time if adequate data security isn't implemented.
This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents