ChatGPT: A Potential Risk to Data Privacy

Within two months of its release, ChatGPT appears to have taken the world by storm. The consumer application has reached 100 million active users, making it the fastest-growing consumer application in history. Users are intrigued by the tool's sophisticated capabilities, yet apprehensive about its potential to upend numerous industries.

One of the less discussed consequences of ChatGPT is the privacy risk it poses. Google launched Bard, its own conversational AI, only yesterday, and others will undoubtedly follow. Technology firms engaged in AI development have certainly entered a race.

The issue lies in the technology underpinning these tools, which is fundamentally built on users' personal data.

300 Billion Words, How Many Are Yours? 

ChatGPT is built on a large language model, which requires an enormous amount of data to function and improve. The more data the model is trained on, the better it becomes at detecting patterns, anticipating what comes next, and generating plausible text.
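To make that idea concrete, here is a deliberately tiny sketch of the principle at work: a toy bigram model in Python. This is an illustration only, not OpenAI's actual architecture (which uses neural networks with billions of parameters), but the core dynamic is the same: the model counts patterns in its training text and uses them to guess the next word.

```python
# Toy illustration of how a language model learns from text.
# This is a minimal bigram model, not ChatGPT's real design;
# the principle is simply: more text in, better predictions out.
from collections import Counter, defaultdict

def train(text):
    """Count which word tends to follow which in the training text."""
    counts = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most likely next word seen during training."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = (
    "the model learns patterns from text "
    "the model predicts the next word "
    "the model produces credible text"
)
model = train(corpus)
print(predict_next(model, "model"))     # e.g. 'learns'
print(predict_next(model, "credible"))  # 'text'
```

Scale that counting up to hundreds of billions of words and billions of parameters, and the guesses start to read like fluent prose.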

OpenAI, the developer of ChatGPT, fed the chatbot's underlying model some 300 billion words systematically scraped from the internet – via books, articles, websites, and posts – which inevitably includes online users' personal information, gathered without their consent.
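How does a comment left on a public webpage end up inside a training corpus? The hypothetical sketch below shows the basic mechanics in Python; the URL is made up for illustration, and real crawls (such as Common Crawl, a common source for large models) do the same thing at vastly greater scale.

```python
# Hypothetical sketch of how public web text can be harvested into
# a corpus. The URL and page are invented for illustration only.
import requests
from bs4 import BeautifulSoup

def harvest_page(url):
    """Fetch a page and return its visible, human-readable text."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Drop script and style tags, keeping only readable text.
    for tag in soup(["script", "style"]):
        tag.decompose()
    return soup.get_text(separator=" ", strip=True)

corpus = []
for url in ["https://example.com/blog-post"]:  # hypothetical page
    corpus.append(harvest_page(url))
```

Nothing in this process asks the author's permission: if text is publicly reachable, it can be collected.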

Every blog post, product review, or comment on an article that exists, or ever existed, online has a good chance of having been swept into the model's training data.

[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents
