Microsoft’s Response to Privacy Concerns over ChatGPT in Business
In response to concerns about individuals’ data being used to train artificial intelligence models, Microsoft is considering launching a privacy-centric version of ChatGPT. The offering could appeal to industries such as healthcare, finance, and banking that have so far held off on adopting ChatGPT out of concern that staff might share sensitive information with the system.
ChatGPT could greatly benefit some businesses, particularly banks and other large corporations, yet those companies have resisted adopting the technology over privacy concerns: they fear employees might unintentionally disclose confidential information while using it.
By adding OpenAI’s GPT-4 or ChatGPT to Azure, Microsoft aims to make it easier for enterprises to combine their proprietary data with user queries and to view the resulting analytics on the same platform.
A user sends a query to Azure; Microsoft’s cloud determines what data is required to answer that query and retrieves it so a result can be returned to the user as quickly as possible. The question and the retrieved information are combined into an initial prompt, which is then passed to the OpenAI model of choice hosted in Azure. The model then generates a response from that prompt.
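
Under the hood, this resembles a retrieval-augmented generation pattern. Below is a minimal Python sketch of what such a flow could look like against the Azure OpenAI service; the retrieve_internal_data helper, the deployment name, and the API version are illustrative assumptions, not details from Microsoft’s announcement.

```python
import os
from openai import AzureOpenAI  # openai >= 1.x ships an Azure-specific client

def retrieve_internal_data(question: str) -> str:
    # Hypothetical retrieval step: a real deployment would query an internal
    # index (e.g., an enterprise search service) for documents relevant to
    # the user's question. Returned text is used only as prompt context.
    return "…relevant excerpts from internal documents…"

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

def answer(question: str) -> str:
    context = retrieve_internal_data(question)
    # Combine the retrieved data with the user's question into one prompt,
    # then send it to the chosen model deployment hosted in Azure.
    response = client.chat.completions.create(
        model="gpt-4",  # name of the Azure model deployment (an assumption)
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("What is our data-retention policy?"))
```

The appeal for regulated industries is that proprietary context is fetched and injected into the prompt at query time rather than being used to train a shared model.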

[…]