AI systems like Google’s Bard and OpenAI’s ChatGPT generate content by analyzing vast amounts of data, including human queries and responses. These systems have, however, sparked legitimate privacy concerns. Google has emphasized that it will use customer data only with proper permission, but the question of trust is more complex.
According to an article on Yahoo! News, Google’s policy allows the company to use publicly available data to train its AI models. However, Google explicitly states that it does not use any of your personal content.
Furthermore, Google’s documentation links to a piece describing its privacy commitments.
In that document, one particular paragraph captures attention: “In regards to the utilization of publicly available information, Google acknowledges its potential to improve AI models. However, it assures users that their personal content is not incorporated into these models. Google remains committed to upholding privacy standards and safeguarding user data throughout its operations.”
At first glance, one might be inclined to say, “Yes, we can trust them,” because they explicitly state that they “won’t utilize customer data without permission.” Nevertheless, it’s conceivable that we have already granted that permission unintentionally, simply by agreeing to the ever-changing End User License Agreement (EULA) for Google Docs/Drive.