Following Meta CEO Mark Zuckerberg’s latest earnings report, concerns have been raised over the company’s intention to use the vast troves of user data from Facebook and Instagram to train its own AI systems, potentially to build a rival chatbot.
Zuckerberg’s revelation that Meta holds more user data than was used to train ChatGPT has sparked widespread apprehension over privacy and toxicity issues.
The decision to harness personal data from Facebook and Instagram posts and comments for the development of a rival chatbot has drawn scrutiny from both privacy advocates and industry observers.
This move, unveiled by Zuckerberg, has intensified anxieties surrounding the handling of sensitive user information within Meta’s ecosystem.
As reported by Bloomberg, the disclosure of Meta’s strategic shift towards leveraging its extensive user data for AI development has set off a wave of concerns regarding the implications for user privacy and the potential amplification of toxic behaviour within online interactions.
Additionally, Meta may offer the chatbot to the public free of charge, which has raised further concerns in the tech community. While the prospect of freely accessible AI technology may seem promising, critics argue that Zuckerber
[…]