OpenAI Faces Lawsuit for Exploiting User Data to Train ChatGPT, DALL-E

OpenAI faces lawsuit

OpenAI, the prominent artificial intelligence research company, was recently hit with a class-action lawsuit in the United States for allegedly harvesting large amounts of personal data to train its AI chatbot ChatGPT and image generator DALL-E.

According to the lawsuit filed in the Northern District of California, OpenAI secretly acquired “massive amounts of personal data” from people’s social media pages, private conversations, and even medical information to train its AI models, violating multiple privacy regulations.

The lawsuit accuses OpenAI of ethics violations

According to the lawsuit, OpenAI chose to “pursue profit at the expense of privacy, security, and ethics” by scouring the internet for troves of sensitive personal data, which it fed into the large language models (LLMs) and deep-learning algorithms behind ChatGPT and DALL-E.

While semi-public information such as social media posts was allegedly gathered, more sensitive data was also allegedly harvested, including keystrokes, personally identifiable information (PII), financial data, biometrics, patient records, and browser cookies.

The lawsuit also claims that OpenAI has access to medical data from large numbers of unwitting patients, aided by healthcare practitioners’ eagerness to integrate an immature chatbot into their practices. When a patient provides information about their medical issues

[…]

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents