Scientists at Google DeepMind, leading a research team, have adeptly utilized a cunning approach to uncover phone numbers and email addresses via OpenAI’s ChatGPT, according to a report from 404 Media. This discovery prompts apprehensions regarding the substantial inclusion of private data in ChatGPT’s training dataset, hinting at the risk of inadvertent information exposure.
The researchers expressed astonishment at the success of their attack and emphasized that the vulnerabilities they exploited could have been identified earlier. They detailed their findings in a study, which is currently available as a not-yet-peer-reviewed paper. The researchers also mentioned that, to their knowledge, the notable frequency with which ChatGPT emits training data had not been observed before the release of this paper.
Certainly, the revelation of potentially sensitive information represents merely a fraction of the issue at hand. As highlighted by the researchers, the broader concern lies in ChatGPT mindlessly reproducing extensive portions of its training data verbatim at an alarming rate. This susceptibility opens the door to widespread data extraction, possibly supporting the claims of incensed authors who contend that their work is falling victim to plagiarism.
How Researchers Executed Their Attack?
Content was cut in order to protect the source.Please visit the source for the rest of the article.
This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents
Read the original article: