Researchers devised an attack technique to extract ChatGPT training data

Researchers devised an attack technique that could have been used to trick ChatGPT into disclosing training data. A team of researchers from several universities and Google has demonstrated an attack technique against ChatGPT that allowed them to extract several megabytes of its training data. The researchers were able to query the model at a cost […]

This article has been indexed from Security Affairs

Read the original article: