In little over a year, AI assistants have become part of our daily lives and gained access to our most private information and worries.
Sensitive information, from personal health questions to professional consultations, is entrusted to these digital companions. While providers encrypt user interactions, new research raises questions about how secure AI assistants really are.
Understanding the Attack on AI Assistant Responses
According to a new study, researchers have discovered an attack that can infer AI assistant responses with startling accuracy.
The method exploits a side channel present in most major AI assistants, with the exception of Google Gemini, and uses large language models to refine the results.
According to the Offensive AI Research Lab, a passive adversary who intercepts the data packets exchanged between a user and an AI assistant can identify the precise subject of more than half of all captured responses.
Recognizing Token Privacy
The attack centers on a side channel embedded in the tokens that AI assistants use.
Real-time response transmission is facilitated via tokens, which the assistant streams to the user as they are generated.
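To make the side channel concrete, here is a minimal sketch of the underlying idea: if each streamed packet carries one token and encryption preserves payload length plus a fixed overhead, an eavesdropper can recover the sequence of token lengths from packet sizes alone. The one-token-per-packet assumption and the `OVERHEAD` constant are illustrative, not taken from the study.

```python
# Hypothetical sketch: recovering token lengths from encrypted packet sizes.
# Assumes one token per packet and a fixed per-packet encryption overhead.
OVERHEAD = 21  # assumed constant header/tag bytes per packet (illustrative)

def token_lengths(packet_sizes, overhead=OVERHEAD):
    """Infer the sequence of plaintext token lengths from observed sizes."""
    return [size - overhead for size in packet_sizes]

# A passive observer sees only these ciphertext packet sizes on the wire:
observed = [24, 26, 23, 28]
print(token_lengths(observed))  # [3, 5, 2, 7]
```

The recovered length sequence leaks no characters directly, which is why the researchers then feed it to a large language model trained to guess the most likely text matching those lengths.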
This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents