A team of researchers has demonstrated that it is possible to steal an artificial intelligence (AI) model without ever gaining access to the device running it. Notably, the technique works even if the attacker has no prior knowledge of how the AI model works or how the underlying computer is structured.
The method, known as TPUXtract, was developed at North Carolina State University’s Department of Electrical and Computer Engineering. Using high-end equipment and a technique known as “online template-building”, a team of four scientists was able to deduce the hyperparameters (the settings that define a model’s structure and behaviour) of a convolutional neural network (CNN) running on a Google Edge Tensor Processing Unit (TPU), with 99.91% accuracy.
TPUXtract is an advanced side-channel attack; the North Carolina State University researchers devised it to expose this class of vulnerability so that such systems can be better protected. The attack targets a CNN running on a Google Edge TPU and exploits the chip’s electromagnetic emissions to extract the model’s hyperparameters and configuration without any prior knowledge of its architecture.
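The article describes the approach only at a high level, but the core idea of “online template-building” can be illustrated: instead of precomputing templates, candidate templates are generated on the fly for each layer and matched against the observed electromagnetic (EM) trace, which is what lets the attack proceed layer by layer without prior knowledge of the architecture. The Python sketch below is a hypothetical simplification, not the researchers’ code: simulate_em_trace, the trace model, and the candidate grids are all invented for illustration, and a real attack would capture traces with a physical EM probe rather than simulate them.

```python
# Illustrative sketch only: NOT the TPUXtract implementation. It mimics the
# high-level idea of online template-building: for each layer, candidate
# hyperparameter settings are turned into template traces, and the candidate
# whose template best correlates with the observed EM trace is taken as the
# recovered setting. The trace model here is a hypothetical stand-in.
import itertools
import numpy as np

def simulate_em_trace(kernel_size: int, num_filters: int, noise: float = 0.0):
    """Stand-in for capturing the EM emanations of one CNN layer.

    A real attack records traces with a probe and oscilloscope; here we
    fabricate a deterministic signature per configuration plus optional noise.
    """
    rng = np.random.default_rng(kernel_size * 1000 + num_filters)
    trace = rng.standard_normal(256)
    if noise:
        trace = trace + np.random.default_rng().standard_normal(256) * noise
    return trace

def recover_layer_hyperparams(observed_trace, kernel_sizes, filter_counts):
    """Online template-building for a single layer.

    Templates are generated on demand for every candidate (kernel_size,
    num_filters) pair and scored against the observed trace with Pearson
    correlation; the best-scoring candidate is returned.
    """
    best_score, best_params = -np.inf, None
    for k, f in itertools.product(kernel_sizes, filter_counts):
        template = simulate_em_trace(k, f)
        score = np.corrcoef(observed_trace, template)[0, 1]
        if score > best_score:
            best_score, best_params = score, (k, f)
    return best_params, best_score

# Example: "capture" a noisy trace from a layer with 3x3 kernels and 64
# filters, then recover those hyperparameters by template matching.
victim_trace = simulate_em_trace(kernel_size=3, num_filters=64, noise=0.05)
params, score = recover_layer_hyperparams(victim_trace, [1, 3, 5, 7], [16, 32, 64, 128])
print(f"recovered (kernel_size, num_filters)={params}, correlation={score:.3f}")
```

In this toy setup the search space is tiny; the appeal of building templates online is that the attacker only ever enumerates candidates for the layer currently being recovered, conditioned on the layers already extracted, rather than enumerating whole architectures up front.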
[…]