THE INTENTION BEHIND THIS PROJECT
Driven by our mission to increase trust in AI, Adversa’s AI Red Team is constantly exploring new methods of assessing and protecting mission-critical AI applications.
Recently, we discovered a new way of attacking facial recognition systems and decided to demonstrate it in practice. Our demonstration shows that current AI-driven facial recognition tools are vulnerable to attacks that may lead to severe consequences.
There are well-known problems in facial recognition systems, such as bias, that can lead to fraud or even wrongful prosecutions. Yet we believe the topic of attacks against AI systems requires much more attention. We aim to raise awareness and help enterprises and governments deal with the emerging problem of Adversarial Machine Learning.
We’ve developed a new attack on AI-driven facial recognition systems that can change your photo in such a way that an AI system will recognize you as a different person, in fact, as anyone you want.
This is possible due to imperfections in currently available facial recognition algorithms, and in AI applications in general. This type of attack may lead to dire consequences and may be used both in poisoning scenarios, by subverting computer vision algorithms, and in evasion scenarios, such as creating stealthy deepfakes.
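To make the evasion idea concrete, here is a minimal sketch of a targeted evasion attack on a toy "facial recognition" model. The linear model, the iterative sign-gradient method, and all parameters are our own illustrative assumptions; this is not Adversa’s actual attack, which has not been released.

```python
import numpy as np

# Toy stand-in for a face-recognition model: a linear classifier over
# flattened pixel values. Illustrative only, NOT the real attack.
rng = np.random.default_rng(0)
n_pixels, n_identities = 64, 10
W = rng.normal(size=(n_identities, n_pixels))  # hypothetical model weights

def predict(x):
    """Return the identity the toy model assigns to image x."""
    return int((W @ x).argmax())

def targeted_perturb(x, target, eps=0.05, steps=500):
    """Iteratively nudge pixels so the model labels the image as `target`."""
    x_adv = x.copy()
    for _ in range(steps):
        current = predict(x_adv)
        if current == target:
            break
        # For a linear model, the gradient of the margin between the target
        # and the currently predicted identity is the difference of rows.
        grad = W[target] - W[current]
        x_adv = np.clip(x_adv + eps * np.sign(grad), 0.0, 1.0)
    return x_adv

x = rng.uniform(0.0, 1.0, n_pixels)       # the attacker's own "photo"
target = (predict(x) + 1) % n_identities  # identity to impersonate
x_adv = targeted_perturb(x, target)
```

Each step keeps the perturbation small per pixel, yet after enough steps the toy model assigns the attacker’s chosen identity; real attacks apply the same principle to deep face-embedding networks.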
The new attack is able to bypass facial recognition services, applications, and APIs, including PimEyes, which the Washington Post has called the most advanced online facial recognition search engine on the planet. Its main feature is that it combines various approaches for maximum efficiency.
This attack on PimEyes was built with a combination of methods from our attack framework.
We follow the principle of responsible disclosure and are currently coordinating with organizations to protect their critical AI applications from this attack, so we cannot release the exploit code publicly yet.
We present an example of how PimEyes.com, the most popular search engine for public images (similar to Clearview, a commercial facial recognition database sold to law enforcement and governments), mistook a man in a photo for Elon Musk.
The new black-box, one-shot, stealthy, transferable attack is able to bypass facial recognition AI models and APIs, including the most advanced online facial recognition search engine, PimEyes.com.
You can see a demo of the ‘Adversarial Octopus’ targeted attack below.
WHO CAN EXPLOIT SUCH VULNERABILITIES
Uniquely, this is a black-box attack, developed without any detailed knowledge of the algorithms used by the search engine, and the exploit is transferable to any AI application dealing with faces, including internet services, biometric security, surveillance, law enforcement, and other scenarios.
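The black-box setting can be sketched as follows: the attacker can only submit images and observe the service’s responses, as one would query an online API, and has no access to model internals or gradients. The score-returning API, the random-search strategy, and every name below are illustrative assumptions, not the actual exploit.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_identities = 64, 10
W_secret = rng.normal(size=(n_identities, n_pixels))  # hidden from attacker

def api_scores(x):
    """Hypothetical remote API returning per-identity confidence scores;
    the attacker treats this as an opaque black box."""
    return W_secret @ x

def black_box_impersonate(x, target, eps=0.1, budget=2000):
    """Gradient-free random search: keep any proposal that raises the
    target identity's confidence. Uses only API queries."""
    x_adv = x.copy()
    best = api_scores(x_adv)[target]
    for _ in range(budget):
        proposal = np.clip(x_adv + eps * rng.normal(size=n_pixels), 0.0, 1.0)
        score = api_scores(proposal)[target]
        if score > best:  # greedy accept: proposal helps the impersonation
            x_adv, best = proposal, score
    return x_adv

x = rng.uniform(0.0, 1.0, n_pixels)
target = (int(api_scores(x).argmax()) + 1) % n_identities
x_adv = black_box_impersonate(x, target)
```

This toy search steadily raises the target identity’s score using nothing but queries; the real attack is notable for being one-shot and transferable across models, unlike this iterative sketch.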
The existence of such vulnerabilities in AI applications and facial recognition engines, in particular, may lead to dire consequences.
WHERE THE ATTACK IS APPLICABLE
The main feature of this attack is that it is applicable to multiple AI implementations, including online APIs and physical devices. It is constructed so that it can adapt to the target environment, which is why we call it Adversarial Octopus. Besides that, it shares three important features with this smart creature.
Original post at:
(SecurityAffairs – hacking, FACIAL RECOGNITION)