Researchers are finding new ways to deceive recognition systems, forcing algorithms to misidentify objects, misjudge the distance to them, or even render them "invisible".
Deep learning algorithms are excellent at analyzing shapes and colors to distinguish humans from animals, cars from trucks, and so on. They are used in a variety of applications and industries, often performing critical tasks such as ensuring road or property safety. However, a group of engineers from Southwest Research Institute (SwRI) is identifying vulnerabilities in these systems in order to correct them in the future.
Researchers have developed special image patterns that cause cameras to misclassify nearby objects during analysis. If a person wears a T-shirt with such a pattern, attaches it to a vehicle, or simply places it on the street, the algorithms will conclude that the object in front of them is something else entirely, or that it is not where it actually is. Moreover, such patterns do not need to cover the entire surface or face the camera head-on to deceive the system.
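The core idea behind such adversarial patterns can be illustrated with a toy example. The sketch below uses a tiny linear "detector" and a gradient-sign perturbation (in the style of the well-known FGSM attack); the model, numbers, and perturbation size are illustrative assumptions, not the actual SwRI method or a real detection network.

```python
import numpy as np

# Toy "detector": a linear score over a flattened 64-value image patch.
# Score > 0 means "vehicle", score <= 0 means "not a vehicle".
# (Illustrative stand-in for a deep detector; chosen so the gradient
# with respect to the input is simply the weight vector w.)
rng = np.random.default_rng(0)
w = rng.normal(size=64)
b = 0.1

def detect(x):
    """Return the detector's raw score for patch x."""
    return float(w @ x + b)

# Start from a patch the detector classifies as "vehicle".
x = rng.normal(size=64)
if detect(x) <= 0:
    x = -x  # flip so the clean score is positive

# FGSM-style perturbation: step against the sign of the input gradient.
# eps is chosen just large enough to push the score below zero, showing
# that a small, structured change flips the classification.
eps = detect(x) / np.abs(w).sum() + 0.01
x_adv = x - eps * np.sign(w)

print("clean score:", detect(x))        # positive: "vehicle"
print("adversarial score:", detect(x_adv))  # negative: misclassified
```

Real attacks of this kind optimize a printable pattern against a deep network rather than a linear score, but the principle is the same: small input changes, aligned with the model's gradient, can flip its decision.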
Although to a person they look like ordinary colorful images, in certain situations they can disrupt the operation of detectors and cause chaos within the system. For example, an ill-chosen advertisement on a bus could cause the neural network of the car behind it to see the promoted product instead of the vehicle, which could lead to a collision.
While validating the algorithms, the team tests various patterns and evaluates their impact. Ultimately, this work will help improve the security of detection systems.
Personal identification systems are also beginning to appear in retail. At the end of last year, Japan's largest chain of convenience stores, 7-Eleven, opened a branch where payments for goods are made automatically by recognizing customers' faces.
text: Ilya Bauer, photo: Southwest Research Institute