International Workshop
on Obfuscation:
Science, Technology, and Theory
April 7-8, 2017  •  New York University

OBFUSCATION WORKSHOP REPORT

PrivacyVisor: Privacy Protection for Preventing Face Detection from Camera Images

Isao Echizen, National Institute of Informatics, Tokyo

Due to the popularization of portable devices with built-in cameras and advances in social networking services and image search technologies, photos posted online without a photographed person's permission can reveal information such as when and where that person was at the time of the photograph. This has increased the need to protect the privacy of photographed individuals. A particularly serious problem is unauthorized information revelation through images of people captured unintentionally and shared over the Internet. If, for example, your face or figure is unintentionally captured in someone else's photograph, and that image is then posted on a social networking site, the face recognition process of an image retrieval service (e.g., Google Images) can reveal where you were and when, without your permission, by using the geographic location and shooting date and time contained in the image's geotag.
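The geotag referred to above is ordinary EXIF metadata written by the camera. As a hedged illustration (not part of the work described here), EXIF stores latitude and longitude as degree/minute/second values plus a hemisphere reference, which anyone who obtains the image can convert to decimal coordinates; the sample values below are arbitrary:

```python
# Sketch of how the location data in an image's EXIF geotag becomes a usable
# coordinate. EXIF GPSLatitude/GPSLongitude are stored as (degrees, minutes,
# seconds) rationals with 'N'/'S' or 'E'/'W' reference tags.
# The sample values below are arbitrary, for illustration only.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF-style DMS triple plus hemisphere reference to decimal degrees."""
    decimal = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative by convention.
    return -decimal if ref in ('S', 'W') else decimal

# Hypothetical geotag as it might appear in a posted photo's metadata.
latitude = dms_to_decimal(40, 43, 48, 'N')
longitude = dms_to_decimal(73, 59, 42, 'W')

print(round(latitude, 4), round(longitude, 4))  # 40.73 -73.995
```

Combined with the shooting date and time stored alongside it, this is exactly the "where and when" that an image retrieval service can expose.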

An experiment conducted at Carnegie Mellon University showed that the names of almost one-third of the participants could be determined by comparing photographs taken of them with photographs posted on a social networking site. Furthermore, other information about some of the participants, including their interests and even their social security numbers, was found [1]. FindFace, a commercial facial recognition application for smartphones released in Russia in 2016, can identify strangers photographed in public by matching the photographs against profile images on a Russian social media site. After its release, a Russian photographer initiated a project called “Your face is big data,” which showed that 70 of 100 people photographed in the subway without permission could be identified using FindFace [2].

Unlike images captured by surveillance cameras, which are managed by an administrator and generally not posted online, the situation addressed here is one in which a person is captured unintentionally in a photograph, possibly without being aware that a photograph is being taken, such as at a tourist attraction. The photograph may then be posted online without that person's knowledge or control, resulting in possible unauthorized disclosure of personal information.

Methods proposed for preventing unauthorized face image revelation include hiding the face with an unfolding shell [3] and painting particular patterns on one's face [4]. The first method physically protects the user's privacy with a shell-shaped material that can be folded and unfolded: folded, it functions as a fashion accessory; unfolded, it functions as a face shield that prevents unintentional capture of the wearer's facial image. The second method prevents identification by using particular hair coloring and special face-paint patterns that cause facial recognition methods to fail. However, such methods interfere with face-to-face communication because they hide a large portion of the face and/or distract the person with whom the wearer is communicating.

We previously proposed using invisible noise signals to prevent privacy invasion [5]. This method uses infrared LEDs as a light source to add noise to captured images. Although the infrared rays are imperceptible to the human eye, the method has two problems: the LEDs need a power supply, and some digital cameras are unaffected by the rays. Consumer camcorders, for example, use infrared wavelengths to adjust their settings in dark conditions, but sensitivity to infrared varies among cameras, and some cameras do not react to infrared rays at all. A method using infrared rays is thus ineffective against them [6].

We have developed a method for overcoming these two problems [7]. It prevents face image detection without the need for a power supply by using materials that naturally absorb and reflect incident radiation. It is effective against all digital cameras because it uses visible rather than infrared light, and it interferes only negligibly with face-to-face communication in physical space. The small amounts of light-reflecting and light-absorbing material attached to a goggle-like visor (a “PrivacyVisor”) effectively obscure the Haar-like features used for face detection and thus cause face detection to fail [8]. Moreover, no new functions need to be added to existing cameras or networking services.
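The Haar-like features mentioned above are differences between sums of pixel values in adjacent rectangles, computed efficiently via an integral image; Viola-Jones-style detectors rely on contrasts such as the eye region being darker than the cheek region below it. The following is a minimal, self-contained sketch (with synthetic pixel values, not the authors' implementation) of why a bright reflective patch over the eyes can flip such a feature and break detection:

```python
# Sketch of a two-rectangle Haar-like feature of the kind used by
# Viola-Jones-style face detectors. Pixel values here are synthetic toy data.

def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum over the rectangle with top-left corner (x, y) and size w x h,
    using four integral-image lookups."""
    a = ii[y + h - 1][x + w - 1]
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y - 1][x - 1] if y > 0 and x > 0 else 0
    return a - b - c + d

def eye_cheek_feature(img):
    """Two-rectangle feature: cheek band minus eye band.
    A large positive value means "eyes darker than cheeks" (face-like)."""
    ii = integral_image(img)
    eyes = rect_sum(ii, 0, 0, 4, 2)    # upper band (eye region)
    cheeks = rect_sum(ii, 0, 2, 4, 2)  # lower band (cheek region)
    return cheeks - eyes

# Toy 4x4 "face" patch: dark eye band above a bright cheek band.
face = [[20, 20, 20, 20],
        [20, 20, 20, 20],
        [200, 200, 200, 200],
        [200, 200, 200, 200]]

# Same patch with a reflective visor: the eye band now reads as bright.
visor = [[240, 240, 240, 240],
         [240, 240, 240, 240],
         [200, 200, 200, 200],
         [200, 200, 200, 200]]

print(eye_cheek_feature(face))   # 1440: feature fires, patch looks face-like
print(eye_cheek_feature(visor))  # -320: feature inverted, detection fails
```

Because cascade detectors reject a window as soon as an early feature fails, inverting even a few such contrasts around the eyes is enough to make detection fail, which is the effect the reflective and absorbing materials exploit.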

Improvements in sensor performance enable biometric identifiers to be obtained in more ways than could ever have been anticipated. Future work will thus focus on protecting against biometric identity theft via image sensors. We are developing a method (a “BiometricJammer”) that prevents the surreptitious photographing of fingerprints and subsequent acquisition of fingerprint information while still enabling the fingerprint authentication methods normally used by smartphones and the like [9]. Other forms of biometric information besides fingerprints that could be used for personal identification or authentication include the patterns of the iris and of the veins in the fingers or palms. We plan to continue researching and developing methods aimed at preventing the illegal acquisition of each type of information.

 


References:

[1] A. Acquisti, R. Gross, and F. Stutzman, “Face Recognition Study—FAQ,” August 2011,  http://www.heinz.cmu.edu/~acquisti/face-recognition-study-FAQ/ [Accessed May 26, 2017]

[2] E. Wilson, “Your face is big data,” BBC News, [Online] April 13, 2016, http://www.bbc.com/news/av/magazine-36019275/your-face-is-big-data [Accessed May 26, 2017]

[3] R. Hernandez, “Veasyble by GAIA,” March 8, 2010, https://www.yatzer.com/Veasyble-by-GAIA [Accessed May 26, 2017]

[4] A. Harvey, “CV Dazzle,” https://cvdazzle.com/ [Accessed May 26, 2017]

[5] T. Yamada, S. Gohshi, and I. Echizen, “Privacy Visor: Method for Preventing Face Image Detection by Using Differences in Human and Device Sensitivity,” Proc. of the 14th Joint IFIP TC6 and TC11 Conference on Communications and Multimedia Security (CMS 2013), LNCS 8099, pp. 152-161, Springer (September 2013)

[6] See: “Privacy visor glasses jam facial recognition systems to protect your privacy #DigInfo,” Ikinamo, June 19, 2013, https://www.youtube.com/watch?v=LRj8whKmN1M&t=23s  [Accessed May 26, 2017]

[7] T. Yamada, S. Gohshi, and I. Echizen, “Privacy Visor: Method based on Light Absorbing and Reflecting Properties for Preventing Face Image Detection,” Proc. of the 2013 IEEE International Conference on Systems, Man, and Cybernetics (IEEE SMC 2013), 6 pages (October 2013)

[8] See: “Privacy visor fools facial recognition,” IDG.tv, August 20, 2015,  https://www.youtube.com/watch?v=HbXvZ1XKdWk&t=1s  [Accessed May 26, 2017]

[9] See: “‘Peace’ signs risk fingerprint theft, says Japanese study,” Reuters, January 16, 2017, https://www.youtube.com/watch?v=vJn9cx-CyPE [Accessed May 26, 2017]


Sponsored by: NYU Steinhardt

International Program and Organizing Committee:

Paul Ashley, Anonyome Labs
Benoît Baudry, INRIA, France
Finn Brunton, New York University
Saumya Debray, University of Arizona
Cynthia Dwork, Harvard University
Rachel Greenstadt, Drexel University
Seda Gürses, Princeton University
Anna Lysyanskaya, Brown University
Helen Nissenbaum, Cornell Tech & New York University
Alexander Pretschner, Technische Universität München
Reza Shokri, Cornell Tech