Apparently, we really like filters. A lot. So much so that reports from social media apps like Facebook, Instagram, and Snapchat indicate that the number of users posting filtered photos is in the hundreds of millions. And these apps are only too happy to accommodate the trend by giving us access to a slew of flashy filters that let us see and present ourselves in any number of amusing, surreal, and bizarre ways: as wide-eyed furry critters surrounded by hearts and butterflies, with a full face of makeup, 50 years older. In fact, according to a report by the MIT Technology Review, the most common use of augmented reality is not gaming or the highly touted metaverse but social media filters.
We've basically exploited one of the most revolutionary technologies of the past twenty years to transform ourselves into cartoon princesses. And this is actually the least disturbing aspect of our fascination with filters. Because what many filter users consider to be all fun and games has actually become an easy way for app creators to collect, store, sell, and abuse valuable personal biometric data.
The growing and widespread use of biometrics
What exactly is biometrics? Most of us are probably familiar with the use of biometrics, even if we don't quite understand the technology behind it. When you unlock your phone with facial recognition or a fingerprint scan, you rely on biometrics. The same applies when you log in to your bank account or other service provider or app on your mobile device using functions like Face ID.
Biometrics refers both to the physical and behavioural characteristics that are unique to each individual — facial features, iris and fingerprint patterns, keystroke dynamics — and to the statistical and analytical use of these characteristics to identify or confirm a person's identity. Our biometric data might be stored on our devices in the form of a facial map, a facial signature, or a biometric template of our fingerprint ridges.
We have seen a significant increase in the application of biometric data, both in the security space and beyond. For good and bad. For example, biometric data has been used in the medical sector for the early detection of diseases like Parkinson's (keystroke analysis). However, the use of facial recognition technology for identifying and apprehending criminals drew widespread criticism when it was determined that these systems frequently discriminated against people of colour.
With the introduction of Apple's Face ID in 2017, biometric data became an almost mainstream tool for identification and authentication. And this should hardly come as a surprise. Not only is it much easier to log into our devices and accounts by simply scanning our faces or fingerprints, but it is also safer since this data is unique to us, is securely stored on our end devices in an encrypted state and is not shared with third parties. Or is it?
How are online apps using biometric data, and should we be concerned?
In 2021, TikTok paid a USD 92 million settlement in a class-action lawsuit filed in the U.S. District Court in the state of Illinois. The lawsuit alleged that the app used artificial intelligence to detect and analyse physical features in users' videos in order to determine their age, ethnicity, and gender. This data was then purportedly used for targeted content suggestions. Although the validity of the claim was never proven since the lawsuit was settled out of court, this is hardly an isolated incident of apps relying on biometrics for advertising purposes.
A few years ago, Illinois users also filed a class-action lawsuit against Facebook for violating the state's 2008 Biometric Information Privacy Act by collecting and storing biometric data without user consent. A few months ago, the company agreed to compensate affected users in the amount of USD 650 million to put an end to the litigation process.
And Illinois is not alone; a few months ago, the Texas Attorney General also initiated a lawsuit against Meta for Instagram's use of augmented reality filters, which apparently collect and store biometric data. In this case, Instagram temporarily disabled the filters only to bring them back with a user opt-in feature.
However, this hardly resolves the issue. Especially since many users choose to remain blissfully ignorant about what is actually happening with their biometric data when they give these apps access to it.
How can we protect ourselves and our data?
Thus far, there has been little effort by data protection authorities in Europe to prosecute these types of violations. However, thanks to local European initiatives like Reclaim Your Face, the rampant abuse of biometric data is attracting more attention. The movement has even convinced 'lead MEPs from five European political groups to demand a full ban on biometric mass surveillance in the draft AI Act law'.
We as individuals also have the power to control what data we share and how it is used. There are some simple precautions we can take, like installing tracking and ad blockers on all our devices. These types of tools prevent our installed apps and web browsers from sharing private data with third parties. We can also install a VPN (Virtual Private Network), which encrypts the connection for all our online activities and also hides our IP address and geolocation.
Although it can be very tempting to click 'accept all' when confronted with the barrage of consent requests when installing a new app or perusing a website, it is in our best security interests to take more methodical steps to protect our data. This means carefully looking at and considering what we're actually consenting to and whether or not we can trust certain apps and websites. It may be hard, but sometimes it's simply safer not to create and share that image of yourself as a slice of talking pizza.