Disregard for privacy goes hand in hand with tech weaponization.

Both data mining and misogyny need to be stopped yesterday.

Photo: Camera Wall, Toronto, Canada

When technology is designed to expose and discredit users, the consequences can be brutal. That's why privacy is paramount and Big Tech must live up to its responsibility.


Why privacy matters

This story is a critical reminder of why privacy matters, even if you have nothing to hide. We’re not data points or business metrics; we are people, and when companies fail to see this, there are real-world consequences.

Big Tech responsibility

In March, an app for creating “DeepFake FaceSwap” videos rolled out more than 230 ads on Meta’s services, including Facebook, Instagram and Messenger, according to a review of Meta’s ad library. Some of the ads showed what looked like the beginning of pornographic videos with the intro sound of the porn platform Pornhub playing. Seconds in, the women’s faces were swapped with those of famous actors. The captions on dozens of these ads read “Replace face with anyone” and “Enjoy yourself with AI swap face technology.”

After NBC News asked Meta for comment, all of the app’s ads were removed from Meta’s services. While no sexual acts were shown in the videos, their suggestive nature illustrates how the application could be used to generate faked sexual content, and not just of famous women: it works with a single photo of anyone’s face uploaded by the app’s user. The same ads were also spotted in free photo-editing and gaming apps downloaded from Apple’s App Store, where the deepfake app itself first appeared in 2022, free and rated for ages 9 and up. It was also available to download for free on Google Play, where it was rated “Teen” for “suggestive themes.” Both Apple and Google said they later removed the app from their stores after being contacted by NBC News.

Is it journalists’ job to screen app stores for abusive apps?

The same story played out with a different app in December 2021 after a Reuters investigation.

Dozens of deepfake apps are still available in Google’s and Apple’s app stores, many of them used to generate nonconsensual porn and “nudify” people. While Google and Apple claim to prohibit apps that generate content that’s defamatory, discriminatory or likely to intimidate, humiliate or harm anyone, this is exactly what’s been happening right under their noses. Google has added “involuntary synthetic pornographic imagery” to its ban list, allowing anyone to request that the search engine block results that falsely depict them as “nude or in a sexually explicit situation” – but should victims be responsible for fixing this, or should Google do a better job of preventing this type of abuse in the first place?

As deepfake technology has improved and become more widespread, the market for nonconsensual sexual imagery has ballooned. Some websites allow users to sell nonconsensual deepfake porn from behind a paywall. A 2019 study from DeepTrace found that 96% of deepfake material online is of a pornographic nature, while another study by Genevieve Oh found that the number of deepfake pornographic videos has nearly doubled every year since 2018.

Since technology to detect deepfakes exists, albeit behind paywalls, why don’t all platforms that let users upload photos and videos screen those uploads and automatically label deepfakes as such? They’re already failing at identifying and taking down abusive content, but this part, at least, should be easy enough if they actually cared.
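To make that concrete, here is a minimal sketch of what such a screening hook could look like on a platform’s upload path. Everything in it is an assumption for illustration: DeepfakeDetector stands in for whichever licensed or in-house detection model a platform uses, and the 0.8 labelling threshold is made up.

```kotlin
// Hypothetical upload-screening hook; DeepfakeDetector is a stand-in
// for a real detection model, not an actual library API.
data class ScreeningResult(val syntheticScore: Double, val label: String?)

interface DeepfakeDetector {
    // Assumed contract: probability in [0, 1] that the media is synthetic.
    fun syntheticProbability(media: ByteArray): Double
}

// Screen every upload and attach a visible label when the score crosses
// the threshold, instead of waiting for victims to find and report it.
fun screenUpload(
    media: ByteArray,
    detector: DeepfakeDetector,
    labelThreshold: Double = 0.8, // assumption: tuned per platform
): ScreeningResult {
    val score = detector.syntheticProbability(media)
    val label = if (score >= labelThreshold) "Likely AI-generated" else null
    return ScreeningResult(score, label)
}
```

A real pipeline would route borderline scores to human review rather than hard-labelling everything, but the plumbing itself is straightforward – the missing ingredient is will, not technology.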

Nothing new

Recently, an article by Katie Jgln dived into the extensive history of tech-enabled gender-based violence. Some of the more “old-school” methods, like cameras hidden in bathrooms, are still in use: just a few days ago, the US Justice Department unsealed charges against a man who allegedly put a camera inside a bathroom aboard a Royal Caribbean cruise ship and filmed 150 people, including 40 minors.

TechSafety has a number of useful guides, and Cornell University’s Clinic to End Tech Abuse (CETA) also offers resources to help people protect their privacy and stay safe online. CETA’s extensive research in partnership with NYU shows how systemic online abuse and tech-enabled abuse are, particularly when driven by misogyny and transphobia.

In addition to their peer-reviewed academic publications, they also link to media pieces that are more accessible to a layperson.

There’s an interesting juxtaposition between the organization’s vision to have the voices of survivors of abuse, stalking and other mistreatment at the center of technology design and the fact that their research is sponsored in part by Google and Meta. It’s great that these tech giants are spending some of their profits to shed light on problems they’re helping to create, but it would be even better if they actually took into account the findings of the research they’re sponsoring and improved their products accordingly.

Other widespread examples of technology being weaponized in this way include:

  • Bullying and harassment, including physical threats. Attackers blur the lines between online and offline violence, as detailed in an extensive global report by the Economist Intelligence Unit, which also measures how online violence against women impacts the economy and society at large. Many mass shootings have been instances of online violence translating into real-world action, the latest example being the Allen, Texas mall shooter, whose social media profile was full of rants against Black, Asian and Jewish people, and against women in general. The links between racialized and gender-based violence have been documented for a long time.
  • Doxing – especially used against trans people, often accompanied by efforts to get the target fired from their job, evicted from their home and pushed towards self-harm
  • Apple AirTags and other similar devices
  • Smart home devices, including thermostats, locks and lights

As Katie Jgln puts it, oftentimes “modern technology — and the online world it created — is just an extension of the patriarchy. […] It turns a blind eye to — and often even enables — the very same abuse, harassment, violence, misogyny, sexism, unwanted hypersexualisation and objectification we experience in the real world. And just like many other tools of patriarchy — for instance, purity culture or ‘traditional’ gender roles — it too frequently serves as a way to put us back in our place, silence or discredit our voices and, most importantly, protect a status quo”.

The blurry lines between stalkerware and “regular apps”

The Coalition Against Stalkerware points out that “while the term stalkerware is also sometimes used colloquially to refer to any app or program that does or is perceived to invade one’s privacy; we believe a clear and narrow definition is important given stalkerware’s use in situations of intimate partner abuse.” They also acknowledge “that legitimate apps and other kinds of technology can and often do play a role in such situations”. The fact that there are serious grounds for confusion on this matter is not a good look for Google, Meta and the many other companies whose messaging, email, social media and other apps require sensitive permissions on your phone and collect your data. On the same page, one of the recommended criteria for detecting stalkerware is “Apps that are capable of collecting and exfiltrating sensitive data of device users (e.g., location data, contacts, call/text logs, passwords, browser history, etc.) without their continuous consent and/or knowledge”. How many of the world’s most downloaded apps DON’T meet this criterion?

It doesn’t need to be this way. Parenting apps never need to be hidden on the phone; if they are, they’re stalkerware, not parenting apps. Phones don’t need to have GPS enabled by default. Apps don’t need to have location (and many other) permissions by default, or ever, in many cases; instead, users should enable these permissions only when needed, and only for the apps that legitimately need them. All phones should automatically withdraw permissions from apps you’re not using. AirTags, Tiles and other similar products should always play a loud sound notification when they are activated or in use, and regulators should crack down on “modified” silent versions sold on eBay and elsewhere.
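None of this is technically hard. As a sketch of that opt-in model (assuming a hypothetical Android map screen; the class and function names are made up, though the AndroidX permission APIs are real), location is requested only at the moment the user asks for a feature that needs it:

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

// Hypothetical map screen: location permission is requested only when
// the user explicitly taps “show my location”, never at install or launch.
class MapActivity : AppCompatActivity() {

    // Standard AndroidX launcher for a single runtime-permission request.
    private val locationPermission =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) showMyLocation() else showManualLocationPicker()
        }

    // Wired to a “show my location” button, i.e. a deliberate user action.
    private fun onShowMyLocationTapped() {
        val alreadyGranted = ContextCompat.checkSelfPermission(
            this, Manifest.permission.ACCESS_FINE_LOCATION
        ) == PackageManager.PERMISSION_GRANTED

        if (alreadyGranted) showMyLocation()
        else locationPermission.launch(Manifest.permission.ACCESS_FINE_LOCATION)
    }

    private fun showMyLocation() { /* center the map on the user (stub) */ }
    private fun showManualLocationPicker() { /* graceful fallback, no tracking (stub) */ }
}
```

And the “automatically withdraw permissions from apps you’re not using” part already exists in one form: since Android 11, the system can auto-reset permissions for apps that haven’t been opened in months. Safe defaults are feasible today; platforms just have to make them universal.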

We can’t expect tech companies to sacrifice the profits they’re making in the current surveillance-capitalist system out of compassion and respect for every person’s humanity. We need to keep increasing the pressure on them to do so. Until then, we’re not safe. We must keep fighting for our right to privacy.