
Through her leaks of tens of thousands of internal documents, former Facebook employee turned whistleblower Frances Haugen revealed a great deal about how the company runs platforms like Facebook, Instagram, Messenger and WhatsApp.

The key takeaway: Mark Zuckerberg’s company harms its users for its own profit.

Frances Haugen appeared before the European Parliament's Committee on the Internal Market and Consumer Protection on November 8th, where our MEPs got to ask her more about what's going on in Silicon Valley.

Here’s what we found out:


1. Self-regulation doesn’t work

Self-regulation is great PR, and it tends to serve the company doing it rather than the public. The corporate sector has long practised such “regulation”, with notoriously bad results. It’s a great way to look like you’re doing something about a problem while creating a distraction and keeping real regulation at bay. In this case, it simply will not protect social media users from the harmful effects of Facebook’s algorithms. Political will is needed to come up with a proper response to the problems we know about thanks to whistleblowers like Frances Haugen. Only strong regulation can work – we have it for food and buildings, and we need it for social media.

2. We need rules – and they must be watertight

The Digital Services Act is currently being negotiated in Brussels between the European Parliament, the European Commission and member states. There’s a risk that the right will litter it with loopholes for the benefit of Big Tech. This would be a huge mistake and a massive loss for users. The Left is pushing for the highest standards of transparency to be part of the package, and for a ban on targeted advertising as an effective way to address the attention-seeking business models of these platforms.

Personalised advertising is a serious encroachment on people’s freedoms and personal rights. Despite the current restrictions under the General Data Protection Regulation, consumers are not sufficiently protected. The easiest way to combat this practice is to allow the use of personal data only where it is necessary to provide a service. Users need more control over how their data is used and how ranked results are presented to them. This can only work if the “black box” of artificial intelligence is opened up through transparency requirements, accompanied by independent and public audits. This would also set limits on business models whose algorithms rank questionable content particularly highly and thus contribute to its spread.

3. Facebook stokes ethnic violence and terrorism

Internal memos show that Facebook knew about violent hate speech against minority groups in countries such as India, but did nothing about it. Haugen said internal reports showed that hate speech against Muslims, as well as misinformation, was being amplified by Facebook’s own “recommended” feature and its algorithms.

We oppose any legislative proposals that would lead to a de facto obligation to use filters, as currently envisaged in the Protocol to Combat Terrorism on the Internet. No filter in the world can prevent the spread of terrorist content online as long as rankings are designed to lure users with negative incentives. Instead of filters, we need “notice and action” procedures, a ban on the attention-seeking business model, and more user control, combined with proper enforcement of existing laws offline.

4. It’s depressing teens

Internal research revealed that Instagram can have serious negative effects on the mental health of teenage girls, including a negative impact on body image. Surveys showed that 32% of teenage girls said that “when they felt bad about their bodies, Instagram made them feel worse”. Haugen has said that as these young women look at content about eating disorders, “they get more and more depressed… it actually makes them use the app more… and so they end up in this feedback cycle where they hate their bodies more and more.”

5. They want us angry

Wow, love, yay, haha, sad… but especially: angry!

In addition to the usual “Like” button on posts, in 2015 Facebook introduced a set of emotional reaction options… seems fair enough, right? But what happened next was shocking: the algorithms promoted posts that received the “angry” reaction from users, because internal studies showed these drove more engagement. From QAnon conspiracy theories to anti-vaccine posts, Facebook’s responsibility for the spread of anger-inducing disinformation, toxicity and violent content was confirmed by its own research – yet it did nothing.

6. There’s a solution: make ‘em serve the public interest

Amid much fanfare, Facebook recently renamed itself “Meta”. The “metaverse” that Mark Zuckerberg wants to build will be focused on virtual 3D spaces, accessible through virtual reality headsets and augmented reality glasses. Zuckerberg has promised that the new “metaverse” will be built responsibly, but in light of the deluge of evidence of misconduct uncovered by Haugen and others, he cannot be trusted.

Big Tech is dramatically reshaping human life in the 21st century with one clear aim: its own profit margins. In the interests of citizens, the EU must tackle these highly addictive, anti-social and manipulative corporate forces by ensuring that digitalisation is put at the service of people, with democratic decisions taken to ensure that these technologies serve citizens’ needs.
