Meta facing the music again?
In testimony before a Senate subcommittee on Tuesday, a former Meta employee claimed that the parent company of Facebook and Instagram knew about harassment and other negative issues that kids were experiencing on its platforms but did nothing about them.
The former Meta employee, Arturo Bejar, served as director of engineering for Facebook’s Protect and Care team from 2009 to 2015, and from 2019 to 2021 he worked as a consultant on well-being at Instagram.
He testified during a hearing concerning social media’s effects on the mental health of teenagers.
Known for his expertise in reducing online harassment, Bejar told Meta’s CEO about his own daughter’s distressing experiences on Instagram. However, he said his concerns and warnings went unheeded.
He addressed a group of US senators, saying, “I come before you today as a father with first-hand experience of a child who received unwanted sexual advances on Instagram.”
He stated in his testimony that “she and her friends began having awful experiences, including repeated unwanted sexual advances and harassment.” He added, “She informed the company of these incidents, but nothing was done.”
In a written statement submitted before the hearing, he said “it’s time that young users have the tools to report and suppress online abuse” and that “it’s time the public and parents understand the true level of harm posed by these ‘products’”, as reported by Reuters.
During his testimony, Bejar said the goal of his work at Meta was to change the design of Facebook and Instagram to encourage users to behave in a more positive way, and to give young people tools to help them deal with unpleasant experiences.
During the hearing, Bejar said he occasionally met with Meta’s senior executives, including CEO Mark Zuckerberg. He claimed that although he felt they supported the work, he gradually concluded that management had decided “time and time again to not tackle this issue.”
According to Bejar, in 2021 he emailed Zuckerberg and other high-ranking Meta executives to tell them that, per internal data, 51% of Instagram users reported having had a bad experience on the platform within the previous seven days. He also said that approximately 25% of teenagers between the ages of 13 and 15 who use the service reported receiving unwanted sexual advances.
He also recounted a meeting in which Chris Cox, Meta’s Chief Product Officer, cited exact figures about harm to teenagers off the top of his head.
The Wall Street Journal was the first to report on Bejar’s 2021 letter, which described a “critical gap” between the company’s approach to harassment and the harmful experiences of its users, particularly young people, on its platforms.
Frances Haugen had warned about Meta in 2021
Bejar, a former engineering director at the social media giant, had returned to the company as a consultant. In the fall of 2021, on the same day that whistleblower Frances Haugen was testifying before Congress about the harm Facebook and Instagram cause to children, he sent Mark Zuckerberg a concerning email about the same subject.
Former product manager Frances Haugen said she came forward because she watched Facebook’s leadership repeatedly put business interests ahead of safety.
Before leaving the company in May 2021, she gathered an extensive collection of internal reports and research by combing through Facebook Workplace, the company’s internal employee social network, in an effort to show that Facebook had deliberately chosen not to address the problems on its platform.
She testified before the Senate about Facebook’s social impact, explained many of the findings of the internal research, and urged Congress to take action.
Haugen’s testimony agrees with many of the conclusions of an MIT Technology Review investigation published earlier that year, which drew on numerous interviews with Facebook executives, current and former employees, industry peers, and outside experts.
Are Meta’s algorithms to blame?
Meta uses hundreds, if not thousands, of algorithms to decide which content to rank and which advertising to target. Certain algorithms are designed to identify a user’s preferences and present more of that type of content in the user’s news feed. Others are used to identify and remove particular kinds of inappropriate content, such as spam, clickbait headlines, and nudity, from the feed.
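To make the idea of engagement-based ranking concrete, here is a minimal, purely illustrative Python sketch. It is not Meta’s code: the signals, weights, and class names are hypothetical, and real feed-ranking systems combine thousands of learned signals rather than a few hand-picked weights.

```python
# Purely illustrative sketch of engagement-based ranking, NOT Meta's actual system.
# All fields and weights below are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_like_prob: float     # model's estimate that the user will like the post
    predicted_comment_prob: float  # estimate that the user will comment
    predicted_share_prob: float    # estimate that the user will share
    is_flagged_content: bool       # e.g. a spam, clickbait, or nudity classifier hit

def engagement_score(post: Post) -> float:
    """Combine predicted interactions into a single ranking score."""
    # Hypothetical weights: "deeper" engagement (comments, shares) counts more than likes.
    return (1.0 * post.predicted_like_prob
            + 4.0 * post.predicted_comment_prob
            + 8.0 * post.predicted_share_prob)

def rank_feed(candidates: list[Post]) -> list[Post]:
    """Drop flagged content, then order the rest by engagement score."""
    eligible = [p for p in candidates if not p.is_flagged_content]
    return sorted(eligible, key=engagement_score, reverse=True)
```

Because a score like this rewards whatever is most likely to provoke interaction, provocative or distressing posts can rank highly unless separate integrity checks, like the flag in the sketch, remove them first.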
Additionally, Facebook and Instagram have access to enormous amounts of user data, which lets them build models that infer very specific audience categories, such as “women between 25 and 34 who liked Facebook pages related to yoga,” as well as broader ones like “women” and “men,” and target ads to them. The more precise the targeting, the better the return for advertisers, since a well-targeted ad is more likely to be clicked.
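The audience-inference idea can likewise be sketched in a few lines. The sketch below is hypothetical: the profile fields, the segment rule, and the example users are assumptions meant only to illustrate how a specific category could be derived from demographic data and page likes, not how Meta’s ad system actually works.

```python
# Illustrative sketch of interest-based audience segmentation, NOT Meta's ad system.
# The segment definition and user fields are made up for the example.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    gender: str
    age: int
    liked_page_topics: set[str] = field(default_factory=set)

def in_segment(user: UserProfile) -> bool:
    """Hypothetical segment: women aged 25-34 who liked yoga-related pages."""
    return (user.gender == "female"
            and 25 <= user.age <= 34
            and "yoga" in user.liked_page_topics)

def build_audience(users: list[UserProfile]) -> list[str]:
    """Return the IDs of users an advertiser could target with this segment."""
    return [u.user_id for u in users if in_segment(u)]

# Example: one matching and one non-matching profile.
audience = build_audience([
    UserProfile("u1", "female", 29, {"yoga", "travel"}),
    UserProfile("u2", "male", 41, {"football"}),
])
print(audience)  # ['u1']
```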
Haugen repeatedly raised the role of Facebook’s algorithms in spreading false information, hate speech, and even ethnic violence. “Facebook has acknowledged publicly that engagement-based ranking is risky in the absence of integrity and security measures, but they haven’t implemented these measures in the majority of global languages,” she told the Senate. “Families are being torn apart by it. And it is actively stoking ethnic conflict in areas like Ethiopia.”
Facebook has also known this for some time, as Haugen pointed out. It has been investigating the topic since at least 2016, according to earlier reports.
One of the more startling findings in the Journal’s Facebook Files was internal research conducted by Instagram showing that the platform is worsening teenage girls’ mental health. In a March 2020 slide presentation, researchers reported that 32% of teenage girls said that when they felt bad about their bodies, Instagram made them feel worse.
A former AI researcher at Facebook found that users who tended to post or interact with sad content, a possible sign of depression, could easily spiral into consuming more and more distressing material, raising the risk that their mental health would deteriorate further.
The same researcher found that leadership had no interest in fundamentally changing the algorithm.