Parent Questions for Senate Judiciary Hearing with Big Tech CEOs

Parents have loads of worries about their children and teens: are they doing all their homework, eating enough veggies, getting enough piano practice in before the big recital? And, as they get older, are they fully checking their side mirrors before changing lanes as they learn to drive, and coming home by curfew? Normal stuff. Routine.

But today, parents also have to worry about the pervasive, perilous use of social media platforms and the unfathomable harms lurking around just about every other post, “like,” and reel. The difference is, these are worries that can’t be addressed by parents alone. Until Congress passes legislation requiring Big Tech to build safer products and holding these companies accountable for protecting kids online, this worry remains.

We know, sadly, better than most. We are a group of parents who have come together because we have all lost a child: to suicide after relentless cyberbullying or sextortion, to accidental death while attempting a viral challenge, or to drugs purchased over social media that were laced with lethal substances.

It’s why we’ve come together to launch ParentsSOS, an educational initiative to raise awareness about the Kids Online Safety Act (KOSA), a bill before Congress that addresses the growing concerns about youth safety on social media. 

It’s also why so many of us will be on Capitol Hill tomorrow attending the Senate Judiciary Committee hearing featuring Mark Zuckerberg, CEO of Meta; Evan Spiegel, CEO of Snap; Shou Zi Chew, CEO of TikTok; Linda Yaccarino, CEO of X; and Jason Citron, CEO of Discord.

Because we have questions for these individuals. A LOT of them. Like this one, from Todd and Mia Minor, whose son Matthew was just 12 when he died as a result of accidental asphyxiation after participating in the online “Blackout Challenge.”

“There are many parents here today who have lost a child to social media harms. Can you look them in the eye and tell them you’ve done everything in your power to make your platforms safe for young people? Or have you prioritized profits and engagement over safety?”

These CEOs must do better. Children are dying. The stakes don’t get higher than this. And, as Zuckerberg has said before, “I started Facebook, I run it, and I’m responsible for what happens here.” We agree. They’re responsible. What do they have to say to us?

 

Questions for all CEOs

Cyberbullying, question from Maurine Molak (TX):

My son David died by suicide at age 16 after months of relentless and threatening cyberbullying. Cyberbullying or online harassment of minors is a crime in many states, as well as a violation of your community guidelines. What, if any, is your process to identify this illegal activity, and what actions do you take to protect minors from aggressors? What is your company’s internal process to investigate and respond to reports? Do you prioritize reports of cyberbullying over other harms, and if not, why not?

Cyberbullying, compassion, failure to act, question from Christine McComas (MD):

My daughter, 15-year-old Grace McComas, died as a result of intense cyberbullying almost 12 years ago. The screenshots of abuse were shocking enough to get a Maryland law enacted against perpetrators in 2013 (updated in 2019), but there is still no easy way for children or parents to get immediate help, leaving kids feeling abused and helpless. For each of you: in an average week, what percentage of teen users witness cyberbullying? What percentage are actual targets of cyberbullying? Does cyberbullying increase teen engagement among perpetrators, victims, and bystanders? How can you possibly not act to save kids from online hatred, intimidation, sexual harassment, and exploitation? The parents of lost children are in this room. Their lives will never be the same. Can you look them in the eye and try to explain the unconscionable – that you knew cyberbullying was rampant and chose not to help?

Addictiveness, question from Deb Schmill (MA):

My daughter Becca was 18 when she died of fentanyl poisoning from a pill she purchased through a social media platform. Purdue Pharma, owned by the Sackler family,  manufactured a drug, OxyContin, that became widely used by doctors in the U.S. The Sacklers knew their product was addictive and continued to misinform doctors and the public. The result has been devastating to communities across the U.S. Your product features are designed to trigger the same brain reward response as a drug like OxyContin. Many teens wish they could quit using social media but are unable to do so. Do you think that a significant number of young people are addicted to your platform?  Do you think that your product features that leverage the Fear of Missing Out contribute to the unhealthy use of social media by some young users? Do you make more money when young people use your platform compulsively?  How have you changed your products to be less addictive and harmful to young people?

Compassion, children are dying, question from Annie McGrath (WI):

Children are dying because of dangerous online challenges like “the pass-out challenge,” which killed my son Griffin when he was 13. It is still widely available on social media for children to see, despite the fact that it has taken thousands of children’s lives. Children are dying because drugs are laced with fentanyl, and drugs like Xanax, Adderall, and Vicodin can be bought on social media platforms as easily as ordering a package on Amazon. Children are dying because they are bullied. 46% of teens say they have been bullied online, which is leading to skyrocketing suicide rates. Children are dying because of sextortion. More than 3,000 minors were targeted last year with sextortion threats. Your platforms are places of great danger to our children, yet you keep luring them in. What will you do today to make your platforms safe? And if you cannot, will you commit to keeping children out?

Ease of reporting, question from Sharon Winkler (GA):

My son Alex was 17 when he died by suicide after being influenced by anonymous users online. Will you commit to creating a process for teen users to flag unwanted content that is as easy as “liking” a post or posting an emoji?

Unwanted sexual advances, question from Sharon Winkler (GA):

What number of teen users received unwanted sexual advances in the last 7 or 14 days? On which parts of your product (e.g., direct messaging, comments, etc.)? How do you define and determine when teens receive unwanted sexual advances? Do you have a goal for the maximum percentage of teens who could receive unwanted sexual advances in your product? What number of teen users does that represent?

How do you become aware of teens receiving unwanted sexual advances? Is there a way for teens to report or flag them? How many teen users utilize the reporting tools in one month? Are there product managers and engineers fully dedicated to reducing unwanted sexual advances for teenagers? Do these employees have access to senior executives in your company? Do you have data regarding users who initiate unwanted sexual advances, such as age, membership in groups, etc.?

Compassion, question from Todd and Mia Minor (MD): 

Our son Matthew was 12 years old when he died as a result of accidental asphyxiation after participating in the online “Blackout Challenge.” Big Tech CEOs, you are aware of the intense publicity surrounding online child safety. As you read news articles and parents’ and families’ calls for action surrounding your platform and online child safety, do you ever think that could be my child, niece, nephew, or the child of a close friend? There are many parents here today who have lost a child to social media harms. Can you look them in the eye and tell them you’ve done everything in your power to make your platforms safe for young people? Or have you prioritized profits and engagement over safety? 

 

Questions for Meta (Facebook, Instagram)

Eating Disorders, question from Deb Schmill (MA):

A report from Fairplay in April 2022 showed that content on Instagram glorifying and promoting eating disorders was being accessed by girls who acknowledged they were under 13, even though your platform doesn’t allow users under 13. Anorexia is the deadliest of all psychiatric diseases, and promoting this content increases the likelihood that young people will suffer life-threatening or lifelong devastating consequences. Three weeks ago you announced you would finally hide eating disorder content from teens, but a user can still see search results when they search for terms like “bulimia,” just one additional click away. Your top results for search terms like “bulimia” are accounts like “bulimia_versus_me,” “ana and mia,” and “wishing death would come soon.” Why have you continuously lied about your efforts to reduce eating disorder content on your platforms?

COPPA, age verification, question from Christine McComas (MD):

COPPA makes clear that children under 13 can’t use social media without parental permission, and all of your platforms say ages 13 and up in your terms of service. Yet research indicates there are millions of children under 13 using your platforms, and a new study from Harvard found that social media platforms make $2.1 billion from users under 13.  You insist there is no way to verify age, yet you collect millions of data points on every user that allow you to micro-target content and ads. One would think that with all that data and your cutting-edge technologies, your platforms could tell the difference between an 8-year-old and a 13-year-old. Can you describe your efforts to detect under 13 users and explain why these efforts have failed so spectacularly? 


Reporting, question from Joann Bogard (IN):

Mr. Zuckerberg, in 2018 you stated, “I started Facebook, I run it, and I’m responsible for what happens here.” I would like to address your reporting mechanisms. Since my son Mason died of the choking challenge at age 15, I search every week for this deadly challenge and report it wherever I find it. Yet my reports rarely receive a response, let alone lead to content being taken down. While recognizing that reporting mechanisms are only one aspect of content moderation, changes to the current mechanisms are necessary to mitigate harmful online content. Based on a recent study, less than 5% of videos across all of your platforms are removed after being reported. When a person reports a harm on your platform, how do you measure whether or not you have helped that person? Transparent audits of your quality assurance of these reports should show the reporting action rates: How many reports were acted on? How many were not? What happened to the content after it was removed, hidden, or ignored? Does the person reporting have a means of rebuttal if the harm was not addressed? What are your plans to fix your flawed reporting system?

Exposure to dangerous drugs, question from Julianna Arnold (NY):

My daughter Coco died at age 17 of fentanyl poisoning after buying laced drugs from a dealer on Instagram. The simultaneous rise of social media and the worsening of our fentanyl crisis are painfully interconnected. The availability of fentanyl and other illicit substances online is staggering. Drug sellers have come to prefer the convenience and discretion afforded by transacting over social media, relying on the companies’ unwillingness to prevent drug activity on their platforms. Meta has publicly stated that they don’t allow people to buy, sell or trade pharmaceutical or non-medical drugs on Facebook and Instagram. However, if one goes to either platform and types ‘Xanax’ in the search bar, the platform auto-fills alternative hashtags for the drug, providing the user with a multitude of accounts advertising and selling illegal and often deadly drugs. On Facebook, there are profiles with the name “Xanax.” What is Meta doing to live up to its statement that it prohibits people from buying, selling or trading pharmaceutical or non-medical drugs on Facebook and Instagram? Are you willing to commit to using the same tools and algorithms that are used to increase engagement and sales to make your platforms safer for our children?

 

Question for X (Twitter)

Trust and safety staff, question from Sharon Winkler (GA): 

Elon Musk famously said, “Turns out you don’t need that many people to run Twitter,” and laid off 80% of the company’s staff after he took over, including 15% of Trust & Safety personnel. Since 2022, the company has refused to publish any of its transparency reports. 1) What is the current number of employees working on child safety? 2) What has been the effect of these layoffs on youth users’ reports of cyberbullying, stalking, and sexual exploitation? 3) What percentage of reports were closed with corrective action, and what percentage were closed without removing content or resolving inter-user conflicts? 4) What percentage of youth X users felt that X’s response to their reports helped their situation?

 

Question for Snap

Product testing, question from Kristin Bride (AZ): 

Most companies have a safety evaluation process before releasing a product to the public. However, we have seen some of the worst and most unsafe product ideas released from your platform and marketed to teens. My son Carson died by suicide after he was cyberbullied on Snap through anonymous apps that connect to Snap’s back end, which was predictable given the history of cyberbullying and suicide associated with anonymous apps. Other product features like the Speed Filter and disappearing messages have led to cyberbullying, deaths, and sexual exploitation. We know you test potential new features to see if they increase young people’s engagement. Do you do any testing or analyses of product features from the perspective of young people’s well-being and safety? If so, describe that process. Can you explain how an idea like the Speed Filter, which predictably encouraged developing teens to drive at reckless speeds, ends up being released?

 

Question for Discord

Moderation, exposure to strangers, question from Maurine Molak (TX):

Why has Discord continuously refused to implement more rigorous community moderation, instead leaving server moderation to private individuals? Discord has large open servers that anyone can join, creating a pool of potential targets for groomers. What does Discord plan to do to reduce these risks?

 
