Facebook’s Policy Leak Exposes User Vulnerability

Our work is directly affected by the protocols established by social media companies. So when Facebook’s internal policies on content published to its platform were leaked earlier today, we had to address it. We’re sure you’ve noticed that users have become increasingly outraged by the ease with which alarming footage (rapes, murders, suicides) has been uploaded, livestreamed, and shared. Frankly, this is nothing new, and you might be surprised to know that Facebook, like other social media companies, claims it is not legally responsible for removing anything that shows up on your feed.

Facebook’s content is made by us. 

Facebook’s intent is simple: create an online community driven by user content. With nearly 2 billion users and 1.3 million posts shared every minute, it’s obvious the company has achieved that goal better than any other platform out there. And who does it have to thank? Us. We ultimately decide to use its product, consume its advertising, and indulge its data mining by disclosing our personal information and friendships. The diversity of content reflects the many reasons a person creates a Facebook profile in the first place. For every user who just wants to upload cute photos of their kids on a merry-go-round, there are millions of others who want to reconnect with long-lost friends, spread fake news, or simply read posts without engaging.

Facebook depends on us to report bad content. 

Facebook has always relied on users to report disturbing content, but a few weeks ago a man live-streamed a video in which he murdered his daughter. The video was shared many times before being taken down, and many users wondered why it had been allowed up in the first place. In response, Facebook CEO Mark Zuckerberg pledged to add “3,000 people to . . . review the millions of reports we get every week.” Yes, the sheer volume of reports must be taken into consideration when we critique Facebook’s efforts to remove offending content. Let’s face it, there is A LOT to sift through, judge, and act on. But that’s what Facebook signed up for when it dreamed up a product this widely used: it must also respond to crises when somebody’s life is being destroyed on its platform. Now that Zuckerberg has been more forthcoming about how hard it is to review content in a timely manner, we can only hope this inadvertent transparency nudges the company to change.

Social media companies are protected.

It’s reasonable for users to be upset by what looks like a lack of desire in the C-suite to make Facebook safer. But, like other social media companies, Facebook is riding a wave of opportunity. Why do most brick-and-mortar companies take proactive steps in the name of safety? Because they have to: they’ll be sued or jailed for negligence. Well, that’s not the case with internet companies.

They get special treatment because of a 1996 law protecting them from liability for their users’ conduct. Back then, the Internet was this tiny lil’ thing that Congress wanted to protect from big bad plaintiffs. That law is Section 230 of the Communications Decency Act, and it’s how websites get away with abdicating responsibility for what their users do. They can treat their community-standards rules and policies as decoration and face no consequences for ignoring them. Clearly, one of the benefits of Section 230 is that it saves Internet companies a lot of money. They can sit idly by, printing money, while all hell breaks loose on their platforms. It raises many questions, such as:

Would some of these heinous acts happen at all if not for the platform people use to publish them?

When somebody is murdered on Facebook Live, does the murderer’s satisfaction come from the act itself or from the immediate, vast audience watching it?

Would it still have happened without that audience?

Section 230 closes the door on even asking those questions in a legal setting.

All users are vulnerable to harm without justice.

The truth is, social media companies can be weaponized against anyone through revenge porn, a live-streamed assault, or conspiracy theories claiming someone is a pedophile. And the CDA makes it really hard for an individual to seek legal recourse against a company hosting and profiting from damaging content. If we can’t sue ’em, why should they put money and brainpower into policing content to users’ satisfaction?

Speaking out leads to change.

Facebook is answering the call and wants to build tools that will make the platform a safer place for its users. It was one of the first platforms to ban non-consensual pornography, and it recently launched a photo-matching tool that prevents reported revenge porn from being shared again. These initiatives demonstrate a desire to be more proactive.
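For readers wondering what a photo-matching tool actually does, the general idea is simple: once an image is reported and removed, its digital fingerprint goes on a block list, and any future upload that matches that fingerprint is rejected before it can spread. The toy Python sketch below is purely illustrative, not Facebook’s system; the function names (report_image, allow_upload) are ours, and it catches only exact copies, whereas real tools use perceptual hashes that survive cropping and re-encoding.

```python
import hashlib

# Illustrative only -- not Facebook's implementation. Real photo-matching
# relies on perceptual hashes that tolerate resizing, cropping, and
# re-encoding; this toy version only catches byte-for-byte copies.
blocked_fingerprints = set()

def fingerprint(image_bytes):
    """Return a stable fingerprint for an image's raw bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def report_image(image_bytes):
    """Add a reported image's fingerprint to the block list."""
    blocked_fingerprints.add(fingerprint(image_bytes))

def allow_upload(image_bytes):
    """Reject any upload whose fingerprint matches a reported image."""
    return fingerprint(image_bytes) not in blocked_fingerprints
```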

Final thoughts.

Social media companies should be more transparent about their in-house rules for content monitoring. Shedding that secrecy would help the public better manage its expectations.

