With 1.39 billion active users worldwide, Facebook’s social network is the closest thing we have to a universal communication platform. And people post — or try to post — just about everything you can imagine.
On Monday, the company will clarify its community standards to give its users more guidance about what types of posts are not allowed on the service.
Facebook walks a delicate line when it tries to ban violent or offensive content without suppressing the free sharing of information that it says it wants to encourage. Its audience is vast, with huge variance in age, cultural values and laws across the globe. Yet despite its published guidelines, the reasoning behind Facebook’s decisions to block or allow content is often opaque and inconsistent.
For example, the company flip-flopped repeatedly on whether to allow beheading videos on the service before recently deciding to ban them. In December, it blocked a page in Russia that was promoting an antigovernment protest, then allowed copycat pages to stay up. And in October, it created an exception to its requirement that people use their real names on the service when it allowed San Francisco’s drag queens to use their stage names while continuing to crack down on others using false names.
“We’re trying to strike the balance based on the way our community works,” Monika Bickert, Facebook’s head of global policy management, said in an interview. “The landscape is complicated.”
The company hopes that a more specific explanation of its rules will take some of the mystery out of what it will and will not allow.
Terrorist organizations like the Islamic State have long been banned from the service. But supporting or praising groups involved in “violent, criminal or hateful behavior” is also banned, the updated rules say.
Threatening people with physical or financial harm, or bullying them by posting items intended to degrade or shame them, is also prohibited. So is anything that encourages suicide or eating disorders.
Facebook has always banned pornography and most other nudity, but it is now diving into the nuances. “We remove photographs of people displaying genitals or focusing in on fully exposed buttocks,” it says. It also restricts some images of female breasts if the nipple shows, “but we always allow photos of women actively engaged in breast-feeding or showing breasts with post-mastectomy scarring.” Photos of paintings, sculptures and other art that depicts nude figures are also fine.
The company is for the first time explicitly banning content promoting sexual violence or exploitation, including so-called revenge porn, which it defines as intimate images “shared in revenge or without permission from the people in the images.” (Twitter has also updated its rules to forbid revenge porn.)
One thing that has not changed: Facebook has no plans to automatically scan for and remove potentially offensive content, Ms. Bickert said.
Facebook will still rely on users to report violations of the standards. Ms. Bickert said that the company had review teams working at all hours of the day around the globe, and that every report was examined by one of them before a decision was made.
The process can take time — typically 48 hours on matters of safety, she said. That may not be fast enough for some people in an era when graphic content can go viral in minutes. Twitter, which is a much more public forum, has come under fire from women’s advocates and antiterror groups for not responding quickly enough to reports of abusive or violent tweets.
But Facebook wants to take into account the full context of a post, Ms. Bickert said. For example, a victim of a violent attack might post images on Facebook as a way of raising public awareness. “Sometimes the best way to share information about atrocities in the world is Facebook,” she said. “We recognize that is a very challenging issue.”
Facebook’s rulings can also be appealed. “If a person’s account is suspended, those appeals are read by real people who can look into the specifics,” she said.
Ms. Bickert said that clarifying its rules helped not only Facebook users but also the people who reviewed possible violations to decide what was permissible. “We can only do this if we have objective rules,” she said.
Governments also ask Facebook to take down posts. In conjunction with the updated community standards, the company plans to publish on Monday its latest transparency report, which discloses country-by-country information on government requests for user data and the removal of content.
In the report, Facebook says that in the second half of 2014, it restricted 9,707 pieces of content for violating local laws, up 11 percent from the first half of the year. Of those, India requested the most takedowns, with 5,832, and Turkey was not far behind with 3,624. No content was restricted in the United States based on government requests.
The number of government requests for account data increased slightly, to 35,051, compared with 34,946 in the first half. The United States was at the top of the list, making 14,274 requests for information on 21,731 Facebook accounts, with the company agreeing to turn over information in 79 percent of the cases.
“Moving forward, we will continue to scrutinize each government request and push back when we find deficiencies,” Chris Sonderby, Facebook’s deputy general counsel, said in a statement. “We will also continue to push governments around the world to reform their surveillance practices in a way that maintains the safety and security of their people while ensuring their rights and freedoms are protected.”