"Accountable to the community".
The numbers, covering the six-month period from October 2017 to March 2018, show that Facebook's automated systems quickly remove millions of pieces of spam, pornography, graphic violence and fake accounts - but that hate-speech content, including terrorist propaganda, still requires extensive manual review to identify.
It's a major effort towards transparency from Facebook in the wake of the Cambridge Analytica scandal.
It also explains some of the reasons for the large swings in the number of violations found between Q4 and Q1, which were usually external events or advances in the technology used to detect objectionable content.
Facebook's new Community Standards Enforcement Report "is very much a work in progress and we will likely improve our methodology over time", Chris Sonderby, VP and deputy general counsel, wrote in a blog post about the report. "This is especially true where we've been able to build artificial intelligence technology that automatically identifies content that might violate our standards".
On child exploitation imagery, Schultz said that the company still needed to make decisions about how to categorise different grades of content, for example cartoon child exploitation images. Rosen added that the reviewers will speak 50 languages in order to be able to understand as much context as possible about content since, in many cases, context is everything in determining if something is, say, a racial epithet aimed at someone, or a self-referential comment.
Some 21 million pieces of content depicting inappropriate adult nudity and sexual activity were taken down, 96 percent of which were first flagged by Facebook's own tools. "This increase is mostly due to improvements in our detection technology", the report notes.
Adult nudity and sexual activity: Facebook says 0.07% to 0.09% of views contained such content in Q1, up from 0.06% to 0.08% in Q4.
In total the social network took action on 3.4 million posts or parts of posts that contained such content.
Most of the content was found and flagged before users had a chance to spot it and alert the platform.
While artificial intelligence is able to sort through nearly all spam and content glorifying al-Qaeda and ISIS and most violent and sexually explicit content, it is not yet able to do the same for attacks on people based on personal attributes like race, ethnicity, religion, or sexual and gender identity, the company said in its first ever Community Standards Enforcement Report.
As Facebook continues to grapple with spam, hate speech, and other undesirable content, the company is shedding more light on just how much content it is taking down or flagging each day.
The social network notes that taking action on flagged content does not necessarily mean the content has been taken down.
It took action on 837 million pieces of spam, though it did not provide view figures for that category.
Facebook estimates that 3-4% of monthly active users during the last three months of 2017 and the first three months of 2018 were fake. Its automated tools worked particularly well on fake accounts and spam: the company said it used them to find 98.5% of the fake accounts it shut down, and "nearly 100%" of the spam.
Facebook has faced a storm of criticism for what critics have said was a failure to stop the spread of misleading or inflammatory information on its platform ahead of the 2016 US presidential election and the UK's referendum vote that year to leave the European Union.