"Accountable to the community".
Facebook's stock slid Tuesday after the company announced it had shut down 583 million fake accounts in the past three months.
The problem is that, as Facebook's VP of product management Guy Rosen wrote in the blog post announcing today's report, AI systems are still years away from becoming effective enough to be relied upon to catch most bad content.
The report also explains some of the reasons for the large swings in the number of violations found between Q4 and Q1, which it attributes mostly to external factors or to advances in the technology used to detect objectionable content.
The report did not directly address the spread of false news, which Facebook has previously said it is trying to stamp out by increasing transparency around who buys political ads, strengthening enforcement and making it harder for so-called "clickbait" to show up in users' feeds. "The rate at which we can do this is high for some violations, meaning we find and flag most content before users do", the company said of its proactive detection. Where Facebook once relied largely on users to report offensive content, artificial intelligence technology now does much of that work.
The data also illustrates where Facebook's AI moderation systems are effectively identifying and taking down problematic content - and where they still struggle to identify problems.
Facebook "took action" on 3.4 million pieces of content containing graphic violence, an increase it attributed to the enhanced use of photo-detection technology. Such content represented between 0.22 and 0.27 percent of the total content viewed by Facebook's more than two billion users from January through March. The company also took action on 21 million pieces of content containing nudity and sexual activity.
Several categories of violating content outlined in Facebook's moderation guidelines - including child sexual exploitation imagery, revenge porn, credible violence, suicidal posts, bullying, harassment, privacy breaches and copyright infringement - are not included in the report.
"Hate speech content often requires detailed scrutiny by our trained reviewers to understand context and decide whether the material violates standards", the company added in the report.
The social network notes that "taking action" on flagged content does not necessarily mean it has been taken down.
The company took down 837 million pieces of spam in Q1 2018, almost all of it flagged before any users reported it. Overall, the social giant estimated that around 3 to 4 percent of accounts active on the site during Q1 were still fake.
Most of the roughly 583 million fake accounts the firm disabled were shut down within minutes of registration. "Our metrics can vary widely for fake accounts acted on", the report notes, "driven by new cyberattacks and the variability of our detection technology's ability to find and flag them".
Facebook has faced a storm of criticism over what detractors have called its failure to stop the spread of misleading or inflammatory information on its platform ahead of the US presidential election and Britain's vote to leave the European Union, both in 2016.