Facebook is getting better at detecting hate speech
The social network just released its first content moderation report.
In the first quarter of 2018...
- Facebook closed 583 million fake accounts; the company estimates that three to four percent of its monthly users are fake.
- Facebook took enforcement action against 21 million posts containing nudity.
- The company found 2.5 million posts containing hate speech, a 56-percent increase over the last quarter of 2017.
- The number of terrorism-related posts removed increased by 73 percent over the previous quarter.
Why the numbers matter: It’s a picture of the sheer quantity of content Facebook’s moderators—both human and algorithmic—churn through. But remember, these numbers represent only what’s actually been identified.
The road ahead: Facebook’s software for policing content is getting better, but it’s nowhere near ready to take over sole responsibility: 62 percent of the hate speech posts on which action was taken were first reported by users.