Jakarta - Facebook removed 583 million fake accounts in the first quarter of 2018. The move is an exercise in corporate transparency following the Cambridge Analytica scandal.
As announced on Facebook's blog on Wednesday (05/16/2018), the Menlo Park, United States-based company released its Community Standards Enforcement Report for the first time.
Facebook's VP of Product Management, Guy Rosen, said the company often gets questions about how it decides what is allowed on its platform.
"Over the years, we have had Community Standards that explain what stays up and what comes down. Three weeks ago, for the first time, we published the internal guidelines we use to enforce those standards, and today we are releasing the numbers in a Community Standards Enforcement Report so you can judge our performance for yourselves," he said.
The report covers October 2017 through March 2018 and spans six areas: graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, spam, and fake accounts.
To clear its platform of this inappropriate content, Facebook relies on artificial intelligence (AI) technology.
"Most of the action we take against bad content involves removing spam and the fake accounts used to spread it," he said.
Facebook took down 837 million pieces of spam during the first quarter of 2018, nearly 100% of which was found and flagged before anyone reported it. Facebook also disabled 583 million fake accounts.
"Overall, we estimate that around 3%-4% of active Facebook accounts are still fake," said Rosen.
Facebook also took down 21 million pieces of adult nudity and sexual activity content in the first quarter, 96% of which was found and flagged by AI technology before users reported it. Rosen estimates that for every 10,000 pieces of content viewed on Facebook, 7 to 9 views were of content that violated its adult nudity and pornography standards.
Facebook also removed 3.5 million pieces of violent content, 86% of which was identified before being reported.
Regarding hate speech, the company led by Mark Zuckerberg removed 2.5 million pieces of content during the first three months of 2018, 38% of which was flagged by its technology.
Facebook still has plenty of work to do to prevent abuse. Its artificial intelligence technology has proven effective so far, but still needs improvement.
"We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too," Rosen said.
"This is the same data we use to measure our progress internally, and you can now see it to judge our progress for yourselves. We look forward to your feedback," he concluded.