Facebook shut 583 million fake accounts in first 3 months of 2018

Facebook stepped further into its new era of data transparency Tuesday with the release of its inaugural Community Standards Enforcement Report.

The report covers six categories of violating content: hate speech, graphic violence, adult nudity and sexual activity, terrorist propaganda, spam and fake accounts. The company says more than 96 percent of the posts removed for featuring sex, nudity or terrorism-related content were flagged by its monitoring software before any users reported them.

Under increasing pressure to disclose how it polices its platform, Facebook revealed it took down 837 million pieces of spam content between January and March of this year.

The firm also disabled about 583 million fake accounts, most of them within minutes of registration.

Facebook only recently developed the metrics to measure its progress and will probably refine them over time, said Guy Rosen, its vice president of product management.

Facebook says it disabled almost 1.3 billion fake accounts in the six months through March. The company estimates that out of every 10,000 pieces of content viewed on Facebook, 7 to 9 views were of content that violated its adult nudity and sexual activity standards, or roughly 0.07 to 0.09 percent of views.

Facebook also took enforcement action against 21 million posts containing nudity or sexual activity in the quarter.

Separately, Damian Collins, chair of the UK's Digital, Culture, Media and Sport Committee, said in a statement Tuesday that Facebook had told him Zuckerberg "has no plans to travel to the United Kingdom".

In the first quarter of 2018, Facebook removed 2.5 million pieces of hate speech from its social network.

While artificial intelligence can catch nearly all spam and content glorifying al-Qaeda and ISIS, as well as most violent and sexually explicit material, it is not yet able to do the same for attacks on people based on personal attributes such as race, ethnicity, religion, or sexual and gender identity, the company said in the report.

"For example, artificial intelligence isn't good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue". But the report also indicates Facebook is having trouble detecting hate speech, and only becomes aware of a majority of it when users report the problem.

The report also says Facebook found and flagged 85.6 percent of the graphic violence content it acted on before users reported it. The company credited better detection technology, even as it acknowledged that computer programs still have trouble understanding the context and tone of language. Facebook's reputation took a serious hit after news broke of its alleged role in facilitating questionable use of user data, and it badly needs a win to rebuild trust.