Facebook does not fully know why people are posting more graphic violence but believes continued fighting in Syria may have been one reason, said Alex Schultz, Facebook's vice president of data analytics.
The company says it shut down some 583 million fake accounts in the first quarter of 2018, most of them within minutes of being created.
The company removed, or placed a warning screen in front of, 3.4 million pieces of graphic-violence content in the first quarter, almost triple the 1.2 million a quarter earlier, the world's largest social network said in a published report.
"Today, as we sit here, 99 percent of the ISIS and al-Qaida content that we take down on Facebook, our AI systems flag before any human sees it", Zuckerberg said at the hearing.
The prevalence of graphic violence was higher as well, with an estimated 22 to 27 of every 10,000 content views containing such material, an increase from the previous quarter that suggests more Facebook users are sharing violent content on the platform, the company said. Facebook's vice president of product management, Guy Rosen, noted that the social network has significantly increased its efforts over the past year and a half to flag and remove inappropriate and derogatory content.
Overall, Facebook estimates that around 3% to 4% of the accounts active on the site during this period were still fake. The company also took action on 837 million pieces of spam, 1.9 million pieces of content promoting terrorism, 2.5 million pieces of hate speech, and 21 million posts containing nudity or sexual activity.
"We have a lot of work still to do to prevent abuse", Zuckerberg said".
In the first quarter of 2018, Facebook removed 2.5 million pieces of hate speech from its social network.
Facebook on Tuesday unveiled for the first time a transparency report that shows an increasing number of posts identified as containing graphic violence in the first quarter of 2018.
"Artificial intelligence isn't good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue", said Rosen. The company credited better detection, even as it said computer programs have trouble understanding context and tone of language. In this case, 86% was flagged by its technology.
"In addition, in many areas - whether it's spam, porn or fake accounts - we're up against sophisticated adversaries who continually change tactics to circumvent our controls, which means we must continuously build and adapt our efforts". The Facebook leader added that the company would notify users if their data were compromised.