On March 15, a white supremacist killed 51 people in two mosques in New Zealand and broadcast the attack live on the Internet. The horrendous crime exposed, in all its cruelty, the inability of regulators and platforms to prevent the spread of hate messages on social networks. Two months later, 26 countries and the internet giants have committed to the 'Christchurch Call', a pledge that something like this cannot happen again. How? By improving, among other things, detection systems, so as to get ahead of terrorists and those who disseminate their messages on the networks.
The four-page document has been promoted by French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern, and details the commitments of governments and digital platforms to fight terrorism and violent extremism on the Internet. It amounts to a voluntary code of conduct, signed today at the Élysée Palace by heads of state and government such as King Abdullah of Jordan, Canadian Prime Minister Justin Trudeau, and British Prime Minister Theresa May, alongside executives of technology companies such as Google, Microsoft and Facebook.
The social network founded by Mark Zuckerberg was the first to respond to the call: today it promised to restrict access to its Facebook Live tool, which allows the live broadcast of images, for users who have been sanctioned by the platform for violating its most sensitive standards.
The appeal, Macron said, brings together "all internet actors" and proposes "concrete actions to accompany it". Governments are asked to reinforce existing legislation or create new laws to prohibit violent extremist content; platforms are asked to prevent, in a transparent manner, the upload of hateful content and its dissemination on the networks, as well as to immediately and permanently remove any that does get published. Each company must decide for itself, however, what it considers violent extremist content and, therefore, unacceptable, since the document does not include a definition.
The signing platforms have committed to sharing figures on the number of hate messages they remove from their pages, something most of them have already begun to do. Twitter, for example, recently announced that it closed 166,513 accounts for "terrorism" in the second half of 2018, and Facebook claims that its tools detect 99% of terrorist content, and that half of it lasts no longer than two minutes on its site. The appeal also calls for the improvement and wider adoption of databases of prohibited content, promoted by the European Commission in 2016 – its president, Jean-Claude Juncker, was also present at Wednesday's signing of the document – which already hold 100,000 items automatically detected as terrorist propaganda.
The Christchurch attack, whose macabre video went viral in a matter of hours, tested the ability of giants like Facebook to quickly detect content that needed to be removed, a real technical challenge because, among other things, users had made different edits of the footage. The platform announced on Wednesday that it will invest 7.5 million dollars (6.7 million euros) to improve its analysis of still images and videos.
"I will always see social networks as something positive," said Jacinda Ardern. "Unfortunately, however, we believe they can also be used to spread hatred. Our challenge is to find how to guarantee free and secure access to these platforms while preventing, in the future, the evil we have seen."