MADRID, 16 Jul. (Portaltic/EP) –

Facebook has started testing, upon the recommendation of its Oversight Board, a feature that informs users when one of their posts is moderated by the social network's automated tools.

Facebook published this Thursday its first report on its Oversight Board, covering the first quarter of 2021. The board is composed of external experts who review appeals in complex moderation cases and issue recommendations to the company.

Although its recommendations are not binding, Facebook has committed to fully or partially implementing 14 of the 18 recommendations issued by the Oversight Board during the first quarter of 2021, while three are under consideration and one has been discarded.

Facebook has already started testing some of the recommendations, such as notifying users whenever automated tools are involved in moderating their posts.

For now, this feature is being tested with a limited number of users, and the company will assess the impact of providing these notifications.

During the first quarter of 2021, Facebook also launched, on the advice of its body of experts, new experiences that let users learn more about why their posts were removed.

In the case of hate speech notifications, Facebook has incorporated an additional classifier that can predict what kind of hate speech is present in the content: violence, dehumanization, mocking of hate crimes, visual comparisons, inferiority, contempt, profanity, exclusion or slurs.
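
The report does not describe how this classifier works, and Facebook's internal models are not public. Purely as an illustration of the general idea, the sketch below shows a generic multi-class text classifier over these nine subtypes in Python; the library choice (scikit-learn), the placeholder training data and all identifiers are assumptions, not Facebook's implementation.

```python
# Hypothetical sketch only: Facebook's actual classifier is not public.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The nine hate speech subtypes named in the report.
SUBTYPES = [
    "violence", "dehumanization", "mocking of hate crimes",
    "visual comparisons", "inferiority", "contempt",
    "profanity", "exclusion", "slurs",
]

# Placeholder training data: a real system would learn from a large corpus
# of human-labeled posts, not one synthetic string per class.
train_texts = [f"placeholder post exemplifying {label}" for label in SUBTYPES]
train_labels = SUBTYPES

# Bag-of-words features plus a linear classifier, the simplest workable setup.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

# The predicted subtype could then be included in the notification shown
# to the user whose post was moderated.
print(model.predict(["placeholder post exemplifying exclusion"])[0])
```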

This more specific information on hate speech moderation is already provided to Facebook users in English, and the company has committed to extending it to more languages in the future.

Likewise, Facebook has updated its policy on dangerous organizations and individuals, creating three content moderation tiers based on severity and adding definitions of key terms.
