Tired of toxic commenters or trolls joining your online discussions without contributing anything constructive? Google and Jigsaw recently rolled out a new technological tool that can identify toxic comments, allowing publishers to weed them out.
The new tool, called Perspective, employs machine learning. Its users gain the upper hand over online trolls, since they can decide how to handle comments that the system identifies as toxic.
Harnessing the Power of Machine Learning
As Google’s former CEO Eric Schmidt tweeted, “Machine learning has so much potential. Proud to see Jigsaw Team & Google using it to help publishers fight comment trolls.” The users who will benefit from the new tool include not just self-publishers but also news organizations trying to cultivate healthy discourse.
With a growing number of commenters straying off topic, resorting to name-calling, or accusing well-meaning discussion starters of politicking and other offenses, such a tool helps put things in perspective, as its name suggests, and curbs online forum abuse. Jigsaw, an Alphabet company focused on security, wrote: “Perspective puts the power of machine learning in the hands of publishers to host better discussions.”
How It Works
Comments in news feeds and on websites run the gamut from simple to insightful to absurd to toxic. A toxic online comment is one that may prompt people to leave the conversation.
If you or one of your incisive friends posts a sentence or a lengthier piece about a raging issue of the day, such as how you feel about Donald Trump’s temporary immigration ban on refugees entering the U.S., the publisher will most likely get a barrage of comments. The fast ‘spray’ of words from commenters may include nasty names, cuss words, and tangential remarks, not just reasonable reactions to the post.
Through machine learning models, Google’s application programming interface (API) scores the perceived impact a comment might have on a conversation. The Perspective tool assigns each comment a ‘toxicity’ score indicating how likely it is to drive others out of the discussion.
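Conceptually, a client sends a comment to the API and gets back a toxicity score between 0 and 1. The Python sketch below illustrates the shape of such an exchange; the payload fields and the sample response follow Perspective's documented JSON format, but no live call is made, and the score shown is made up for illustration.

```python
# Sketch of the request/response shapes used when asking Perspective
# to score a comment for TOXICITY. No network call is made here; a
# hand-written sample response shows where the score lives.

def build_analyze_request(comment_text):
    """Payload requesting a TOXICITY score for one comment."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity(response):
    """Pull the 0-1 toxicity probability out of an API-style response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Example response shaped like the API's JSON (the value is invented).
sample_response = {
    "attributeScores": {
        "TOXICITY": {
            "summaryScore": {"value": 0.92, "type": "PROBABILITY"}
        }
    }
}

payload = build_analyze_request("You're an idiot and so is your post.")
score = extract_toxicity(sample_response)
print(score)  # a score near 1.0 means the comment is very likely toxic
```

In a real integration, the payload would be POSTed to the Perspective endpoint with an API key, and the score would come back in the live response rather than a canned dictionary.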
Once toxicity scores have been assigned, publishers (including websites that adopt the tool) can decide what to do with the information. A publisher may opt to flag comments for human moderators’ review and then determine whether or not to include them in a conversation.
Alternatively, a publisher may provide tools that help participating members of the online community understand the impact of what they are writing. Site managers can also configure Perspective to remove toxic comments directly.
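The moderation options described above amount to a simple policy on top of the score. A minimal sketch, with thresholds that are purely illustrative assumptions (the article and the API prescribe no particular cutoffs):

```python
# Sketch of a publisher-side moderation policy driven by toxicity scores.
# Both thresholds are hypothetical; each site would tune its own.
FLAG_THRESHOLD = 0.7    # route to a human moderator
REMOVE_THRESHOLD = 0.9  # drop automatically, if the site opts in

def moderate(toxicity_score, auto_remove=False):
    """Return the action a publisher might take for a scored comment."""
    if auto_remove and toxicity_score >= REMOVE_THRESHOLD:
        return "remove"
    if toxicity_score >= FLAG_THRESHOLD:
        return "flag_for_review"
    return "publish"

print(moderate(0.03))                    # publish
print(moderate(0.95))                    # flag_for_review
print(moderate(0.95, auto_remove=True))  # remove
```

The `auto_remove` switch mirrors the choice the article describes: some sites only flag borderline comments for human review, while others have Perspective filter the worst ones automatically.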
To request API access, users may visit the Perspective API site. They will be asked for a short description of how they intend to use the API to improve conversations online, along with a link to their website or research project.