Since high-tech algorithms can’t censor all of the hate speech that comes through Google or YouTube, these giant platforms also rely on human judgment to help combat this pervasive problem. Google has about 10,000 “quality raters” who act as a human layer of control, helping to weed out offensive material submitted by users in traditional display ads or videos, according to nbcnews.com.
These contract workers review thousands of websites, giving recommendations that lift quality sites higher in the search rankings and sink poor-quality ones lower. The raters do not change the rankings directly; their reports inform Google’s engineers, who then refine the search process.
Recently, Google amended the raters’ manual to include tools developed to help combat false news and hateful content. The new “Upsetting/Offensive” flag will be used by these quality checkers when they come across content that advocates violence against a group of people, provides crime how-to information, or depicts graphic violence.
The initiative against extremist content was kicked into high gear last month on Google and its YouTube platform. After several big brands in the UK pulled their advertising because their ads had been placed alongside extremist content, a number of large American brands followed suit. Some of them stated they would return to the platforms only if Google could give substantial guarantees that the situation would not be repeated. With its revenue base in jeopardy, Google began searching harder for solutions to a long-standing problem.
False news and extremist content are a persistent problem on all the major platforms. Last month, Facebook rolled out its own tactic to combat false information. If a “news” item is suspected to be false, viewers can flag it as disputed. If Facebook checks it out and finds the information unreliable, indicators are put in place to warn other users about the highly questionable content.
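The flag-then-verify workflow described above can be sketched in a few lines of code. This is a purely illustrative mock-up, not Facebook’s actual system: the class names, the three-report threshold, and the review function are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of a "disputed content" workflow: users flag an
# item, enough reports trigger a fact check, and items judged unreliable
# carry a warning indicator shown to other viewers. All names and the
# threshold value are illustrative assumptions, not a real platform API.
from dataclasses import dataclass

FLAG_THRESHOLD = 3  # illustrative: reports needed before a review


@dataclass
class NewsItem:
    title: str
    flags: int = 0          # number of "disputed" reports from viewers
    disputed: bool = False  # warning indicator displayed to other users


def report(item: NewsItem) -> bool:
    """A viewer flags the item; returns True once a review is due."""
    item.flags += 1
    return item.flags >= FLAG_THRESHOLD


def review(item: NewsItem, found_unreliable: bool) -> None:
    """Record the fact-checker's verdict on a flagged item."""
    if found_unreliable:
        item.disputed = True   # warn other users going forward
    else:
        item.flags = 0         # clear the reports; item checked out


# Three viewers flag the same story, triggering a review.
story = NewsItem("Example headline")
needs_review = False
for _ in range(3):
    needs_review = report(story)

if needs_review:
    review(story, found_unreliable=True)

print(story.disputed)  # → True
```

The key design point the article highlights is that user flags alone never label content as false; they only queue it for verification, and the warning appears only after the check fails.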