Results 1 to 3 of 3

Thread: Internet Researchers Harnessed the Power of Algorithm to Find 'Hate Speech'

  1. #1
    Funding Member
    "Friend of Germanics"
    Skadi Funding Member

    Schmetterling
    Join Date
    Jan 2008
    Last Online
    @
    Ethnicity
    German
    Gender
    Age
    36
    Posts
    756
    Thanks Given
    30
    Thanks Received
    63
    Thanked in
    30 Posts

    Internet Researchers Harnessed the Power of Algorithm to Find 'Hate Speech'

    via Twitter

    Researchers at the University of Rochester have developed an artificial intelligence system that can identify coded hate speech online.

    In early 2016, Google unveiled tech incubator Jigsaw, with the intention of “substantially reducing” online hate and harassment.

    But the plan backfired when trolls responded with the “Operation Google” campaign, which replaces racial slurs with names of technology brands and products.

    “Google,” for example, refers to black people, while fellow search engines “Yahoo” and “Bing” allude to Mexicans and Asians, and Jewish folks are called “Skypes.”

    The idea was to force Google to censor its own websites by making these common words synonymous with bigotry.

    Now, analysts at New York’s University of Rochester are fighting back with an AI of their own.

    The team collected about a quarter of a million unique English tweets posted between Sept. 23—the date of the first reported use of the hate-code words—and Oct. 18—a week after the second US presidential debate between Hillary Clinton and Donald Trump.

    “There were some interesting observations that we noted,” they wrote in a paper that will be presented in May at Montreal’s International Conference on Web and Social Media.

    The top 10 terms among those tweets labeled as hateful include hashtags #MAGA (Trump’s campaign slogan “Make America Great Again”), #MAWA (the more racist “Make America White Again”), and #ALTRIGHT. Researchers also found words like “white,” “war,” “hate,” and “destroy.”

    Of course, none of that should surprise anyone who has not been living in a cave without Internet access for the past two years. What may (and definitely should) appall people, though, is the occurrence of the term “gas,” used almost exclusively within hateful tweets about Jews.

    “It is particularly this unchecked abject display of hatred and calls for violence that we hoped to capture through our system,” the scientists said.

    With their project a success—”apart from the system’s ability to predict for a given tweet whether it is hateful or not, the system also generates a list of users who frequently post such content”—the team intends to continue working to curb online harassment.
    Source
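
    As a rough illustration of the user-flagging step the Rochester team describes above, it amounts to counting positive predictions per account; the function, field names and threshold below are assumptions for illustration, not the team's code.

    ```python
    # Sketch of the user-flagging step described above: once each tweet has a
    # hateful / not-hateful prediction, count hateful predictions per account.
    from collections import Counter

    def flag_frequent_posters(tweets, classify, min_hateful=5):
        """tweets: iterable of dicts with 'user' and 'text' keys.
        classify: any callable returning True if a tweet is predicted hateful."""
        counts = Counter(t["user"] for t in tweets if classify(t["text"]))
        return sorted(user for user, n in counts.items() if n >= min_hateful)
    ```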

    When it comes to excising hate speech videos and illicit content from YouTube, Google plans to go hard on developing automated programs and machine learning to scan the world’s second most popular website for extremist and controversial materials. While human moderators take care of the bulk of this work right now, YouTube is already rolling out A.I. systems to handle these duties, and machines will take on more of that work as time passes.

    About a month ago, a YouTube spokesperson acknowledged hate speech was becoming a bigger issue. The company hopes A.I. can be the solution to cracking down on the dissemination of extremist speech across the website.

    “Our initial use of machine learning has more than doubled both the number of videos we’ve removed for violent extremism, as well as the rate at which we’ve taken this kind of content down,” a YouTube spokesperson told The Guardian. “Over 75 percent of the videos we’ve removed for violent extremism over the past month were taken down before receiving a single human flag.”

    About 400 hours of content are uploaded to YouTube every minute. Moderating all that video requires a highly efficient system, which makes machines an ideal solution.
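
    For a sense of scale, a back-of-the-envelope calculation from that 400-hours-per-minute figure (mine, not the article’s):

    \[
    400\ \tfrac{\text{hours of video}}{\text{minute}} \times 60 \times 24 = 576{,}000\ \tfrac{\text{hours of video}}{\text{day}} \approx 66\ \text{years of continuous viewing, uploaded every day}
    \]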

    Of course, the big question is exactly what standards YouTube plans to enforce. According to The Guardian, the company doesn’t necessarily have to do anything about content that is objectionable, but not illegal. Those kinds of videos will not be taken down, but rather made available in a “limited state.” It’s unclear exactly what that means, but the company spokesperson told The Guardian, “The videos will remain on YouTube behind an interstitial, won’t be recommended, won’t be monetized, and won’t have key features including comments, suggested videos, and likes.”

    Moreover, YouTube is forced to censor different types of videos depending on where the user is accessing the site. Trained algorithms will need to be customized according to different countries’ standards and laws.

    One big concern that seems to go unaddressed is whether these A.I. censorship programs may go too far, needlessly scrubbing content that violates no laws or speech standards, or whether they might be hijacked for nefarious purposes. Those issues, of course, permeate the rest of the internet as well, and it’s unlikely YouTube and Google will be able to fully escape such concerns.
    Source

    How the Rules Work

    According to Facebook’s rules, there are protected categories—like sex, gender identity, race, and religious affiliation—and non-protected categories—like social class, occupation, appearance, and age. If speech refers to the former, it’s hate speech; if it refers to the latter, it’s not. So, “we should murder all the Muslims” is hate speech. “We should murder all the poor people” is not.

    This binary designation might make some uncomfortable, but it’s when protected and unprotected classes get linked together in a sentence—a compound category—that Facebook’s policies become extra strange. Facebook’s logic dictates the following:

    Protected category + Protected category = Protected category

    Protected category + Unprotected category = Unprotected category


    To illustrate this, Facebook’s training materials provide three examples—“white men”, “female drivers”, and “black children”—and state that only the first of these is protected from hate speech. Why? Because “white” + “male” = protected class + protected class, and thus the resulting class of people is protected. Counterintuitively, because “black” (a protected class) modifies “children” (not protected), the group is unprotected.
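
    To make the rule concrete, here is a minimal sketch of the compound-category logic as described above; the category list and function name are illustrative assumptions, not Facebook’s actual code or taxonomy.

    ```python
    # Minimal sketch of the compound-category rule described above.
    # The PROTECTED set is illustrative; anything not listed here
    # (age, occupation, appearance, "radicalized", etc.) counts as unprotected.
    PROTECTED = {"white", "black", "male", "female", "muslim", "jewish"}

    def compound_is_protected(components):
        """A compound group is protected only if every component is protected."""
        return all(c in PROTECTED for c in components)

    print(compound_is_protected(["white", "male"]))      # True  -> "white men" is protected
    print(compound_is_protected(["female", "drivers"]))  # False -> "female drivers" is not
    print(compound_is_protected(["black", "children"]))  # False -> "black children" is not
    ```

    The same all-components-must-be-protected logic is what later lets modified phrases like “radicalized Muslims” slip through, as discussed further down.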

    Math + Language = Murky

    In math, this kind of logical rule-setting is called symbolic logic, and it has understandable rules. The word-based logic discipline was first created in the nineteenth century by mathematician George Boole, and has since become essential to the development of everything from computer processors to linguistics. But you don’t need to have a PhD in logic or the philosophy of language to recognize when basic rules are being violated. “Where did @facebook’s engineers take their math classes? Members of subset C of set A are still members of A,” tweets Chanda Prescod-Weinstein, an astrophysicist at the University of Washington.
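
    Prescod-Weinstein’s objection, restated in standard set notation (my formulation, not hers):

    \[
    C \subseteq A \;\Longrightarrow\; \forall x\,(x \in C \rightarrow x \in A)
    \]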

    Philosophers of language think a lot about how modifying a category alters the logic of a sentence. Sometimes when you have a word for a category—like white people—and you replace it with a subset of that same category—like white murderers—the inference doesn’t follow. Sometimes it does. For instance, take the phrase “All birds have feathers” and replace it with “All white birds have feathers.” The second sentence still makes logical sense and is a good inference. But take “Some bird likes nectar” and replace it with “Some white bird likes nectar,” and the new sentence may not be true anymore—maybe only green birds like nectar. It’s a bad inference.
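
    Teichman’s bird examples reduce to a simple pattern: “all” claims survive restricting the subject to a subset, while “some” claims may not. A toy check, with made-up bird data purely for illustration:

    ```python
    # Made-up data illustrating Teichman's examples; not from the article.
    birds = [
        {"name": "swan",        "color": "white", "has_feathers": True,  "likes_nectar": False},
        {"name": "dove",        "color": "white", "has_feathers": True,  "likes_nectar": False},
        {"name": "hummingbird", "color": "green", "has_feathers": True,  "likes_nectar": True},
    ]

    white_birds = [b for b in birds if b["color"] == "white"]

    # "All birds have feathers" -> "All white birds have feathers": still true (good inference).
    print(all(b["has_feathers"] for b in birds), all(b["has_feathers"] for b in white_birds))

    # "Some bird likes nectar" -> "Some white bird likes nectar": no longer true (bad inference).
    print(any(b["likes_nectar"] for b in birds), any(b["likes_nectar"] for b in white_birds))
    ```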

    Facebook’s rules appear to assume that whenever a protected category is modified with an unprotected category, the inference is bad. So just because “black people” is a protected class, it explicitly doesn’t follow that “black children” is a protected class, even though the average person looking at that example would say that black children is a subset of black people.

    The fact is, there isn’t a way to know systematically whether replacing a category with a subcategory will lead to a good or bad inference. “You have to plug in the different examples,” says Matt Teichman, a philosopher of language at the University of Chicago. “You have to just look at the complexity of what’s happening to see for sure.”

    Teichman muses over one example that might support Facebook’s algorithm: White murderers should all die. “Whenever I come across wacky policies like this I try to think, is there any conceivable way to justify it?” he says. There, the subset “murderers” is, in most cases, bad. So maybe it makes sense to be able to direct hate speech at them. But a murderer’s race is—or at least it should be—completely irrelevant to the badness, and including race in that sentiment seems problematic at best.

    Knowing the Rules Changes the Game

    Now that people know what Facebook’s rules are, there are a lot of ways to break them. For instance, if someone uses the term “radicalized Muslims” on Facebook, that isn’t hate speech (the modifier “radicalized” makes that group unprotected). By simply applying a modifier to a protected class, a person can perpetuate a stereotype and disparage a subcategory while dog-whistling about the whole group, all while not breaking Facebook’s rules.

    “There’s an interesting legal difference between literal meaning and implied meaning. You're on the hook for what you literally say, but you can often kind of weasel out of really being committed to the stuff you just implied,” says Teichman.

    Following the rules can very quickly become an exercise in absurdity. One could modify a protected group in such a way as to include larger and larger swaths of that group. Take, for example, saying “Black children shouldn’t be allowed in our town” versus “Black adults shouldn’t be allowed in our town.” Neither statement breaks Facebook’s rules, yet between them they target the entire black community. And the gaming of the rules doesn’t end there. Just by modifying a protected group’s name with a description of appearance—“ugly,” “fat”—one can add insult to demeaning injury.

    Looking at the rules as a whole, ProPublica reports that Facebook developed them in reaction to complaints from specific actors, such as governments and users. At one point, the rules were open-ended, including a general rule that said, “‘Take down anything else that makes you feel uncomfortable,’” says Dave Willner, a former Facebook employee, in the ProPublica report. Willner revised the current rules to make them more rigorous. The result, Teichman says, appears to be a patchwork constructed not from some top-down ethical determination, but rather a list slapped together over time. “Categories get this hodgepodge when they’re just the result of being stitched together out of complaints people made,” says Teichman.
    Source

    In the wake of a terrorist attack in London earlier this month, a U.S. congressman wrote a Facebook post in which he called for the slaughter of “radicalized” Muslims. “Hunt them, identify them, and kill them,” declared U.S. Rep. Clay Higgins, a Louisiana Republican. “Kill them all. For the sake of all that is good and righteous. Kill them all.”

    Higgins’ plea for violent revenge went untouched by Facebook workers who scour the social network deleting offensive speech.


    But a May posting on Facebook by Boston poet and Black Lives Matter activist Didi Delgado drew a different response.

    “All white people are racist. Start from this reference point, or you’ve already failed,” Delgado wrote. The post was removed and her Facebook account was disabled for seven days.


    A trove of internal documents reviewed by ProPublica sheds new light on the secret guidelines that Facebook’s censors use to distinguish between hate speech and legitimate political expression. The documents reveal the rationale behind seemingly inconsistent decisions. For instance, Higgins’ incitement to violence passed muster because it targeted a specific sub-group of Muslims — those that are “radicalized” — while Delgado’s post was deleted for attacking whites in general.

    One Facebook rule, which is cited in the documents but that the company said is no longer in effect, banned posts that praise the use of “violence to resist occupation of an internationally recognized state.” The company’s workforce of human censors, known as content reviewers, has deleted posts by activists and journalists in disputed territories such as Palestine, Kashmir, Crimea and Western Sahara.
    Source
    "Tradition doesn't mean holding on to the ashes, it means passing the torch."
    - Thomas Morus (1478-1535)

  2. #2
    Senior Member Theunissen
    Join Date
    Aug 2017
    Last Online
    3 Hours Ago @ 03:22 PM
    Ethnicity
    Germanic
    Ancestry
    North Western Europe
    Country
    South Africa
    State
    Transvaal
    Location
    South Africa
    Gender
    Posts
    543
    Thanks Given
    195
    Thanks Received
    296
    Thanked in
    163 Posts
    Shows you where they are heading: control of information and communication to create an environment serving the globalist agenda.

    Their opposition? All kinds of Nationalists, but also Christians and certain other groups, although the issues may differ depending on which part of the agenda is violated.

  3. #3
    Funding Member
    "Friend of Germanics"
    Skadi Funding Member

    Nachtengel
    Join Date
    Jul 2008
    Last Online
    @
    Ethnicity
    German
    Gender
    Posts
    5,916
    Thanks Given
    94
    Thanks Received
    765
    Thanked in
    420 Posts

    Internet Researchers Harnessed the Power of Algorithm to Find 'Hate Speech'

    During the municipal elections in spring 2017, a group of researchers and practitioners specialising in computer science, media and communication implemented a hate speech identification campaign with the help of an algorithm based on machine learning.

    At the beginning of the campaign, the algorithm was taught to identify hate speech as diversely as possible, for example on the basis of big data obtained from open chat groups. The algorithm learned computationally what distinguishes a text that includes hate speech from a text that does not, and developed a categorisation system for hate speech. The algorithm was then used daily to screen all openly available content the candidates standing in the municipal elections had produced on Facebook and Twitter. The candidates’ account information was gathered using the material in the election machine of the Finnish Broadcasting Company Yle.
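
    The release does not name the model, so the sketch below only illustrates the kind of pipeline described: learn from labelled messages what distinguishes hate speech from other text, then screen candidates’ public posts daily. The TF-IDF-plus-logistic-regression choice, the threshold and the field names are assumptions.

    ```python
    # Illustrative sketch only; the campaign's actual model and data format are not public.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def train_classifier(texts, labels):
        """texts: labelled training messages; labels: 1 = hate speech, 0 = not."""
        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                              LogisticRegression(max_iter=1000))
        model.fit(texts, labels)
        return model

    def screen_daily_posts(model, posts, threshold=0.8):
        """Return posts whose predicted probability of being hate speech
        exceeds the threshold, for human review."""
        probs = model.predict_proba([p["text"] for p in posts])[:, 1]
        return [p for p, prob in zip(posts, probs) if prob >= threshold]
    ```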

    All parties committed themselves to not accepting hate speech in their election campaigns. However, if a candidate used a personal Facebook profile instead of the page created and reported for the campaign, it was not included in the monitoring. Finnish word forms and the algorithm’s limited capability to interpret context the way humans do also proved challenging. The Perspective classifier developed by Google for the identification of hate speech has suffered from the same problems in recognising context and handling, for example, spelling mistakes.

    Once the messages have been identified, it is key to define the actions that will follow.

    ‘From the point of view of the authorities, there were no more than 20 messages that caused measures. Listing words as such is not sufficient because words get their meaning from the way they are combined. On the other hand, without the hate speech machine and researchers, we would not have the resources to do monitoring on this scale’, says Non-Discrimination Ombudsman Kirsi Pimiä.

    Hate speech draws on emotions and beliefs about inequality

    To teach the algorithm, the researchers prepared training material consisting of thousands of messages and cross-analysed it to make it scientifically valid.

    ‘When categorising messages, the researcher has to take a stance on the language and context, and it is therefore important that several people participate in interpreting the teaching material’, says Salla-Maaria Laaksonen from the University of Helsinki.
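
    The release does not say how agreement between annotators was measured; one common choice for two annotators is Cohen’s kappa, sketched here purely as an illustration.

    ```python
    # Cohen's kappa for two annotators labelling the same messages
    # (1 = hate speech, 0 = not). Illustrative only; not from the article.
    def cohens_kappa(labels_a, labels_b):
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        p_a1, p_b1 = sum(labels_a) / n, sum(labels_b) / n
        expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)   # agreement expected by chance
        return (observed - expected) / (1 - expected)

    print(cohens_kappa([1, 0, 1, 1, 0, 0], [1, 0, 1, 0, 0, 0]))  # ~0.67
    ```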

    It was important that all types of hate speech could be found during the campaign. Immigration and asylum seekers are often the most prominent themes, but it is equally important to identify hate speech targeted at women, ethnic minorities or certain political opinions.

    ‘Hate speech has always existed. It has always been produced to support the status of one’s own group and to discriminate against the others, but social media has now made it more visible than before. Expression and beliefs based on emotions are emphasised and they are also circulated online. For example, if the candidate removed what he or she had written soon after it had been published during the campaign, it could still remain as a screen capture’, says Reeta Pöyhtäri from the University of Tampere.

    Hate speech is defined in legislation in many European countries, whereas ordinary people use the term with very broad meanings. Not all angry speech is punishable hate speech in the eyes of the law: for example, it has to be targeted at groups in more vulnerable positions, be discriminatory, or contain a threat of violence. The project used the definition of hate speech drawn up by the Council of Europe and the Ethical Journalism Network.

    Hate speech as a conference topic

    According to Salla-Maaria Laaksonen, social media services and platforms such as Facebook and Twitter could, if they wanted to, make use of hate speech identification and in that way influence the activities of internet users.

    ‘There is no other way to extend it to the level of individual citizens.’

    Apart from the changes in Finnish society and culture, the economic situation is also regarded as a factor that increases xenophobia. Changing behaviour that involves hate speech therefore seems a challenging task, despite the monitoring, moderation, attitude campaigns and media education being carried out.

    ‘We should analyse the reasons behind hate speech in more detail. It would be interesting to know who are the people sending those hate messages, what motivates them and how many of them are trolls. Are there any common factors in their circumstances, such as social exclusion, and why do they have to demonstrate their hatred by despising people and by questioning other people’s human dignity’, Kirsi Pimiä says.

    The work done during the campaign will continue in a conference organised by the Association of Internet Researchers in Tartu between 18 and 21 October. One of the workshops will discuss the state of hate speech on the internet, the possibilities and challenges in the identification of hate speech, and the ways to respond to the challenges hate speech poses online. The workshop is organised jointly by the researchers of Aalto University and the Universities of Helsinki and Tampere who participated in the campaign and Open Knowledge Finland.

    ‘It was important for us to reflect on how researchers could contribute to the solution of such an important societal problem. Confrontation takes place at many levels in society today and we would like to challenge the international science community to discuss this phenomenon together in our workshop’, says Matti Nelimarkka who is a researcher at Aalto University and HIIT.

    In addition to the three universities, the Office of the Non-Discrimination Ombudsman and the Finnish League for Human Rights together with researchers from the Advisory Board for Ethnic Relations, Open Knowledge Finland, Futurice and Rajapinta ry participated in the campaign implemented during the municipal elections. The project is linked to four research projects funded by the Academy of Finland and the Kone Foundation.
    https://redice.tv/news/internet-rese...nd-hate-speech

