Content Moderation Becoming a Big Business with AI Enlisted to Help 

September 15, 2021 | Artificial Intelligence News
Moderation of social media and website content is becoming a big business, with AI at the center of a challenging automation task.

By John P. Desmond, AI Trends Editor  

Content moderation is becoming a bigger business, expected to reach a market volume of $11.8 billion by 2027, according to estimates from Transparency Market Research.

The market is being fueled by exponential growth in user-generated content (UGC) in the form of short videos, memes, GIFs, live audio and video, and news. Because some percentage of the uploaded content is fake news or malicious or violent material, social media sites are employing armies of moderators equipped with AI and machine learning tools to attempt to filter out inappropriate content.

Facebook has employed Accenture to help clean up its content, in a contract valued at $500 million annually, according to a recent account in The New York Times, based on extensive research into the history of content moderation at the social media giant.  

Julie Sweet, CEO, Accenture

The Times reported that Accenture CEO Julie Sweet ordered a review of the contract after her appointment in 2019, out of concern for what was then seen as growing ethical and legal risks, which could damage the reputation of the multinational professional services company.  

Sweet ordered the review after an Accenture worker joined a class action lawsuit to protest the working conditions of content moderators, who review hundreds of Facebook posts in a shift and have experienced depression, anxiety and paranoia as a result. The review did not result in any change; Accenture employs more than a third of the 15,000 people Facebook has hired to inspect its posts, according to the Times report.  

Facebook CEO Mark Zuckerberg has had a strategy of employing AI to help filter out the toxic posts; the thousands of content moderators are hired to remove inappropriate messages the AI does not catch.   

Cori Crider, Cofounder, Foxglove

The content moderation work and the relationship of Accenture and Facebook around it have become controversial. “You couldn’t have Facebook as we know it today without Accenture,” stated Cori Crider, a co-founder of Foxglove, a law firm that represents content moderators, to the Times. “Enablers like Accenture, for eye-watering fees, have let Facebook hold the core human problem of its business at arm’s length.” 

Facebook has hired at least 10 consulting and staffing firms, plus a number of subcontractors, to filter its posts since 2012, the Times reported. Pay rates vary: US moderators generate $50 or more per hour in billings for Accenture, while moderators in some US cities receive starting pay of $18 per hour, the Times reported.

Insights From an Experienced Content Moderator  

The AI catches about 90% of the inappropriate content. One supplier of content moderation systems is Appen, based in Australia, which works with its clients on machine learning and AI systems. In a recent blog post on its website, Justin Adam, a program manager overseeing several content moderation projects, offered some insights.   

The first is to update policies as real-world experience dictates. “Every content moderation decision should follow the defined policy; however, this also necessitates that policy must rapidly evolve to close any gaps, gray areas, or edge cases when they appear, and particularly for sensitive topics,” Adam stated. He recommended monitoring content trends specific to each market to identify policy gaps.

Second, be aware of the potential demographic bias of moderators. “Content moderation is most effective, reliable, and trustworthy when the pool of moderators is representative of the general population of the market being moderated,” he stated. He recommended sourcing a diverse group of moderators as appropriate.    

Third, develop a content management strategy and have expert resources to support it. “Content moderation decisions are susceptible to scrutiny in today’s political climate,” Adam stated. His firm offers services to help clients employ a team of trained policy subject-matter experts, establish quality-control review, and tailor quality analysis and reporting.

Techniques for Automated Content Moderation with AI  

The most common type of content moderation is an automated approach that employs AI, natural language processing and computer vision, according to a blog post from Clarifai, a New York City-based AI company specializing in computer vision, machine learning, and the analysis of images and videos.   

AI models are built to review and filter content. “Inappropriate content can be flagged and prevented from being posted almost instantaneously,” the company suggested, supporting the work of human moderators.
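
To make that flow concrete, here is a minimal sketch of such an automated first pass, assuming a pretrained Hugging Face text classifier; the model name unitary/toxic-bert and the 0.5 score threshold are illustrative assumptions, not details the company has published:

```python
# A minimal sketch of an automated first pass over user posts.
# The model name and the 0.5 threshold are illustrative assumptions.
from transformers import pipeline

# Load a pretrained toxicity classifier from the Hugging Face hub.
moderator = pipeline("text-classification", model="unitary/toxic-bert")

def review_post(text: str, threshold: float = 0.5) -> str:
    """Flag a post for human review if the model scores it as toxic."""
    result = moderator(text)[0]  # e.g. {"label": "toxic", "score": 0.98}
    if result["score"] >= threshold:
        return "flagged"   # hold the post and route it to a human moderator
    return "approved"      # allow the post immediately

print(review_post("Have a great day!"))  # approved
```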

Techniques for content moderation include image moderation, which uses text classification and computer vision-based visual search techniques. Optical character recognition (OCR) can identify text within an image so that it can be moderated as well. The filters look for abusive or offensive words, objects, and body parts within all types of unstructured data. Content flagged as inappropriate can be sent for manual moderation.
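
As one illustration of the OCR step, the sketch below extracts text embedded in an image and checks it against a small blocklist; pytesseract is a common open-source OCR wrapper, and the blocklist here is a placeholder, not a real policy list:

```python
# Sketch: moderate text embedded in an image via OCR plus a word blocklist.
from PIL import Image
import pytesseract

BLOCKLIST = {"scam", "violence"}  # placeholder terms, not a real policy list

def image_text_is_flagged(path: str) -> bool:
    """Return True if OCR'd text from the image contains a blocked word."""
    text = pytesseract.image_to_string(Image.open(path)).lower()
    return any(word in BLOCKLIST for word in text.split())

if image_text_is_flagged("meme.png"):  # hypothetical input file
    print("Send to manual moderation")
```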

Another technique, for video moderation, requires that the video be reviewed frame by frame and that the audio be screened as well. For text moderation, natural language processing algorithms are used to summarize the meaning of the text or to gain an understanding of the emotions it expresses. Using text classification, categories can be assigned to help analyze the text or its sentiment.
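
A rough sketch of the frame-by-frame approach follows, sampling about one frame per second with OpenCV; the is_inappropriate() frame classifier is a hypothetical stand-in for whatever model does the actual screening:

```python
# Sketch: sample roughly one frame per second and screen each sampled frame.
import cv2

def video_is_flagged(path: str, is_inappropriate) -> bool:
    """is_inappropriate is a hypothetical frame classifier: ndarray -> bool."""
    cap = cv2.VideoCapture(path)
    fps = int(cap.get(cv2.CAP_PROP_FPS)) or 30  # fall back if FPS is unreadable
    frame_index, flagged = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video or read error
            break
        if frame_index % fps == 0 and is_inappropriate(frame):
            flagged = True
            break
        frame_index += 1
    cap.release()
    return flagged
```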

Sentiment analysis identifies the tone of the text, categorizing it as anger, bullying, or sarcasm, for example, and then labeling it as positive, negative, or neutral. The named entity recognition technique finds and extracts names, locations, and companies. Companies use it to track how many times their brand, or a competitor’s brand, is mentioned, or how many people from a given city or state are posting reviews. More advanced techniques can rely on built-in databases to make predictions about whether the text is appropriate, or is fake news or a scam.
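
As a small illustration of named entity recognition applied to brand tracking, the sketch below tallies organization mentions with spaCy; en_core_web_sm is a standard small English model, and the example posts are invented:

```python
# Sketch: count brand (organization) mentions across posts with spaCy NER.
# Requires the model: python -m spacy download en_core_web_sm
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

def count_org_mentions(posts):
    """Tally how often each organization name is recognized in the posts."""
    counts = Counter()
    for post in posts:
        for ent in nlp(post).ents:
            if ent.label_ == "ORG":  # organizations / companies
                counts[ent.text] += 1
    return counts

posts = ["Accenture reviews posts for Facebook.", "Facebook hired Accenture."]
print(count_org_mentions(posts))
```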

With little doubt, AI is needed in online content moderation for it to have a chance of being successful. “The reality is, there is simply too much UGC for human moderators to keep up with, and companies are faced with the challenge of effectively supporting them,” the Clarifai post states. 

Limitations of Automated Content Moderation Tools

Automated content moderation tools have limited accuracy and reliability when the content involves extremist material or hate speech, due to nuanced variations in speech across different groups and regions, according to a recent account from New America, a research and policy institute based in Washington, DC. Developing comprehensive datasets for these categories of content was called “challenging,” and building a tool that can be reliably applied across different groups and regions was described as “extremely difficult.”

In addition, the definitions of what types of speech fall into inappropriate categories are not clear.

Moreover, “Because human speech is not objective and the process of content moderation is inherently subjective, these tools are limited in that they are unable to comprehend the nuances and contextual variations present in human speech,” according to the post. 

In another example, an image recognition tool could identify an instance of nudity, such as a breast, in a piece of content. However, it is not likely that the tool could determine whether the post depicts pornography or perhaps breastfeeding, which is permitted on many platforms.  

Read the source articles and information from Transparency Market Research, in The New York Times, in a blog post on the website of Appen, a blog post on the website of Clarifai, and an account from New America.