Facebook is using a machine learning model to stop misinformation

Facebook announced today in a blog post a new pilot program built to leverage the Facebook community: it will allow fact-checkers to quickly see whether a representative group of Facebook users found a claim to be corroborated or contradicted.

Misinformation and fake news spread widely in general, and social media is just another platform that adds fuel to the fire. It is a widespread problem that causes confusion about current issues and sometimes wreaks havoc on peace and harmony, ultimately damaging societies.

Facebook will hire community reviewers who will work as researchers, finding information that can contradict the most obvious online hoaxes or corroborate other claims. However, they are not authorized to make final decisions themselves; instead, their findings will be shared with third-party fact-checkers as additional context for their own official review. These reviewers are not Facebook employees but will be hired as contractors through Facebook partners.

The growth of machine learning is accelerating and is giving organizations powerful ways to utilize the vast amounts of data they collect. Facebook is using the same approach in its pilot program to support fact-checking. Under this program, Facebook's machine learning model will identify potential misinformation using a variety of signals. These include comments on the post that express disbelief, and whether a post is being shared by a page that has spread misinformation in the past. If there is an indication that a post may be misinformation, it will be sent to a diverse group of community reviewers.
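
As a rough illustration of how such signals might be combined, here is a minimal Python sketch. The signal names, weights, and threshold are hypothetical assumptions for illustration only; Facebook has not published the features or internals of its model.

```python
# Illustrative sketch: combining simple signals to flag potential misinformation.
# All names, weights, and the threshold below are assumptions, not Facebook's model.

from dataclasses import dataclass


@dataclass
class PostSignals:
    disbelief_comment_ratio: float  # share of comments expressing disbelief, in [0, 1]
    sharer_misinfo_history: float   # prior misinformation rate of sharing pages, in [0, 1]


def misinformation_score(signals: PostSignals) -> float:
    """Combine the signals into a single score in [0, 1] (illustrative weights)."""
    return 0.6 * signals.disbelief_comment_ratio + 0.4 * signals.sharer_misinfo_history


def should_route_to_reviewers(signals: PostSignals, threshold: float = 0.5) -> bool:
    """Send the post to community reviewers when the combined score crosses the threshold."""
    return misinformation_score(signals) >= threshold


if __name__ == "__main__":
    post = PostSignals(disbelief_comment_ratio=0.7, sharer_misinfo_history=0.4)
    print(should_route_to_reviewers(post))  # True: score 0.58 >= 0.5
```

In practice such a score would come from a trained model rather than fixed weights, but the pattern is the same: several weak signals are combined, and posts above a threshold are routed to human review.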

These community reviewers will be asked to identify the main claim in a post and then research other sources that either support or refute that claim. Fact-checking partners will be able to see the collective assessment of community reviewers as a signal when selecting which stories to review and rate.
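
To make the idea of a "collective assessment" concrete, here is a minimal sketch that aggregates individual reviewer verdicts by majority vote. The verdict labels and the aggregation rule are illustrative assumptions, not Facebook's published method.

```python
# Illustrative sketch: aggregating community reviewers' verdicts into the
# collective assessment shown to fact-checking partners. The labels and the
# simple majority rule are assumptions for illustration.

from collections import Counter


def collective_assessment(verdicts: list[str]) -> tuple[str, float]:
    """Return the majority verdict and the fraction of reviewers who agreed.

    Each verdict is one of "corroborated", "contradicted", or "unclear".
    """
    counts = Counter(verdicts)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(verdicts)


if __name__ == "__main__":
    reviews = ["contradicted", "contradicted", "unclear", "contradicted", "corroborated"]
    label, agreement = collective_assessment(reviews)
    print(f"{label} ({agreement:.0%} agreement)")  # contradicted (60% agreement)
```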

Facebook started exploring this idea earlier this year. However, it has been working on the broader problem for the last two years, expanding its efforts to fight false news with both technology and people. To spot false news, Facebook deployed various new measures that give users more context about the stories they see in News Feed, and it has grown its third-party fact-checking program to include 45 certified fact-checking partners who review content in 24 languages.

Henry Silverman, Operational Specialist at Facebook, says in a Facebook blog post, "With more than a billion things posted to Facebook each day, we need to find additional ways to expand our capacity. The work our professional fact-checking partners do is an important piece of our strategy." He adds, "But there are scale challenges involved with this work. There simply aren't enough professional fact-checkers worldwide, and like all good journalism, fact-checking, especially when it involves investigation or more nuanced or complex claims, takes time."

Facebook wants to find solutions that support original reporting, promote trusted information, and allow people to express themselves freely. Its goal is to help fact-checkers address false content faster.

Facebook plans to pilot this process in the US over the coming months and to closely evaluate how it is working through its own research, help from academics, and feedback from its third-party fact-checking partners. Facebook believes that by combining the expertise of third-party fact-checkers with a group of community-based reviewers, it can evaluate misinformation faster and make even more progress in reducing its prevalence on Facebook.