SciTech

Facebook is removing COVID-19 related misinformation that can cause 'imminent physical harm'


Facebook has updated its existing community standards policies to help prevent the spread of misinformation related to coronavirus disease 2019 (COVID-19).

In a virtual press briefing on Tuesday, Facebook's Product Policy Manager on Misinformation, Alice Budisatrijo, explained how the team updated each of its policies to remove COVID-19 misinformation that can cause "imminent physical harm."

Its "Coordinated Harm" policy was updated to remove content that encourages the further spread of COVID-19, such as posts that promote violating social distancing or encourage people to go out of their homes and infect others.

Meanwhile, under its "Hate Speech" policy, Facebook will remove content that claims people of a specific race or religion are responsible for the existence or spread of COVID-19.

Facebook has also updated its "Bullying and Harassment" policy, prohibiting content that claims a private individual has contracted the disease, or is violating lockdown restrictions, when that information is not publicly available or self-declared.

Facebook will also ban advertising of COVID-19-related products that claim to guarantee prevention of or a cure for the disease, including face masks, test kits, and medical treatments.

According to Budisatrijo, Facebook consulted the World Health Organization (WHO) and other health authorities to identify categories of COVID-19-related misinformation that might cause harm to people.

Facebook has identified several categories of content being removed from the platform, including false claims about cures, treatments, tests, transmission, and the severity of the outbreak.

"For example, I'm sure you have seen content that says garlic or ginger or herbal tea or warm water can cure COVID. We've been removing a bunch of those," she said.

Budisatrijo said Facebook has also started removing memes claiming that if you can hold your breath for 10 seconds, you don't have COVID-19, and similar posts.

According to Budisatrijo, in March alone Facebook removed hundreds of thousands of pieces of content containing harmful COVID-19 misinformation under those categories.

Facebook users who have engaged with false COVID-19 content, whether by liking, reacting, or sharing, will see a pop-up warning at the top of their News Feeds linking them to WHO's myth-debunking page.

Facebook works with 60 fact-checking teams from all over the world, covering 50 languages.

The company has also given a $2-million grant to fact-checkers to help expand their capacity, as the platform has been flooded with content since the beginning of the COVID-19 pandemic.

Facebook's fact-checking partners have written 4,000 debunking articles, and based on those, the company has displayed warning labels on 40 million pieces of content.

"We believe that these warning labels worked because 95% of the time, people who see the warning label choose not to click and read the article or look at the false picture or video," Budisatrijo said.

Although Facebook relies largely on automated systems to detect and remove violating content, the company says it also reviews harmful content reported by users, prioritizing reports with the greatest potential to harm the community.

Like Facebook, YouTube has taken action against misinformation amid the pandemic, taking down video content promoting "medically unproven methods" to prevent COVID-19. — LA, GMA News