In an effort to fight disinformation as well as protect public figures online, Meta recently announced that it will be rolling out a free digital security and safety program dubbed the “Journalist Safety Hub.” Several updates have been implemented to Facebook’s Community Standards “including expanding protections for public figures such as journalists and human rights defenders,” the company explained.
“We now remove more types of harmful content such as claims about sexual activity, comparisons to animals, and attacks through negative physical descriptions. Our policies now also provide stronger protections against gender-based harassment for everyone, including public figures,” Meta elaborated.
Over the past year, Meta also launched new policies to curb "mass harassment and brigading." "This includes attacks against dissidents—even if the content on its own wouldn't violate our policies. We also remove state-linked and adversarial networks of accounts, Pages and Groups that work together to harass or try to silence people," it said.
Meta noted that these initiatives are guided by its independent Human Rights Impact Assessment Report on the Philippines, published in 2021. Putting these strategies into action, the tech conglomerate has since taken down more than 400 accounts, pages, and groups in the run-up to the May 2022 elections.
David Agranovich, Meta's Threat Disruption Director, affirmed that these accounts were found to be involved in coordinated harm, bullying, harassment, hate speech, misinformation, and violence, all in violation of the platform's community standards. "We've removed a coordinated violating network that claims credit for bringing down website[s] and defacing them," said Agranovich.
“The network included over 400 accounts, pages, and groups that work together to systematically violate multiple policies on our platform against coordinated harm, bullying, harassment, hate speech, misinformation, and incitement to violence.” The company is also looking into trends that are considered to be “lower sophistication but still problematic.”
These threats include "context switching," where a Facebook Page changes its focus to grow its audience; deceptive efforts by certain groups to monetize the election by selling merchandise or redirecting users to other websites; and inauthentic engagement. Groups identified as "dangerous" have also been removed, according to Meta. "This included a network of Facebook Pages, groups, and accounts maintained by the New People's Army (NPA), a banned terrorist organization, for violating our policies prohibiting groups that have a violent mission or are engaging in violence."
Ad transparency was likewise ramped up last month, after Meta's tools were expanded to cover other election-related issues such as immigration, crime, and the economy, according to Meta's Regional Program Manager for Strategic Response in APAC, Aidan Hoy. When posting paid ads, advertisers will be required to confirm their identity and location, and a "Paid for by" disclaimer will appear on their ads. Facebook affirmed that it is coordinating with the Commission on Elections on other information campaigns. An Elections Operations Center will also be activated to monitor any potential abuse of its services related to the elections.