
Content Moderation and the U.S. Election: What to Ask, What to Demand - EFF


With the upcoming U.S. elections, major U.S.-based platforms have stepped up their content moderation practices, likely hoping to avoid the blame heaped upon them after the 2016 election, when many held them responsible for siloing users into ideological bubbles—and, in Facebook’s case, for the Cambridge Analytica imbroglio. It’s not clear that social media played a more significant role than many other factors, including traditional media. But the techlash is real enough.

So we can’t blame them for trying, nor can we blame users for asking them to. Online disinformation is a problem that has had real consequences in the U.S. and all over the world—it has been correlated with ethnic violence in Myanmar and India and with Kenya’s 2017 elections, among other events.

But it is equally true that content moderation is a fundamentally broken system. It is inconsistent and confusing, and as layer upon layer of policy is added to a system that employs both human moderators and automated technologies, it is increasingly error-prone. Even well-meaning efforts to control misinformation inevitably end up silencing a range of dissenting voices and hindering the ability to challenge ingrained systems of oppression.

We have been watching closely as Facebook, YouTube, and Twitter, while disclaiming any interest in being “the arbiters of truth,” have all adjusted their policies over the past several months to try to arbitrate lies—or at least flag them. And we’re worried, especially when we look abroad. Already this year, an attempt by Facebook to counter election misinformation targeting Tunisia, Togo, Côte d’Ivoire, and seven other African countries resulted in the accidental removal of accounts belonging to dozens of Tunisian journalists and activists, some of whom had used the platform during the country’s 2011 revolution. While some of those users’ accounts were restored, others—mostly belonging to artists—were not.

Back in the U.S., Twitter recently blocked a New York Post article about presidential candidate Joe Biden’s son on the grounds that it was based on hacked materials, and then lifted the block two days later. After placing limits on political advertising in early September, Facebook promised not to change its policies further ahead of the elections. Three weeks later it announced changes to its political advertising policies and then blocked a range of expression it had previously permitted. In both cases, users—especially users who care about their ability to share and access political information—are left to wonder what might be blocked next.

Given the ever-changing moderation landscape, it’s hard to keep up. But there are some questions users and platforms can ask about every new iteration, whether or not an election is looming. Not coincidentally, many of these overlap with the Santa Clara Principles on Transparency and Accountability in Content Moderation, a set of practices, created by EFF and a small group of organizations and advocates, that social media platforms should adopt to provide transparency about why and how often they take down users’ posts, photos, videos, and other content.

Is the Approach Narrowly Tailored or a Categorical Ban?

Outright censorship should not be the only answer to disinformation online. When tech companies ban an entire category of content, they have a history of overcorrecting and censoring accurate, useful speech—or, even worse, reinforcing misinformation. Any restrictions on speech should be both necessary and proportionate.

Moreover, online platforms have other ways to address the rapid spread of disinformation. For example, they can flag or fact-check content that may be of concern. That approach carries its own problems—again, it means someone—or some machine—has decided what does and does not require further review, and who is and is not an accurate fact-checker. Nonetheless, it has the benefit of leaving speech available for those who wish to receive it.

When a company does adopt a categorical ban, we should ask: Can the company explain what makes that category exceptional? Are the rules to define its boundaries clear and predictable, and are they backed up by consistent data? Under what conditions will other speech that challenges established consensus be removed? Who decides what does or does not qualify as “misleading” or “inaccurate”? Who is tasked with testing and validating the potential bias of those decisions?

Does It Empower Users?

Platforms must address one of the root causes behind disinformation’s spread online: the algorithms that decide what content users see and when. And they should start by empowering users with more individualized tools that let them understand and control the information they see.

Algorithms used by Facebook’s News Feed or Twitter’s timeline make decisions about which news items, ads, and user-generated content to promote and which to hide. That kind of curation can play an amplifying role for some types of incendiary content, despite the efforts of platforms like Facebook to tweak their algorithms to “disincentivize” or “downrank” it. Features designed to help people find content they’ll like can too easily funnel them into a rabbit hole of disinformation.
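To make that curation dynamic concrete, here is a minimal, hypothetical sketch of an engagement-driven ranking function with a crude “downranking” penalty layered on top. The Post fields, the DOWNRANK_PENALTY weight, and the flagging step are invented for illustration; they do not describe any platform’s actual system.

```python
# Hypothetical sketch: engagement-first ranking with a "downrank" penalty.
# All names, weights, and the flagging step are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # e.g. modeled clicks/reactions
    flagged_incendiary: bool     # output of some (imperfect) classifier

DOWNRANK_PENALTY = 0.5  # assumed multiplier; real systems are far more complex

def score(post: Post) -> float:
    """Engagement-driven score, dampened if the post was flagged."""
    base = post.predicted_engagement
    return base * DOWNRANK_PENALTY if post.flagged_incendiary else base

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by score; highly engaging content still rises."""
    return sorted(posts, key=score, reverse=True)
```

Even in this toy version, the penalty only dampens amplification: a sufficiently engaging flagged post can still outrank accurate but less provocative content, which is the dynamic described above.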

Users shouldn’t be held hostage to a platform’s proprietary algorithm. Instead of serving everyone “one algorithm to rule them all” and giving users just a few opportunities to tweak it, platforms should open up their APIs to allow users to create their own filtering rules for their own algorithms. News outlets, educational institutions, community groups, and individuals should all be able to create their own feeds, allowing users to choose who they trust to curate their information and share their preferences with their communities.
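To make that proposal concrete, here is a minimal sketch of what user-authored filtering rules over an open feed API could look like. The FeedItem fields, the rule format, and the fetch_feed_items call are hypothetical, not a description of any existing platform API.

```python
# Hypothetical sketch of user-defined feed filtering over an open API.
# The item fields, rule semantics, and fetch call are invented for illustration.
from typing import Callable

FeedItem = dict  # e.g. {"author": ..., "topic": ..., "text": ...}
Rule = Callable[[FeedItem], bool]

def build_feed(items: list[FeedItem], rules: list[Rule]) -> list[FeedItem]:
    """Keep only items that pass every rule the user (or a curator they
    trust) has defined, instead of one platform-wide algorithm."""
    return [item for item in items if all(rule(item) for rule in rules)]

# Rules could be authored by individuals, news outlets, or community groups
# and shared much as block lists are shared today.
my_rules: list[Rule] = [
    lambda item: item.get("topic") != "unverified-rumors",
    lambda item: item.get("author") not in {"account_i_muted"},
]

# feed = build_feed(fetch_feed_items(api_token), my_rules)  # fetch_feed_items is hypothetical
```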

In addition, platforms should examine the parts of their infrastructure that are acting as a megaphone for dangerous content and address that root cause of the problem rather than censoring users.

During an election season, the mistaken deletion of accurate information and commentary can have outsize consequences. Absent exigent circumstances, companies must notify the user and give them an opportunity to appeal before the content is taken down. If they choose to appeal, the content should stay up until the question is resolved. Smaller platforms dedicated to serving specific communities may want to take a more aggressive approach. That’s fine, as long as Internet users have a range of meaningful options with which to engage.

Is It Transparent?

The most important parts of the puzzle here are transparency and openness. Transparency about how a platform’s algorithms work, and tools to allow users to open up and create their own feeds, are critical for wider understanding of algorithmic curation, the kind of content it can incentivize, and the consequences it can have.

In other words, actual transparency should allow outsiders to see and understand what actions are performed, and why. Meaningful transparency inherently implies openness and accountability, and cannot be satisfied by simply counting takedowns. That is to say, there is a difference between corporately sanctioned ‘transparency,’ which is inherently limited, and meaningful transparency that empowers users to understand Facebook’s actions and hold the company accountable.

Is the Policy Consistent With Human Rights Principles?

Companies should align their policies with human rights norms. In a paper published last year, David Kaye—the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression—recommends that companies adopt policies that allow users to “develop opinions, express themselves freely and access information of all kinds in a manner consistent with human rights law.” We agree, and we’re joined in that opinion by a growing international coalition of civil liberties and human rights organizations.

Content Moderation Is No Silver Bullet

We shouldn’t look to content moderators to fix problems that properly lie with flaws in the electoral system. You can’t tech your way out of problems the tech didn’t create. And even where content moderation has a role to play, history tells us to be wary. Content moderation at scale is impossible to do perfectly, and nearly impossible to do well, even under the most transparent, sensible, and fair conditions. That is one of many reasons none of these policy choices should be legal requirements. It inevitably involves difficult line-drawing and will be riddled with mistakes and with decisions that many users will disagree with. However, there are clear opportunities to make improvements, and it is far past time for platforms to put them into practice.
