Twitter will remove or label “manipulated images and videos” on its platform in a bid to control disinfo, it has announced – though its new policy reads more like it’s setting itself up as a clairvoyant arbiter of truth.
“Synthetic or manipulated media that are likely to cause harm” will be removed – or at least plastered with warning labels – under new rules announced by Twitter on Tuesday.
Citing overwhelming demand for stricter content regulations, perhaps in a preemptive attempt to excuse what some have already interpreted as overreach, the microblogging platform laid out a lengthy list of criteria that would supposedly be considered before removing or labeling a tweet.
Media is considered “synthetic or manipulated” if it has been edited to significantly change its meaning, sequence, or other attributes, and of course ‘deepfakes’ are right out. But “any visual or auditory information… that has been added or removed,” including subtitles or audio overdubbing, can also get content flagged. Taken literally, this sweeping language makes even content that has simply been translated from another language a potential target.
A tweet may be labeled as “deceptive” if the context in which it is shared “could result in confusion or misunderstanding” or “suggests a deliberate intent to deceive people.” No one can control another user’s (mis)interpretation of their tweet – some people are just easily confused – and similar rules have already been used to target politically charged satire and memes. No matter how clearly labeled, one man’s joke is inevitably declared another man’s fake news. But a deliberate intent to deceive people? How does Twitter propose to determine who is telling an innocent joke and who is maliciously trolling?
Making the viewer’s sense of humor the responsibility of the poster is likely to have a profoundly chilling effect on memes and other political humor, already besieged by ‘fact-checkers’ sinking their fangs into everything from the parody site Babylon Bee to the obviously photoshopped image of US President Donald Trump awarding a Medal of Honor to a terrorist-killing military dog. With the Pentagon itself taking aim at “polarizing viral content” – i.e., political memes – and so-called “malicious intent” in a sinister project announced in September, Twitter may have unwittingly volunteered itself as the first battlefield in the War on Memes.
While tweets containing synthetic or deceptive content will merely get slapped with a warning label when the new rules take effect on March 5, content “likely to impact public safety or cause serious harm” is singled out for removal. This seemingly uncontroversial rule becomes menacingly vague on closer examination, listing “targeted content that includes tropes, epithets, or material that aims to silence someone” and “threats to the privacy or ability of a person or group to freely express themselves” among the categories of banned speech.
While this would seem to outlaw the tactics of groups like Sleeping Giants, whose explicit goal is to deplatform those it unilaterally deems ‘fascists’ by ginning up outrage mobs against them, Twitter is unlikely to defend the victims of such groups, if past behavior is any indication.
Which raises the question: what constitutes ‘serious harm’, or for that matter ‘public safety’, and who determines what is likely to result in it? Twitter has allowed faux-Iranian bots operated by the anti-Tehran Mojahedin-e Khalq (MEK) cult to run rampant on the platform, demanding an American invasion of “their” country – some of whose tweets have been retweeted by Trump himself as “proof” that the Iranian people want regime change.
The new rules leave a wide swath of content open to interpretation, giving Twitter carte blanche to determine the intent and likely repercussions of any given tweet. While no one wants to be flooded with deepfakes or other truly deceptive content, especially during an election season, in practice similar rules have been applied unevenly to silence political and social viewpoints that diverge from ‘woke’ centrist orthodoxy. Giving Twitter the power to determine both truth and intention confers an authority the platform has already shown it cannot handle responsibly.
By Helen Buyniski
Helen Buyniski is an American journalist and political commentator at RT.