The Mujahedin-e-Khalq (MEK), an Albania-based militant organization dedicated to the overthrow of the Iranian government, has long played an unusual and outsized role in online discourse about American foreign policy toward Iran. The group, which is classified as a terrorist organization by Iran and Iraq, is reportedly behind a fake Twitter “sockpuppet” persona named Heshmat Alavi with over 80,000 followers, who has been featured in publications such as Forbes, The Hill, and The Daily Caller. Alavi’s account frequently shares personal criticism of foreign policy specialists who are perceived as dovish toward Iran, and despite its artificiality, its content regularly percolates into the wider anti-Iran discourse, amplified by real people on Twitter. While Twitter prohibits coordinated inauthentic behavior, standalone inauthentic accounts do not violate the platform’s terms of service, effectively enshrining a place for this runaway sockpuppet success in the Middle East’s political milieu.
MEK does, however, also carry out coordinated inauthentic influence operations on social media; one such troll farm, operating in Farsi, Arabic, and English, was taken down on Facebook in March. The network, which primarily targeted Iranians and accumulated a global following of about 120,000, included 128 Facebook accounts, 41 Pages, 21 Groups, and 146 Instagram accounts. Facebook also took down networks of several dozen Facebook and Instagram profiles from Iran targeting Israel; about two dozen accounts from Egypt targeting Ethiopia, Sudan, and Turkey; and several dozen domestically oriented accounts in Israel.
The MEK network, which was the largest taken down in March, used fake personas with faces created by an artificial intelligence technique known as a generative adversarial network, or GAN — the seventh network of its kind that Facebook has identified and taken down. A GAN pits two AIs, a generative network and a discriminative network, against one another: the discriminative network is trained to distinguish real entries in a data set from artificial ones, while the generative network learns to produce new, artificial entries that fool the discriminative network into classifying them as genuine. The most visible use of this technology has been to generate fake human faces with photorealistic verisimilitude, but it has also been used to create fake art, fake chemical compounds, and (of course) fake cats.
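For readers curious what this adversarial training loop looks like in practice, the following is a toy, NumPy-only sketch — not any real face-generation system. A one-dimensional “generator” learns to imitate samples from an illustrative target distribution (here, a normal distribution centered at 4) by fooling a logistic-regression “discriminator”; both are updated with hand-derived gradients. All names, the target distribution, and the hyperparameters are assumptions chosen for clarity, not details of the networks Facebook identified.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: a linear map G(z) = w_g * z + b_g from random noise to a sample.
w_g, b_g = 1.0, 0.0
# Discriminator: logistic regression D(x) = sigmoid(w_d * x + b_d),
# outputting the probability that x is a genuine entry.
w_d, b_d = 0.0, 0.0

lr = 0.05    # learning rate
batch = 64   # minibatch size

for step in range(2000):
    # "Real" entries from the data set the generator tries to imitate.
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w_g * z + b_g

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    g_real = d_real - 1.0   # cross-entropy gradient w.r.t. logit, real labeled 1
    g_fake = d_fake         # cross-entropy gradient w.r.t. logit, fake labeled 0
    w_d -= lr * np.mean(g_real * real + g_fake * fake)
    b_d -= lr * np.mean(g_real + g_fake)

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    fake = w_g * z + b_g
    d_fake = sigmoid(w_d * fake + b_d)
    g_logit = d_fake - 1.0        # non-saturating generator loss gradient
    g_x = g_logit * w_d           # chain rule back through the discriminator
    w_g -= lr * np.mean(g_x * z)
    b_g -= lr * np.mean(g_x)

# After training, the generator's samples should have drifted toward the
# real distribution's mean of 4.
samples = w_g * rng.normal(0.0, 1.0, 1000) + b_g
```

Real GANs replace these one-parameter linear models with deep convolutional networks trained on millions of images — the StyleGAN family is behind most of the photorealistic fake faces seen in influence operations — but the adversarial push-and-pull between the two networks is the same.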
While GANs and other technologies like deepfakes raise alarming prospects for the future of disinformation, researchers have sought to temper some of the public’s concern. Small irregularities often slip past the discriminative network, such as deformed ears, asymmetrical glasses, mismatched earrings, unusual clothing, or bizarrely unreal backgrounds. Sometimes these abnormalities are readily apparent, even jarring, to a human viewer; often, though, they can be quite subtle, especially when a human intermediary screens the algorithm’s output.
By Michael Sexton
Fellow and Director of MEI’s Cyber Program