Assessing Risks of AI Misinformation and Disinformation

Big Tech, including Amazon, Alphabet, Meta and Microsoft, is investing hundreds of billions of dollars to dominate the AI race. This year, shareholder proposals ask tech companies to increase transparency around AI and assess AI-related risks, particularly to children and elections; increase investment in content moderation; report on the human rights impacts of their AI-driven advertising practices; establish principles for ethical AI development; and appoint directors with substantial AI expertise. It is clear that AI will be a topic for shareholder engagement for years to come.

Two proposals co-filed by Open MIC at Meta and Alphabet could have repercussions for democracy around the world. They ask the companies to assess the risks that misinformation and disinformation powered by generative AI (gAI) pose to their operations and finances, as well as to public welfare, including the potentially disastrous effects on the world’s more than 60 elections slated for 2024. These resolutions build on a similar proposal at Microsoft, which earned 21 percent support in December.

Generative AI tools, like chatbots and automated content creation tools, can accelerate research, writing, media production, coding and many other creative tasks. Built on foundation models, such as large language models (LLMs), these tools can accept prompts in one mode, such as text, and produce new outputs in other modes, like audio and video. But when that content is designed to intentionally misinform or deceive people, as the deepfake Biden robocall did ahead of the New Hampshire primary, it can have catastrophic consequences for high-stakes decisions, such as whether to get vaccinated, how to invest one’s savings or how to cast a vote.

Eurasia Group ranked gAI as the third-highest political risk confronting the world, warning that new technologies “will be a gift to autocrats bent on undermining democracy abroad and stifling dissent at home.” Even OpenAI CEO Sam Altman has expressed worry “that these models could be used for large-scale disinformation.”

Further, when campaign content is combined with microtargeting driven by generative AI, it poses a threat to fair elections. During the February elections in Indonesia, the world’s third-largest democracy, campaign workers claimed to have used OpenAI products to “craft hyper-local campaign strategies and speeches,” defying company policies that ban the use of its tools for political campaigns. The tactic appears to have contributed to a victory for Gen. Prabowo Subianto, the country’s defense minister, who also served during the Suharto dictatorship.

Despite acknowledging the risks of gAI, companies continue to release these tools into an information environment already besieged by falsehoods. At the same time, they are hollowing out their Trust and Safety teams, seemingly abdicating their responsibility for the effects of their technologies.

Regulation is coming via the EU’s AI Act and bills proposed in American states, but investor pressure can help accelerate the sustainable integration of AI by signaling to companies that they must take seriously how these technologies threaten the underpinnings of democracy, including the free flow of information, a sense of shared reality and trust in our institutions.


Jessica Dheere
Advocacy Director, Open MIC