Investors continue to file shareholder resolutions that mirror the public debate about the negative influence of electronic media on public and private discourse and behavior—and the related risks to companies. As in 2018, resolutions concern hate speech and the Internet as well as privacy and cybersecurity.
Content management: Arjuna Capital has returned to the three big social media companies—Alphabet, Facebook and Twitter—with a resolution similar to last year’s on problematic content, issues it first raised two years ago. It has been joined in its critiques of the media firms by leading pension fund NYSCRF and the Illinois Treasurer’s Office, as well as Harrington Investments.
At Alphabet, the proposal is a resubmitted request for a report “on major global content management controversies (including election interference)... reviewing the efficacy of governance, oversight and policies on content disseminated on its platform and assessing the magnitude of any risks posed to the company’s finances, operations, and reputation.” The resolution earned 12.8 percent in 2018, a notable showing at a company where the founders control a large share of the stock.
The resolution notes concerns about Russian interference in U.S. elections and concludes the company’s response has been “insufficient” and puts long-term value at risk. The proposal reiterates the concept of the company as an “information fiduciary” and says Alphabet must “demonstrate how it responsibly manages content on its platform.” The proposal suggests the scrutiny in Congress and elsewhere may prompt the regulation of tech companies as public utilities.
At Facebook, the resolved clause is the same. Last year it earned 10.3 percent, after a much lower vote of 0.8 percent in 2017. The proposal points out a $100 billion decline in Facebook’s market capitalization after news broke in March 2018 that personal data had been used by Cambridge Analytica, followed by another $100 billion decline after its July quarterly earnings call reported growing costs and declining revenue growth. Arjuna concludes these risks are being priced into the market and that the company’s “approach to content governance has proven ad hoc, ineffectual, and poses continued risk to shareholder value.” More than 40 percent of young Americans have deleted the company’s app and even more now use it less frequently, it says.
The Facebook proposal urges consideration of the Santa Clara Principles about content moderation and transparency, which call for release of the following metrics:
Numbers (posts removed, accounts suspended)
Notices (of content removals, account suspensions)
Appeals (for users impacted by removals, suspensions)
FACEBOOK INVESTORS PRESS FOR CONTENT GOVERNANCE
Managing Partner, Arjuna Capital
News of Cambridge Analytica’s misappropriation of millions of Facebook users’ data preceded a decline in Facebook’s stock market capitalization of over $100 billion in March 2018. Another $100 billion-plus decline in market value—a record-setting drop—came in July 2018 after Facebook’s quarterly earnings report reflected increasing costs and decreasing revenue growth.
Recent efforts to address these concerns are insufficient, in Arjuna’s view, and continue to raise fundamental concerns about “democracy, human rights, and freedom of expression”—involving violence against the Rohingya and refugees in Germany, as well as racist or sexist posts.
A slightly different list of concerns is in the Twitter proposal, which last year earned 35.7 percent support. The resolution seeks a report “reviewing the efficacy of its enforcement of its terms of service related to content policies and assessing the risks posed by content management controversies (including election interference, fake news, hate speech and sexual harassment) to the company’s finances, operations and reputation.” The Twitter proposal presents concerns similar to those in the other two proposals, about management of the platform and Russian interference in U.S. elections and misinformation, as well as “hate speech that can threaten marginalized groups and undermine our democracy.” The resolution criticizes Twitter’s reported use in 2017 of racist and sexist words for ad targeting, and expresses concern about fake user accounts and tweets from bots. The supporting statement says the requested report should “include assessment of the scope of platform abuses, impacts on free speech, and address related ethical concerns.”
Network access: Harrington Investments filed and then withdrew a proposal to Verizon Communications about alleged “throttling” of its network during the 2018 California wildfires. The proposal sought a report:
with a summary analysis on whether our Company “throttled” service during the 2018 Mendocino Complex Fire and other similar 2018 fire events, the Company’s assessment of whether any such throttling interfered with fire safety personnel’s ability to function effectively in emergency firefighting activities, and any measures the Company is taking to prevent similar actions in the future to reduce the risk to our Company’s reputation and corporate responsibility profile.
Harrington noted in its withdrawal the company’s review of its services during the wildfires. Verizon had also challenged the proposal at the SEC, arguing it related to ordinary business since it addressed products, pricing and customer relations. The challenge described a board review of company service during the fires and the company’s plan to prevent similar problems in the future.
Privacy: One more new resolution takes up additional concerns about privacy at Amazon.com. The Sisters of St. Joseph of Brentwood want the company to “prohibit sales of facial recognition technology to government agencies unless the Board concludes, after an evaluation using independent evidence, that the technology does not cause or contribute to actual or potential violations of civil and human rights.” The proposal says privacy and civil liberties risks associated with the company’s Rekognition facial analysis application justify the requested prohibition. The company has challenged it at the SEC, arguing it is not significantly related to its business and is ordinary business. Critics contend that the Rekognition program can enable surveillance that may violate human rights and target minority groups. The proponents point out that Amazon Web Services provides cloud computing services to the U.S. Immigration and Customs Enforcement agency and may be marketing Rekognition to ICE, “despite concerns Rekognition could facilitate immigrant surveillance and racial profiling.” The supporting statement says the company should assess:
The extent to which such technology may endanger or violate privacy or civil rights, and disproportionately impact people of color, immigrants, and activists, and how Amazon would mitigate these risks.
The extent to which such technologies may be marketed and sold to repressive governments, identified by the United States Department of State Country Reports on Human Rights Practices.
(Also see p. 53 for proposal at Northrop Grumman on its human rights policy and biometric identification.)