Investors cannot rely on established regulation to ensure the responsible development and use of AI. Our objective at abrdn is to work with the companies we invest in to encourage a future where AI delivers sustainable benefits for shareholders and other stakeholders.
Heightened investor scrutiny of AI practices has become evident in shareholder resolutions filed at the annual meetings of several companies, from technology giants to entertainment businesses. These annual meetings allowed us to connect our research on AI with targeted engagement, voting, and, where necessary, public statements to encourage change.
AI has rapidly emerged as a transformative force with the potential to deliver substantial benefits to users and society. Its ability to enhance efficiency, productivity, and innovation is promising. However, it also poses risks. AI has no inherent concept of material value or ethics. Without clear governance and oversight, its outcomes may diverge from important qualitative objectives and threaten sustainable value creation.
That is why we consider it crucial that companies with significant exposure to AI demonstrate:
- Robust governance and oversight
- Strong ethical guidelines
- Appropriate due diligence
- Transparent practices
Where AI is likely to significantly impact operations and labor relations, we believe it is prudent for companies to demonstrate a responsible approach at the earliest opportunity. Collaborating with the workforce is not just a strategy, but a necessity that can enable companies to mitigate adverse outcomes and avoid costly disruption to labor relations.
Governance and oversight
Some companies may be able to demonstrate robust governance and oversight mechanisms through existing structures. However, those with extensive and complex operations involving both the development and use of AI technologies could benefit from dedicated governance structures.
We considered Amazon to be such a company and supported a shareholder resolution requesting the creation of a board committee on AI. We also made a public statement to encourage the company and other investors to consider how governance and oversight could be developed.
Governance and oversight structures always need to reflect a company’s circumstances. In some cases, a dedicated structure for AI could consolidate and enhance oversight, helping a company ensure a consistent approach across its operations.
Ethical guidelines
Ethical guidelines, or responsible AI principles, offer companies another way to ensure consistency. They set out the principles guiding how a company will develop and deploy AI in an ethical, responsible, and trustworthy manner. In February, we voted for an AI resolution at Apple. The resolution asked Apple to prepare a transparency report and disclose any ethical guidelines the company has adopted regarding its use of AI technology.
We met Apple and the resolution proponent to discuss their opposing arguments in more detail. In our view, the company is exposed to various risks associated with AI. The requested disclosure, including ethical guidelines, could provide shareholders with evidence of an approach that can protect long-term value.
We were concerned that Apple had not indicated when it would disclose ethical guidelines. To reinforce our message to the company, we made a public statement supporting the resolution. Although the resolution failed to pass, we were encouraged by the notable level of support it received.
Due diligence
To be truly effective, robust governance, oversight, and ethical guidelines must be accompanied by rigorous due diligence. Conducting due diligence allows companies to identify and address risks. This may slow down aspects of development, implementation, and launch; however, if it enables a bold idea to be delivered responsibly, that is a price worth paying. This was a key topic in our engagement with Meta.
Advertising is Meta’s primary source of revenue. Using personal and behavioral data in targeted advertising exposes users to the risk of privacy violations, while algorithms may unintentionally encourage bias and discrimination. When we met the company, we discussed how it uses AI in targeted advertising. AI presents opportunities for Meta to deliver targeted advertising more profitably, but its capabilities pose risks as scale and complexity increase.
We recognize these opportunities but maintain that an independent review of the company’s risk management would provide investors with a proportionate level of assurance and support sustainable value creation.
The company has also faced high-profile controversies regarding misinformation and disinformation, and the use of generative AI gives rise to new issues.1 Meta is clearly taking some steps to manage risk through mechanisms for content removal, content identification, and labeling.
Nonetheless, after our engagement, we remained concerned that Meta is insufficiently prepared to manage the potential volume of third-party, AI-generated content. The risks associated with this, in a critical year for democratic elections worldwide, are well documented.
Our research and engagement led us to support resolutions that would address these concerns and show investors how Meta’s AI due diligence builds on its governance and oversight mechanisms and its Responsible AI Pillars to protect shareholder interests.
Transparency
A common thread underpins the principles discussed above: transparency. Without it, investors cannot understand a company's approach. There are limits—some information will be commercially sensitive.
It is also important to acknowledge that reporting standards for AI are limited and that disclosures must evolve to keep pace with technological developments. However, these are also the factors that make transparent reporting so crucial. Without voluntary transparency, there is a risk that companies will be subject to burdensome regulation.
We have discussed AI transparency with several companies and were encouraged by their interest in investor views and desire to make disclosure valuable and efficient. Microsoft has disclosed extensive information on its approach to AI. We are pleased to have provided feedback on how its pioneering Responsible AI Transparency Report could evolve.2
Labor relations
To support the adoption of AI, companies may also need to consider its impact on the workforce. As AI use becomes more widespread, non-technical staff will require training to understand its opportunities, limitations, and ethics. As with workers affected by the energy transition, those whose roles change may also require access to retraining to adapt to a changing labor market.
The entertainment industry has already witnessed debate and disruption due to concerns over the use of AI in film and television production, resulting in the Hollywood strikes of 2023.3 Several entertainment companies received shareholder resolutions on AI use as a result. This serves as a cautionary example of how apprehension about AI's role can disrupt businesses.
There appears to be merit in demonstrating a responsible approach to adopting AI as quickly as possible to reassure key stakeholders. We used our engagement and voting to encourage this approach at selected companies in the sector. Ultimately, collaborating with the workforce will help companies to realize the full potential of AI.
Final thoughts
Heraclitus, an ancient Greek philosopher, said, “There is nothing permanent except change.” This certainly appears to be true when we consider the AI landscape. Companies face a significant and evolving challenge in adapting to, harnessing, and mitigating the risks of AI. As investors, we aim to understand how we can support and collaborate with companies to help them meet this challenge. A focus on robust governance and oversight, ethical guidelines, appropriate due diligence, and transparency will continue to define the abrdn approach. As the technology develops, we believe these issues will remain crucial to AI's responsible development and use.
1 "Meta shutters tool used to fight disinformation, despite outcry." NPR News, August 2024. https://www.npr.org/2024/08/14/nx-s1-5074143/meta-shutters-tool-used-to-fight-disinformation-despite-outcry.
2 Responsible AI Transparency Report. Microsoft, May 2024. https://cdn-dynmedia-1.microsoft.com ... Responsible-AI-Transparency-Report-2024.pdf.
3 "What’s behind Hollywood’s latest wave of layoffs? The business is in reset mode." Los Angeles Times, August 2024. https://www.latimes.com/entertainment-arts/business/newsletter/2024-08-20/wide-shot-summer-of-layoffs-paramount-television-studios-the-wide-shot.
Important information
Projections are offered as opinion and are not reflective of potential performance. Projections are not guaranteed and actual events or results may differ materially.
Any individual companies or other securities discussed above have been selected for illustrative purposes only to demonstrate abrdn's views or investment management style. They are not intended as an investment recommendation, an indication of future performance, or an indication of any holdings by abrdn.
AA-230824-182190-1