Big institutional investors representing a combined $6.9tn in assets under management are stepping up pressure on technology companies to take responsibility for the potential misuse of artificial intelligence (AI). Concerns about liability for AI-linked human rights issues have led the Collective Impact Coalition for Digital Inclusion, a group of 32 financial institutions, to urge tech businesses to commit to ethical AI practices.
Why it matters: Investor pressure on this scale signals that responsible AI is becoming a governance concern, not just an ethical one. Institutional investors are increasingly weighing the societal impacts of AI, including human rights, privacy, and job security, when evaluating the companies they hold. For technology leaders, responsible AI practices matter not only on ethical grounds but also for maintaining trust with investors, regulators, and the public.
- The coalition, which includes Aviva Investors, Fidelity International, and HSBC Asset Management, has been engaging with tech companies, warning them to strengthen protections against risks such as surveillance, discrimination, and unauthorized facial recognition.
- Failure to address these concerns could result in shareholder activism, voting against management, divestment, or regulatory scrutiny.
- Technology leaders who prioritize ethical AI practices now can address investor concerns directly and reduce their exposure to the activism, divestment, and regulatory scrutiny described above.