Essay
The increasing integration of autonomous weapons systems and artificial intelligence (AI) technologies into military operations around the globe is creating human rights risks that tech investors should not ignore.
Investors should adopt clear investment standards and assess whether the businesses they invest in are likely to respect not only international human rights law but also international humanitarian law, also known as the laws of war, given the heightened risk of facilitating abuses.
Investors considering supporting tech companies should ask whether they have any mechanism to monitor or limit the use of their products by military agencies. Investors should also ask what contractual clauses or other measures the company has in place to prohibit uses that violate international human rights or humanitarian law, including through customization, targeting, servicing, or otherwise. Once a tech product or system has been sold or contracted for classified purposes, companies may have little to no control over how a military agency uses it.
Investors should consult with a variety of stakeholders about a company or product’s potential human rights impact, including stakeholders advocating humanitarian disarmament and human rights safeguards around the use of AI in military contexts.
As military deployment of autonomous weapons systems and AI technologies grows, along with partnerships between tech companies and governments, investors should think carefully about their investment philosophy, develop red lines, and scrutinize the human rights and humanitarian impact of military technologies. Investors have significant power to shape how these technologies are developed and used, including by refraining from investing when the risks are simply too high.