Essay

What investors should know about defence tech investing

The increasing integration of autonomous weapons systems and artificial intelligence (AI) technologies into military operations around the globe is creating human rights risks that tech investors should not ignore.

Arvind Ganesan, Director of Economic Justice and Rights Division, Human Rights Watch

As governments compete for technological dominance on the battlefield, the specter of autonomous weapons systems hangs over the world. Because of the legal, ethical, and security risks these weapons present to civilian populations, Human Rights Watch and our partners have long advocated for a new international treaty to prohibit autonomous weapons systems that operate without meaningful human control and those that target people.

Investors can play a critical role in ensuring that there are appropriate limits on these new weapons. In 2018, a group of investors and companies signed a pledge to press governments to properly limit and regulate the technology and not to participate in or support the development, manufacture, trade, or use of lethal autonomous weapons.

Beyond autonomous weapons systems, other uses of AI in military settings, for example, to support targeting or other life-and-death decisions, risk increasing civilian harm and could violate international humanitarian law, which requires that attacks be discriminate and proportionate. Many militaries are already integrating decision support systems (DSSs) that use AI to inform these critical decisions.

Machine learning systems often rely on mass surveillance for training data and risk embedding and amplifying structural biases in their models. Given the dual-use nature of many technologies, investors should be clear about any tech product's potential use in military contexts and about any national or international legal requirements around investment in weapons systems or technology for military use. If a company is located in a country that is party to a relevant disarmament treaty, national implementing legislation or policy may limit investment in weapons systems. Many states parties to the Convention on Cluster Munitions, for example, consider investment in the production of those weapons to be banned under the treaty's prohibition on assistance with prohibited activities.

Even beyond legal compliance, investors have a responsibility under the United Nations Guiding Principles on Business and Human Rights to avoid contributing to human rights abuses through their investments in the tech sector. This means assessing the potential human rights impact of an investment before a transaction and on an ongoing basis once the investment is made.

Investors should have clear investment standards and should assess whether the businesses they invest in are likely to respect international human rights law as well as international humanitarian law, also known as the laws of war, given the heightened risk of facilitating abuses.

Investors considering supporting tech companies should ask whether those companies have any mechanism to monitor or limit the use of their products by military agencies. Investors should also ask what contractual clauses or other measures a company has in place to prohibit uses that violate international human rights or humanitarian law, including through customization, targeting, servicing, or other support. Once a tech product or system has been sold or contracted for classified purposes, companies may have little to no control over how a military agency uses it.

Investors should consult a variety of stakeholders about a company's or product's potential human rights impact, including stakeholders advocating for humanitarian disarmament and for human rights safeguards around the use of AI in military contexts.

As military deployment of autonomous weapons systems and AI technologies grows, along with partnerships between tech companies and governments, investors should think carefully about their investment philosophy, develop red lines, and question the human rights and humanitarian impact of military technologies. Investors have significant power to shape how these technologies are developed and used, including by refraining from investing when the risks are simply too high.
