The UN Bets on Human Wisdom to Tame Artificial Intelligence
© Unsplash/Igor Omilaev: The UN Security Council meets to discuss the impact of artificial intelligence on international peace and security.
The United Nations has never been accused of moving quickly. But faced with a technology that is reshaping economies, militaries and democracies at bewildering speed, it has made an unusually bold institutional bet. The Independent International Scientific Panel on AI, the first global scientific body of its kind, is preparing for its inaugural in-person summit. Its mandate is sweeping: to assess how artificial intelligence is transforming human life, and to arm policymakers with the evidence they need to respond.
The panel is not a regulator. It will set no rules, enforce no standards, and prescribe no policy. Think of it instead as the IPCC of artificial intelligence: a body charged with producing rigorous, evidence-based analysis that governments can draw on when making decisions of enormous consequence. Its 40 members, formally appointed by the General Assembly in February, bring together technologists, ethicists, policymakers, academics and representatives of civil society from across the world.
Among them is Menna El-Assady, an assistant professor at the Swiss Federal Institute of Technology in Zurich, who was recommended to serve by the UN Secretary-General. Her research sits at the intersection of machine learning and human cognition, and she has developed what she calls “augmented intelligence”: the idea that AI should enhance human capability rather than supplant it. “We are not just focusing on AI as a mathematical or algorithmic field,” she explains. “We are also looking at ensuring that humans are central to decision-making.”
That evolving relationship between human judgment and automated systems is what technologists sometimes call the co-adaptation loop. The question of when to rely on human expertise and when to delegate to an algorithm is, Ms. El-Assady argues, one of the defining challenges of the age. It is also one that the panel will examine across a wide range of domains, from labour markets to healthcare.
Her concerns extend beyond the technical. She is an advocate for what she terms “public digital infrastructure”: open resources that would allow developers anywhere in the world to build and train AI systems, rather than concentrating that power in a handful of wealthy countries. She also presses for AI models that incorporate a genuine diversity of languages and cultures, rather than reflecting the priorities of a small number of dominant nations.
The panel was born, in part, out of alarm. UN Secretary-General António Guterres told the Security Council in September 2025 that humanity’s fate could not be left to an algorithm. Volker Türk, the UN High Commissioner for Human Rights, warned in February that AI developers who build systems without grounding in fundamental social and ethical principles risk creating what he memorably called “Frankenstein’s monster.” These are not fringe views. The risks of unregulated AI, from weaponised disinformation to opaque decision-making in criminal justice and medicine, have moved from the margins of policy debate to its centre.
One practical proposal on the table is AI watermarking: a technical mechanism for distinguishing human-generated content from machine-generated content. The idea is simple in principle, fiendishly difficult in practice, and emblematic of the broader challenge the panel faces. Technology tends to outrun governance. Whether a body of scientists can help close that gap will become clearer when the panel releases its first report at the Global Dialogue on AI Governance in Geneva, scheduled for 6 and 7 July.
Sources: UN News, “Putting humans at the centre: UN AI panel begins work on global impact study,” by Conor Lennon and Khaled Mohamed, 11 April 2026. Independent International Scientific Panel on AI, mandate documentation, February 2026.
