AI’s ethical dilemmas – what Artificial Intelligence can learn from human mistakes

In 2023, the United Kingdom convened the UN Security Council’s first debate on Artificial Intelligence (AI) to discuss the current and potential impacts of AI on international peace and security, and in November it hosted the first AI Safety Summit. The resulting Bletchley Park Declaration on frontier AI saw 28 countries agree to support an ‘inclusive network of scientific research… including through existing international fora’.[i] As Secretary-General Guterres openly acknowledged in his statement to the Security Council, while the rise of AI poses a range of opportunities and challenges for the international community, these are not yet well understood by world leaders – largely because of a huge skills and knowledge gap in governments and institutions worldwide.[ii]

While the term ‘artificial intelligence’ has been around for a long time – the field was established in the 1950s – in the last 18 months, it feels as if the way we talk about AI, and the frequency of those conversations, has changed markedly. There have been countless incremental breakthroughs in the development of artificial intelligence and machine learning, which have largely passed most of us by, but the arrival last autumn of ChatGPT, a generative-AI tool that can answer questions and generate text based on the prompts you feed it, has changed public consciousness.

While the technology powering chatbots like ChatGPT is not exactly new either, where ChatGPT has made a difference is in its usability. With a simple, easily navigable interface that is free to try out, it has allowed the general public to engage with AI in a way that hasn’t previously been possible. The widespread uptake of this particular AI tool has taken global leaders in politics, business and even tech by surprise. As Guterres himself said: “while it took more than 50 years for printed books to become widely available across Europe, ChatGPT reached 100 million users in just two months”. Politicians, rarely known for being tech-savvy, are not alone. There has been a spate of high-profile announcements from leaders in the tech world warning of the potential for AI to cause extinction-level harm.[iii] Earlier this year, an open letter signed by over 1,000 experts (now more than 30,000) called for a widespread pause in development following the release of GPT-4, citing concerns about “an out-of-control race to develop and deploy ever more powerful digital minds”.[iv]

As might be expected, on the other side of the digital coin is an equal number of technological optimists who argue that AI is the answer to our collective problems. A quick internet search will turn up a variety of examples showcasing AI’s ability to help humans take on mammoth tasks previously seen as the stuff of science fiction, from advancing financial inclusion[v] to supporting the development of the smart cities of the future[vi]. Huge leaps in AI development may not be a bad thing when viewed in the context of a rapidly changing climate, against which AI can be a valuable tool. In 2023, Google, working with American Airlines, used AI to bring together and analyse vast datasets in an attempt to develop route forecasting that would allow pilots to avoid creating contrails, which account for over half of aviation’s climate impact[vii].

Yet even when credited with breakthroughs in battling climate change, AI faces significant criticism. The sheer processing power required by large AI models is becoming a threat to the climate in its own right; by some calculations, training a single deep learning model can emit more carbon than 41 round trips between New York and Sydney[viii]. With computing power increasing drastically each year, more such calculations will be needed to determine whether AI is supporting solutions or creating new ethical problems of its own.

As these cases show, the pros and cons of using AI to solve global issues have been the subject of increasingly frequent debate, but the launch of ChatGPT and the subsequent letters of concern from the ‘forefathers’ of AI have thrown these ethical concerns into the mainstream. The issues are abundant, and present old problems in entirely new ways, at exponentially increasing speed. The control of intellectual property in AI, and whether advances should be ‘open’ to all developers, replicates a long-standing debate over potentially weaponised technologies: who has the right to these technologies, and can they be trusted? Those who gain and gatekeep innovations may claim to ensure their safe application, but can just as easily be accused of withholding information and preventing the democratisation of untold power.

Power is not only a concern in the use of the technology itself, but in the imbalances the technology may create and replicate if its creation and development are closed and uncollaborative. The gatekeeping of innovations by companies, while intended to ensure developers behave ethically, could simultaneously limit the involvement and insights of entire countries, populations and diverse voices. A lack of education and engagement across vast swathes of the global population has the potential to recreate power imbalances not only in the technology’s creation, but also in how it is applied and in its subsequent impacts – something we’ve blogged about in the past. Disinformation campaigns, for example – already a serious global issue in conflicts, health crises and climate disasters – enhanced by artificially generated imagery and spread through populations with little understanding of how AI can be misused, are a disturbing prospect.

In these ways, the ethical dilemmas raised by the emergence of powerful AI are much the same as any global threat tackled by the international development community: a multifaceted issue that demands international collaboration in order to avoid mistrust, misuse and exclusion. This makes it a challenge, but it could also provide an opportunity for the global network of development and climate action actors to be at the forefront of the discussions and conclusions that will uphold principles of safety in the use and development of AI.

A number of hard-learned lessons from the international development sector on the failure of siloed or non-inclusive approaches could provide valuable insights into how we might approach the unknown ethical consequences of a technology that is evolving faster than world leaders can act: the importance of inclusion across social, gender, economic and other barriers; of local approaches to local problems, led by the communities affected; and of open dialogue across governments, regions and international bodies. All of these will be key to mitigating the significant risks of a new race towards AI supremacy.

The international network that exists for tackling climate change and international development issues may even provide an existing forum through which these conversations, and any resulting principles and commitments to ethical AI, could take place. This may already have been considered in the formation of the Bletchley Park Declaration, which points to existing international fora as a source of inclusive networks. In that case, the development community would have to heed its own warnings and commit to a truly democratic and open dialogue on AI, to ensure that power dynamics established decades ago are not re-established through even more powerful technology and resource imbalances. If the optimists are right, AI itself could be used to help overcome the woeful imbalances that exist worldwide, and even within the development network itself.

Whatever the right approach to the task at hand, what is clear is that time is running out. Even the experts calling for a six-month pause in AI development may be optimistic to imagine that a solution to the ethical questions it raises will be found in that timeframe. But as with other crises the world has faced, solutions, however imperfect, are more likely to be found when communities, nations and networks come together to share knowledge, enable learning and reach consensus through inclusive dialogue. At Agulhas, we see an opportunity for the international development community, more renowned for its promotion of human rights than its technological prowess, to provide an enabling environment for the important discussions surrounding ethical and inclusive applications of the technology. A focus on the humans involved, rather than the technology, might be just what is needed.



[i] UK government (2023) The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023

[ii] UN (2023) Secretary-General Urges Security Council to Ensure Transparency, Accountability, Oversight, in First Debate on Artificial Intelligence

[iii] Centre for AI Safety, Statement on AI Risk

[iv] Tech Crunch (2023) 1,100+ notable signatories just signed an open letter asking ‘all AI labs to immediately pause for at least 6 months’

[v] WEF (2021) How to harness AI and data portability for greater financial inclusion

[vi] Forbes (2023) On The Horizon For Smart Cities: How AI And IoT Are Transforming Urban Living

[vii] Google (2023) How AI is helping airlines mitigate the climate impact of contrails

[viii] MIT Technology Review (2023) Achieving a sustainable future for AI