

Finding an Ethics for Artificial Intelligence

Nigel Shadbolt—

A day doesn’t go by without Artificial Intelligence (AI) grabbing the headlines. The increase in computer power combined with a new breed of AI algorithms trained on huge amounts of content from the web has brought forth AI systems that some argue pose an existential threat to humanity. At the very least they present a host of ethical questions around how this technology is to be deployed and the impact it could have.

Should an AI algorithm set bail for someone before the courts, or determine whether a candidate makes it to the next stage of a job interview? Does the latest AI medical diagnostic treat all ethnicities with equal levels of precision? When can AI-generated text and content be included in original works—from books to scientific papers, software code to photographic competitions?

My own research in AI dates to the mid-1970s: almost 50 years of working with different strains and varieties of AI system. It was already a decades-old field when I began my work. AI is not new. As the science fiction writer William Gibson observed, “the future is already here, it’s just not very evenly distributed”.1

There have been concerns about the ethics of advanced computing systems from the outset. Norbert Wiener, the father of cybernetics, wrote in 1948: “Long before Nagasaki and the public awareness of the atomic bomb, it had occurred to me that we were here in the presence of another social potentiality of unheard-of importance for good and for evil”.2 The 1980s saw AI systems used for medical diagnosis, logistics planning, and robot manufacturing. In the 1990s, Ronald Reagan’s Star Wars initiative developed a host of military AI applications. AI ethics is as old as the technology itself.

So, what is different now? The emergence of Generative AI and so-called Large Language Models has produced conversational systems that bring us all into direct, immediate, and personal contact with AIs. Systems capable of extended interaction on any topic: informed, coherent, and knowledgeable. Systems able to compose, summarise, translate, and code. Systems that are ubiquitous and pervasive, deployed by every kind of company and organisation, and encountered by all manner of individuals, whether consumers, citizens, students, or professionals.

The spread of this AI technology demands our attention. It is now a widespread general-purpose and dual-use technology. A technology that is being repurposed for good and ill, that offers opportunities and challenges, benefits and harms. Some uses are clearly beneficial: in health, transport, retail. But even here there are legitimate concerns around safety, consent, and the ways in which it can fail. Other uses give rise to more ambivalent attitudes: in law enforcement, leisure, gaming, and warfare. And a few other uses are more worrying still, from deepfakes to disinformation. Beware the adoption of AI by bad actors.

There are currently many efforts underway to understand the value systems and moral content of Large Language Models. What biases do they contain? Are they able to reason ethically? How can we train them to be more aligned with our ethics and values? From Anthropic’s Constitutional AI, in which widespread consultation builds a distributional view of how to respond in various ethical contexts, to OpenAI’s Superalignment effort, where brigades of researchers attempt to refine and tune models to various ethical sensibilities, these newly birthed, hugely wealthy companies are trying to understand the moral content and possibilities of their creations. In the case of OpenAI, though, and to the consternation of many, the Superalignment team was disbanded in May 2024.

Amid these challenges, what ethical framework should we reach for? We should treat AI systems as if they were human. This lets us apply centuries of evolved moral and ethical principles to these new objects in our world. Principles rooted in how we would wish to see humans behave. If we treat our systems as if human, we should hold them to high standards—standards we would expect from the best of us.

Whilst we commend the heuristic of treating AIs as if human, we do not believe that, at this time or for the medium term, sentience and moral awareness are at home in their extensive digital circuitry. Nevertheless, if we design and shape them to be respectful, benevolent, and humane, they will reflect back the best of us and produce outcomes that augment rather than diminish our own intelligence and moral nature.

In doing this we draw on various moral frameworks, from utilitarianism to deontological ethics, virtue ethics to consequentialism—all have their various strengths and weaknesses. This is a “mixed philosophy” approach, one that is needed to account for what the great Oxford philosopher Isaiah Berlin called “the crooked timber of humanity”.3

But much modern work on AI ethics has tended to assume some version of utility theory as an ethical foundation: programs should be designed to optimise for the greatest good for the greatest number. This runs into problems of complex trade-offs and fails to capture important aspects of how we actually think ethically.

We highlight virtue ethics. With its roots as far back as Aristotle, this approach emphasises that it is not enough to do the right thing. It should be done for the right reasons. Virtues such as respect and honesty are crucial. Many of our interactions with AI systems are shot through with conflict and confrontation, are designed to elicit polarised attitudes and interactions, and are often duplicitous. If our machines treat us without respect and we reciprocate, then our interaction with the world we inhabit becomes coarsened, less civil. In my research group at Oxford, we have been puzzling out exactly what respectful interactions with chatbots should look like.

Another overarching principle we espouse, one that embodies both respect and honesty, meets, for instance, the challenges of voice cloning and deepfakes: a thing should say what it is and be what it says. Good actors in the AI service space must deliver products, each one of which clearly and explicitly announces that it is an AI and that its content is the product of AI algorithms. Alongside this immutable design requirement need to sit sanctions: systems that do not advertise their synthetic nature will be held accountable and appropriate action taken. It could become an offence to produce deepfakes, content without clear provenance.

We need to act urgently in applying strong ethical principles to our design and deployment of AI systems. Whether it is assuring a reasonable expectation of privacy or protecting children, balancing the benefits of, and access to, AI-driven precision medicine, or indeed the very framing of the types of laws and regulations we need around AI, this is a conversation for the many and not the few. Decisions that affect a lot of humans should involve a lot of humans.


Nigel Shadbolt is principal of Jesus College, Oxford, and professor in the Department of Computer Science at the University of Oxford. He lives in Lymington, UK. With Roger Hampson, he is the author of As If Human: Ethics and Artificial Intelligence, published by Yale University Press. Roger Hampson is an academic and public servant and former chief executive of the London Borough of Redbridge. He lives in London, UK.


  1. https://www.npr.org/2018/10/22/1067220/the-science-in-science-fiction
  2. Norbert Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine, 1948.
  3. Isaiah Berlin, The Crooked Timber of Humanity: Chapters in the History of Ideas, John Murray, 1990; 2nd ed., Pimlico, 2013, ISBN 978-1845952082; 2nd ed., Princeton University Press, 2013, ISBN 978-0691155937.
