Guest Blog Author! Ethics and AI with Dr. Graham Culbertson (part 1 of 3)

In the last year or so, ethics has become a hot topic in the world of AI. It’s always been a part of the AI discussion, but since the generative AI explosion and the subsequent warnings issued by industry leaders and others, it has come front and center. Whether it’s in reference to what big tech companies are developing, or how governments and companies are using AI, or what students and others are doing, there’s a lot of concern around “AI ethics.” But what does that mean? Generally, people follow this term with a discussion of how AI should be developed or used, and there’s talk of using AI “responsibly,” or “morally,” or even “correctly.” While we can all (ideally) agree that AI should be used in these ways, these terms are not synonymous with “ethically,” and they may mean very different things to different people.

I have been trying to write a piece on ethics and AI for a while now, and I keep getting hung up because philosophy historically has not resonated with me. I definitely struggled through Intro to Philosophy in college. Maybe I had a bad professor, or more likely I was young and impatient. In any case, I didn’t enjoy the readings or the discussions; it was all too hypothetical and disconnected from reality to be interesting to me.

That all changed when I met Dr. Graham Culbertson in 2020. At that time we were both working at the North Carolina School of Science and Mathematics. As the Instructional Designer in their newly formed AI program, I was tasked with converting two of Graham’s courses into formats that could be duplicated at public schools across North Carolina. While I was observing his classes, philosophy suddenly made sense. Graham made it fun and accessible, and all of the discussions explored how the different schools of thought worked when applied to AI. It was great! It reminded me of taking physics in high school and how I loved it because I could finally see the real-world application of all that math, which before had seemed so pointless.

When I set out to write about AI and ethics, I tried to channel what I had learned watching Graham’s classes. After a while it became apparent that I was out of my depth, so I gave up struggling and contacted Graham, because he can do this better than I can. And he has. Dr. Culbertson is currently a professor at the University of North Carolina at Chapel Hill, and hosts the podcast Plumbing Game Studies, in which he explores game studies through the lens of philosophy and vice versa. He has been teaching at the high school and college level since 2006. 

I am thrilled to introduce this three-part series on ethics and AI. We will begin with the difference between morals and ethics, and slowly work our way forward to talk about how they relate to AI. I would add a quick disclaimer: this is not a guide for talking to your students about whether or not it’s ethical to use ChatGPT on their homework or for a research paper. This is about what ethics is and how we can approach solving ethical dilemmas, as well as having a little fun with what that might look like in a super-intelligent AI. I hope that you enjoy it as much as I have.

-Hale Durant, aiEDU

Ethics and AI, part I

As regular readers of this blog know, people have been worried about AI at least since Descartes, 375 years ago. The goal of this series is to provide some background in ethics so that anyone teaching or thinking about AI has a set of philosophical tools they can use to think through the questions currently being raised in the field.

Let’s begin with the question of ethics. Like so many concepts in the humanities, the word ethics isn’t clearly defined, and different people use it to mean different things. The core idea behind ethics, like its cousin morality, is to help people do the right thing. For the purposes of this series, I will use the definition of ethics provided by Rushworth Kidder in his book How Good People Make Tough Choices.

Before I get to Kidder, let’s deal with morality, which can be used as a synonym for ethics but which I will use to mean something very different. Morality is, put simply, the choice between a right thing and a wrong thing. When people do something immoral, they do something bad (like lying or stealing) for no good and compelling reason. It’s wrong to lie to someone just because a lie will make your day a little easier; it’s wrong to steal from someone just because you like their stuff more than you like yours. So the wrong choice, in this sense, is the one we would only make when we’re being bad, or acting “in bad faith,” as the philosopher Jean-Paul Sartre would put it.

By contrast, ethics isn’t about right-versus-wrong. It’s about right-versus-right. As Kidder puts it, “Tough choices, typically, are those that pit one ‘right’ value against another.” As you can see, this is totally different from morality, which is about easy choices that we sometimes screw up because we do the wrong thing. Kidder gives the following example: “It is right to take the family on a much-needed vacation—and right to save money for your children’s education.” This is the kind of tough choice that Kidder discusses in his book, and it’s this kind of choice that I’m calling ethics: “right-vs-right.”

This definition of ethics is particularly useful when thinking about many of the debates currently happening around AI ethics. One of those active debates is about the use of generative AI to create art. A simplistic understanding of AI art would suggest that it’s wrong to use AI instead of paying an artist - a simple question of morality. But it’s actually a question of right-versus-right. When Charlie Warzel used Midjourney to create some art for his Atlantic newsletter, an art director wrote in that “they were frustrated and a little shocked that a national magazine like The Atlantic was using a computer program to illustrate stories instead of paying an artist to do that work.” And that art director was totally right! But print journalism has struggled to make ends meet for decades, and using Midjourney was much cheaper than paying an artist. So it was also totally right to save money by using a generative AI art program!

This is what Kidder means by “right-vs-right,” and it’s what I mean by ethics. It is right, if you are running a magazine on low margins, to save money on art. And it is right, if you are a magazine editor, to support artists and art directors. Both of these things are right, which is what makes it an ethical dilemma.

A similar example is the many lawsuits filed against OpenAI by authors and news organizations such as George R.R. Martin and The New York Times. OpenAI built its generative AI, ChatGPT, by training it on work from newspapers like The New York Times and novels like Martin’s A Game of Thrones. Because ChatGPT has been trained on all of this text, it can regurgitate some of it verbatim with the proper prompting. That is a right-vs-wrong question: ChatGPT should not be plagiarizing news stories and novels, and OpenAI is working to fix that issue. The larger question - should generative AI be allowed to scrape the internet freely for text? - is a right-vs-right question. It is right to let information flow as freely as possible for the purposes of developing technology. It is also right to make sure that journalists and authors who publish their work digitally are properly paid and credited. The courts will decide this particular question, but the larger values at hand will remain open for us to think about.

And in fact, that’s precisely how Kidder structures his theory of ethics: as a struggle between values. In a question of morality, or right vs. wrong, we are considering acting against one of our stated values (if a theft would benefit us but hurt our neighbor, we are upholding the value of esteeming ourselves but not the value of esteeming our neighbor). But in these ethical questions, the choice is between two values that we want to uphold, yet cannot uphold at the same time. Kidder suggests that most of the ethical dilemmas we face fall into four sets of binary values (truth vs. loyalty; individual vs. community; short-term vs. long-term; justice vs. mercy). I don’t think these binaries are that useful in general, and I think they’re especially unhelpful when it comes to AI, but now I’d like to take some time to talk through the ethical dilemmas I’ve mentioned so far and show which values are clashing in each.

So let’s talk about working backwards from dilemmas! If Kidder is right, then when we see an ethical dilemma, rather than a mere moral temptation, it’s because we hold values that are in conflict with one another. Kidder’s analysis is therefore particularly useful for working backwards from an ethical dilemma to see which values are in conflict. And when someone doesn’t see something as an ethical dilemma, that suggests that they don’t hold the same conflicting values - that they have already resolved the underlying conflict in favor of one side, and thus don’t feel pulled in multiple directions by the question.

Let me give you an example from my own life before heading back to the generative AI examples from The Atlantic and The New York Times that I used earlier. I had a truly passionate disagreement with a colleague on the topic of lethal autonomous weapons systems (LAWS) or, as their opponents prefer to call them, “killer robots.” A LAWS, or killer robot, is a military vehicle that can, just like a soldier or pilot, choose to engage and kill the enemy - including humans. LAWS seem to be on their way; for example, the US Navy is testing autonomous ships, although at this time they aren’t armed: https://www.surfpac.navy.mil/usvdiv1/.

My colleague stated, vehemently, that robots should never be allowed to choose to fire on their own. My position was that robots should be allowed to fire on their own, as long as they do a better job than humans at avoiding the deaths of civilians. I was never quite able to find out her justification for opposing killer robots so strongly, but I can tell you that for me this is a difficult right vs. right question. On the one hand, it is right to use technology like LAWS/killer robots that will make war safer and more humane. On the other hand, it is right to prevent the creation of technologies that will make wars easier and simpler to wage.

But for a pacifist, the question of whether or not to create LAWS is not an ethical dilemma. A pacifist believes that war - all war, in every situation - is morally wrong, and that no effort should be put into preparing for war or even reforming it. Perhaps this was my colleague's position: that war itself was wrong. If so, there’s no right vs. right question here, simply a matter of right vs. wrong. By the same token, someone horrified by the number of highway deaths in the United States might choose to become an enthusiastic supporter of self-driving cars, even if those cars might cause deaths or other problems in the short term (this is one of Kidder’s binaries, short-term vs. long-term). But someone who is horrified by highway deaths, is an enthusiastic supporter of mass transit, and wants to reduce cars in the United States is in the same position as the pacifist in the LAWS example. They don’t care about the tough choice: less-safe cars now vs. safer cars later. Getting rid of cars is their underlying value, and that underlying value means that they won’t perceive an ethical dilemma. This means that ethical dilemmas, with their right vs. right framing, are only shared by people who share the values creating the dilemma. For a proponent of a powerful military, LAWS are simply a good thing. For a proponent of world peace, LAWS are simply a bad thing. For anyone who wishes to have both a strong military and world peace, LAWS create a difficult ethical choice.

This is why I started this series with Kidder’s question of right vs. right, because it helps us see values that we may not have realized we had. You might expect an ethics professor to reflexively think that war is bad. And, of course, all things being equal, war is bad, in the same way that theft is. But since LAWS could be used to wage just wars, or to make wars more humane, they are a question of right vs. right for most people.

In the same way, when reading my examples of The Atlantic not compensating art directors or OpenAI not compensating authors, you might expect an ethics professor to reflexively think that corporate profits are bad. But in the economic system of capitalism, we use the profit motive to reward individuals and businesses that create things we value; the love of money may be the root of all evil, but the use of money is a way for us to quantitatively support the things that we value. We could, of course, adopt the same position towards money that the pacifist adopts towards war: any and all use of capitalism is bad, and should be opposed. But this extreme form of anti-capitalism won’t help us at all in these dilemmas: neither The Atlantic nor an artist nor George R.R. Martin nor OpenAI would be able to profit from their efforts in a system without the profit motive. These particular questions about right vs. right all revolve around the tradeoffs and efficiencies that the capitalist system is designed to encourage - and that the legal system is supposed to resolve if one side has gone too far in pursuing profit unfairly. We cannot appeal to some obvious moral solution unless we wish to take a stand against the market itself.

What you can do, however, is use one of the techniques devised in ethical philosophy to resolve what is otherwise an insoluble choice. When your values come into conflict, you have to make a choice. In the next post in this series, we’ll look at two major schools of moral decision making, deontological (or rules-based) and utilitarian (or outcome-based), that can help you make that choice.
