From Descartes to the Center for AI Safety: 375 Years of Musings, Warnings, and Arguments Against AI (Part I)

This is Part I in a two-part series. We will begin with events of the past year before heading back to the mid-1600s, and cover up to the early 20th century. Stay tuned for Part II in a few weeks.

If your LinkedIn feed is anything like mine, you have been bombarded with posts and reposts of people sharing articles, editorials, and opinions on the recent turmoil surrounding Sam Altman and OpenAI. I don’t know any of the people involved, and even if I did, I wasn’t in the room, so I don’t care to comment. Although I must say it has been the most intriguing boardroom drama in recent memory. What I have found interesting is the suggestion, now coming from several news outlets, that there was some breakthrough development that scared the OpenAI board enough to sack their CEO. This is a fascinating idea, and it seems to be gaining steam in the news. It also brings to mind the recent warnings from industry leaders that AI is something that might cause great harm to humanity.

On May 30, 2023, the Center for AI Safety (CAIS) issued the following 22-word statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” As of November 27, 2023, there were (friends and neighbors, I’m not making this up) 666 signatories listed on the CAIS site, including many researchers, academics, and tech executives, among them the CEOs of three leading AI companies: Sam Altman himself from OpenAI, Demis Hassabis (DeepMind Technologies), and Dario Amodei (Anthropic). Despite its vagueness, the statement generated a lot of press, which isn’t surprising considering how ChatGPT has dominated the headlines since its release last November. The CAIS statement follows an open letter from the Future of Life Institute in March calling for a pause on AI development to conduct safety research, which was signed by Elon Musk and, as of this writing, more than 33,000 others.

Fears, warnings, and concerns about AI are nothing new; AI gone wrong has been a staple science fiction villain for a long time. It’s not just in movies and pulp novels, though; there have been a number of warnings similar to the ones issued by CAIS and the Future of Life Institute. Here are a few examples from the past decade:

  • In 2018 a group of 26 AI experts published The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, a report focused on the dangers and risks of AI. Its authors included researchers from OpenAI, Cambridge, Oxford, the Future of Humanity Institute, and others.

  • The AI Now Institute at New York University published its 2018 Report, which called for regulation and oversight of the AI industry, as well as of how governments, corporations, and individuals use AI.

  • In 2014 Stephen Hawking, in an interview with the BBC, warned that “The development of full artificial intelligence could spell the end of the human race.”

To oversimplify: quite a few AI experts, executives, developers, and others have recently expressed serious concerns about the future of AI. So if you have concerns about AI yourself, rest assured you are not alone. And even if we ignore these recent warnings, you are far less alone than you might imagine: scientists, philosophers, mathematicians, inventors, authors, and plenty of other people have been questioning, warning against, and wondering about the future of artificial intelligence for hundreds of years. There are entirely too many examples to explore in the space of this article, so I will focus on a handful of the more prominent ones.

The examples I will highlight generally fall into one of three categories. The first is articles or other published statements from computer scientists, executives, inventors, researchers, and academics, similar to the warnings published in the recent past. The second is arguments made by philosophers; in many cases these are less warnings about AI than explorations of why AI isn’t what we think it is, and may not be achievable at all. The third is works of fiction: stories, novels, movies, and the like. While the fictional examples tend to be wildly improbable, they often capture our imagination, get us thinking about AI in ways we might not otherwise, and typically reach a wider audience than the other two categories.

To kick things off, we’re going to go back more than 375 years to the Netherlands in 1637. René Descartes, often referred to as the father of modern philosophy, is most widely recognized for the phrase Cogito, ergo sum (I think, therefore I am). He wrote this in his Discourse on the Method of Rightly Conducting the Reason, and Seeking Truth in the Sciences, commonly shortened to Discourse on the Method. Not just a philosopher, Descartes also had keen interests in science and mathematics. Discourse on the Method outlined Descartes’s approach to scientific and philosophical inquiry, which broke with the previous reliance on authority and tradition in favor of reason and skepticism.

In Part V of the Discourse, Descartes makes an aside to explore the difference between humans and automata (machines). Automatons were a curiosity of the time: mechanical creations made to move and mimic people in some way. Some ambitious and imaginative inventors claimed they could build, and strove to build, automatons that would be as capable as humans, which in turn led to conjecture that one day these automatons might be the equals of humans, or even their superiors. As you might imagine, this idea, dismissed by many as impossible, was also the subject of some philosophical and religious soul searching.

Descartes argued:

“although such machines might execute many things with equal or perhaps greater perfection than any of us, they would, without doubt, fail in certain others from which it could be discovered that they did not act from knowledge, but solely from the disposition of their organs: for while reason is an universal instrument that is alike available on every occasion, these organs, on the contrary, need a particular arrangement for each particular action; whence it must be morally impossible that there should exist in any machine a diversity of organs sufficient to enable it to act in all the occurrences of life, in the way in which our reason enables us to act.”

In Descartes’s mind, the ability to reason set us apart from the automata, and the programming (or “disposition of their organs,” as Descartes puts it) required to achieve human reasoning is simply beyond our ability. Furthermore, these automata did not act out of knowledge, but simply performed the tasks they were programmed to perform. While the language has evolved, this argument is very similar to ones being made today.

Jumping forward a bit: more than a century later, automatons and automation remained a curiosity as well as a pursuit of many inventors and scientists. In 1770 Wolfgang von Kempelen built the Turk, a chess-playing automaton. For many years the Turk played, and mostly won, chess games across Europe and North America. Not surprisingly, the Turk was not a true automaton at all, but was merely made to look like one, with all manner of gears and pulleys and such inside it. It also had a secret (somewhat cramped) compartment where a hidden human chess master operated the machine. The Turk attracted the attention of Edgar Allan Poe, who published an essay in the Southern Literary Messenger in 1836 debunking it as a hoax. The crux of his argument was that while true automatons could be shown to produce specific outputs from specific inputs, chess depended on so many possible moves and so much strategy that the operation of the Turk must be “regulated by mind, and by nothing else.” Perhaps it was Poe’s essay, or perhaps everyone else was arriving at the same conclusion, but around this time the public lost interest in the Turk. It ended up in a Philadelphia museum and was destroyed by a fire in 1854.

While the standard concept of creating AIs in the form of automatons (and later robots and computers) generally involves metal and other inorganic materials, beginning in the 1800s we see a delightfully creepy shift to the idea of making artificial human intelligence out of repurposed organic materials.

In 1818, Mary Shelley warned the world against attempting to create human-like intelligence in Frankenstein. Frankenstein’s monster is one of fiction’s earliest and most widely recognizable creations of artificial human intelligence. If you are unfamiliar with the story, suffice it to say that things do not go particularly well for Dr. Frankenstein or his monster. Among a host of other themes, the story serves as a warning against the unchecked advancement of science and technology.

In 1896 H.G. Wells followed in Shelley’s monstrous footsteps with The Island of Doctor Moreau. If animating a bunch of sewn-together human body parts seems like a poor idea, it turns out that vivisecting animals and transforming them into human-like hybrids is even worse. While the obvious common theme Frankenstein and Moreau share is that creating monsters is bad, both works are also calls for scientific regulation and oversight. This past September, the US Senate held two days of hearings to explore concerns about AI and how it might be regulated moving forward, making these two classic monster tales weirdly relevant today.

The nineteenth century also produced more conventional warnings against automation. In 1863 Samuel Butler wrote Darwin among the Machines. Butler’s writing harkens back to the Luddite movement of the early 1800s, when automated textile machines were taking the place of skilled laborers and driving wages down in England; the Luddites went beyond standard labor protests by attacking factories and destroying the machines themselves. Published in New Zealand’s The Press, Butler’s essay warned that, beginning with the lever, the wedge, and the inclined plane, we (humans) had ushered into existence the Mechanical Kingdom, which would one day supplant us as the dominant form of life on Earth:

“Day by day, however, the machines are gaining ground upon us; day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life . . . War to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species. Let there be no exceptions made, no quarter shown; let us at once go back to the primeval condition of the race.”

Continuing the theme of machines and automation becoming dominant over the human race, Czech playwright Karel Čapek’s play Rossum’s Universal Robots (R.U.R.) premiered in 1921. It is Čapek we have to thank for the word robot, which is derived from robota, the Czech word for forced labor. The play’s prologue sets the stage for what is to come:

“When the play opens, a few decades beyond the present day, the factory had turned out already, following a secret formula, hundreds of thousands, and even millions, of manufactured workmen, living automats, without souls, desires or feelings. They are high-powered laborers, good for nothing but work.”

Given the plethora of science fiction movies and novels that stand on the shoulders of Čapek’s work, it should come as no surprise what happens next. In a word: mayhem.

“Dr. Gall, the head of the physiological and experimental departments, has secretly changed the formula, and while he has partially humanized only a few hundreds, there are enough to make ringleaders, and a world revolt of robots is under way.”

Debuting just a few years after the end of World War I, with the horrors of machine guns, flamethrowers, and tanks fresh in the public’s mind, R.U.R. was embraced as a warning against mass production and mechanization. The play was an immediate success: within two years it had been translated into thirty languages, and the word robot was cemented into the global lexicon.

Having crossed into the 20th century, we will pause for now. In Part II we will begin with Isaac Asimov and end up back where we started, with the present state of things.
