Defining Artificial Intelligence: Why it’s hard and what you can do about it

What is AI? Whether you are new to the term, an enthusiast, or even a professional in the industry, this question is not easily answered. As is often the case with fields of study that are broad and still evolving, the definitions are also broad and evolving. An acceptable but not very helpful answer would be “it depends on who you ask.”

The Cambridge Handbook of Artificial Intelligence (2014) includes, at different points, three definitions of AI:

  • “AI is a cross-disciplinary approach to understanding, modeling, and replicating intelligence and cognitive processes by invoking various computational, mathematical, logical, mechanical, and even biological principles and devices.”

  • “AI is the field devoted to building artifacts capable of displaying, in controlled, well-understood environments, and over sustained periods of time, behaviors that we consider to be intelligent, or more generally, behaviors that we take to be the heart of what it is to have a mind.”

  • “The attempt to make computers do the sorts of things that human and animal minds can do - either for technical purposes and/or to improve our theoretical understanding of psychological phenomena.”

I am not pointing this out to pick on Cambridge; the book is a collection of well-written essays on a variety of topics within the field of AI, and these definitions all come from different experts in the field. I would hazard that none of these authors was attempting to set in stone THE definition of AI that would apply to every situation for all eternity. All three are certainly valid definitions, but depending on your own knowledge base and needs, you might find them overly complicated, or perhaps too broad.

If you Google “artificial intelligence definition,” you will be rewarded with several million results. Some of these will be fairly simple, such as “AI is automated decision making” and “we think about AI as a branch of computer science that allows computers to make predictions and decisions to solve problems.” Some definitions are more complex, such as the one in the US National Artificial Intelligence Initiative Act of 2020: “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” Oxford Languages provides the following definition: “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” Most of the current commonly accepted definitions include some variation of the phrase “the ability to learn and solve problems in a similar manner to humans.”

The term Artificial Intelligence, coined by Stanford Professor John McCarthy in 1956, was defined as “the science and engineering of making intelligent machines.” While this definition worked well at the time, it is no longer considered adequate, as computer scientists have appropriately moved the goalposts over time. “Intelligent machines” could be said to include computer programs (or robots, etc.) that mimic intelligence through traditional programming, as well as early machine learning systems that were hindered by insufficient memory and processing power. These are no longer considered AI.

An artificially intelligent computer program today is one that, within a very specific and narrow set of parameters, is able to learn. Which is to say that it makes predictions based on inputs, and the more inputs (data) it accumulates, the more accurate its predictions become (hopefully). The AI programs we have today are single-task programs; they can only do one thing. A computer or robot might be equipped with multiple AI programs, but they do not function together as a cohesive “mind.” Some researchers are working to combine these into more general-purpose AIs that could be used more broadly and handle more tasks.
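For readers who want to see this “more data, better predictions” idea in code, below is a minimal sketch (not from the original article, and purely illustrative). It assumes the scikit-learn Python library: it trains a simple handwritten-digit classifier on progressively larger slices of a dataset, and the test accuracy generally improves as the model sees more examples. The dataset and model choices here are assumptions made for the sake of illustration, not recommendations.

  # Illustrative sketch: a narrow, single-task model whose predictions tend to
  # improve as it is given more training data. Assumes scikit-learn is installed.
  from sklearn.datasets import load_digits
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  # A small, well-known dataset of handwritten digits (one narrow task).
  digits = load_digits()
  X_train, X_test, y_train, y_test = train_test_split(
      digits.data, digits.target, test_size=0.3, random_state=0
  )

  # Train on progressively larger slices of the data and measure test accuracy.
  for n in (50, 200, 800, len(X_train)):
      model = LogisticRegression(max_iter=5000)
      model.fit(X_train[:n], y_train[:n])
      print(f"trained on {n} examples -> test accuracy {model.score(X_test, y_test):.2f}")

With only 50 examples the model is noticeably less accurate than with the full training set. That narrow, data-driven improvement is the “learning” in today’s AI programs, and it is limited to the single task the model was trained on.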

A subject of much debate is whether or not we will ever achieve artificial general intelligence (AGI). These are the robots that make up our favorite villains in science fiction and action movies: Skynet from the Terminator movies, Ultron from the Marvel Cinematic Universe, the Cylons from Battlestar Galactica, etc. They’re also, on occasion, sidekicks to the good guys: Vision from the Marvel Cinematic Universe, R2-D2 from Star Wars, Data from Star Trek, etc. AGI, also called “strong AI” or “full AI,” would be a program that could learn any intellectual task that humans are capable of learning. Depending on who you ask, one day AGI may simply be called AI, and what we call AI today will be called “single-function intelligence” or something along those lines.

In The New Breed (2021), Kate Darling describes her struggle to define the word “robot”: 

Without a concise definition, how can anyone even begin to write a book about robots? I asked one of my most respected friends and mentors, law professor Jamie Boyle, and he responded: “If anyone insists you give them an essential definition of robot, you tell them, ‘Definitions don’t work the way you think they do’” . . . The idea that there could be a definition of anything is a mistake. Our language is community- and context-specific . . . how you define it depends on the field you’re in.

The terms “AI” and “robot” belong to fields that are rapidly evolving, and as such the above quote applies as much to AI as it does to robots; the definition of AI should vary depending on who is using it. Software engineers and developers are probably going to talk about AI in more specific and technical ways than a third-grade class will. That doesn’t mean that one group’s definition is any better than another’s, assuming the definitions provide clarity and differentiation. Which is to say, a good definition should clarify what a thing is to its intended audience, as well as clarify what makes it different from other, perhaps similar, things. This is one reason why the original 1956 definition of AI, “the science and engineering of making intelligent machines,” no longer works; it does not differentiate between a machine that mimics intelligence and one that can learn.

Let us reconsider the Oxford Languages definition: “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” The first part clarifies what AI is: “computer systems able to perform tasks that normally require human intelligence.” The second part differentiates by giving examples of the kinds of tasks, normally requiring human intelligence, that AI is currently capable of performing.

So what can we do about it? Once again, a not very helpful response is “it depends on who you are.” If you’re someone who is just learning about AI on your own, I would advise googling some definitions and finding one that makes sense to you. If you can take that definition and explain it in your own words in a way that makes sense to you and the people around you, you’re probably good to go. As you learn more about AI over time, how you define it will probably change too.

If you are a teacher, consider having students construct their own definition of AI as a group for your first lesson. One strategy is to record what everyone thinks AI is, or some examples of AI, before presenting them with any definitions. Then have students research what AI is on their own and bring back to the group examples of what they have found. Put all of these together and have them craft a working definition for your time together, whether you are teaching AI for a week, a unit, or a semester. Make it clear that they will revisit the definition from time to time to refine it (it can also help to have students first practice this process with something they are more familiar with). Having students construct the definition gives them ownership; they have a definition they built in their own context, and they can reflect on how much they have learned when they compare their initial definition to later versions.

If you are an administrator or education official, it is important to build a working definition of AI to steer your curriculum and policy. I would advise following the same steps that teachers take with their students: build your own definition and refine it as you learn more. You can require it to be part of your curriculum and expect students to be familiar with it, but also give your teachers and students the space to explore AI together from the ground up and build definitions that make sense to them. In all likelihood, the definitions they work up to won’t be all that different from what you have set for them to learn, and their fundamental understanding of AI will be that much stronger for having arrived there themselves.

References

AI4ALL. “What Is AI?” AI4ALL, 20 Dec. 2022, https://ai-4-all.org/about/what-is-ai/.

Darling, Kate. The New Breed: How to Think About Robots. Penguin Books, 2021.

Frankish, Keith, and William Ramsey, editors. The Cambridge Handbook of Artificial Intelligence. Cambridge University Press, 2014.

Manning, Christopher. “Artificial Intelligence Definitions.” Stanford University Human-Centered Artificial Intelligence, Sept. 2020, https://hai.stanford.edu/sites/default/files/2020-09/AI-Definitions-HAI.pdf.

Martin, Fred. “Technology Has Vastly Improved Our World, and Artificial Intelligence Will Keep Making It Better.” The CSTA Advocate Blog, 16 Aug. 2019, http://advocate.csteachers.org/category/artificial-intelligence/.
