7 A.I. Terms Every Designer Should Know

As artificial intelligence grows more prevalent, designers need to be able to talk the talk

Esther Blankenship
Modus
5 min read · Nov 25, 2019


Photo: metamorworks/Getty Images

As artificial intelligence creeps into ever more aspects of the things we design, user experience professionals should be able to confidently follow and discuss the big tech advances of our times. These are not just dry terms for developers; they are at the core of important debates about the future of humanity.

Although most of you probably know by now what A.I. is, what would you say if pressed to define it? That’s harder. And you’ve heard the term “algorithm,” but what is it, really? Could you explain the difference between narrow and general A.I.? Or how machine learning, neural networks, and deep learning are related? And what on Earth is this thing called “the Singularity”?

An inquisitive friend of mine asked me these questions and, embarrassingly, I was pretty stumped for good, concise answers. So I did my homework and created an A.I. cheat sheet that I’m sharing with you today. These are the super short definitions I came up with for the most important terms. After each, I suggest further reading.

Before we get to the fancier A.I.-related terms, let’s start by defining plain old “intelligence” and then the more recent human invention, “artificial intelligence.”

Intelligence: The ability to acquire and apply knowledge and skills to achieve a goal

Defining intelligence has long divided the scientific community, and controversies still rage over how to define and measure it. Most neuroscientists, however, seem to agree that intelligence is an umbrella term that covers a variety of related mental abilities, such as problem-solving, mental speed, general knowledge, creativity, abstract thinking, and memory.

For more, check out this PDF on vetta.org: “A Collection of Definitions of Intelligence”

Artificial intelligence (A.I.): Intelligence of a non-biological entity

Just as the definition of human intelligence is elusive and controversial, it is also hard to define what intelligence would mean in the context of a machine. The best definition I found is from Emerj: “Artificial intelligence is an entity (or collective set of cooperative entities), able to receive inputs from the environment, interpret and learn from such inputs, and exhibit related and flexible behaviors and actions that help the entity achieve a particular goal or objective over a period of time.”

See the whole article on Emerj

Now that we have those two cornerstones in place, we can tackle the terms related to A.I.

1. Narrow A.I.

A.I. focused on one predefined task

Also known as “artificial narrow intelligence” (ANI) and “weak A.I.” Technologists have successfully applied A.I. to very specific tasks (e.g., playing chess, filtering spam, driving in traffic, or predicting which films you might like to watch). But an A.I. that was developed to play chess cannot also drive a car unless it is specifically programmed to do so. Thus, narrow A.I. is the A.I. that we experience today.

More on narrow A.I. at techopedia.com

2. General A.I.

Machine intelligence that rivals human intelligence

Also known as “artificial general intelligence” (AGI) and “strong A.I.” This level of A.I. has not yet been (and may never be) achieved. In theory, an AGI could match the flexibility of human cognitive abilities in addition to surpassing us with the advantages we already know computers have over us (memory, speed, network access, computational accuracy, etc.).

Read more about general A.I. on zdnet.com

3. Algorithm

A set of instructions that tells something or someone what to do

An algorithm must have a beginning, a middle, and an end. Interestingly, an algorithm doesn’t have to be a computer program, although in today’s parlance it usually is. A recipe, directions to someone’s house, or the mechanism that decides which ad to show you while you are browsing the web are all examples of algorithms.
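To make this concrete, here is a minimal sketch in Python. The function and its sample numbers are my own illustration, not from any particular source, but they show an algorithm’s beginning, middle, and end:

```python
# A tiny algorithm: find the largest number in a list.

def find_largest(numbers):
    largest = numbers[0]        # beginning: start with the first number
    for n in numbers[1:]:       # middle: compare each remaining number
        if n > largest:
            largest = n
    return largest              # end: hand back the answer

print(find_largest([3, 41, 7, 12]))  # prints 41
```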

Here’s a nice, concise explanation of an algorithm in the context of programming

4. Machine learning (ML)

The ability of a machine to learn and act without being explicitly programmed to do so

Machine learning is a branch of artificial intelligence. The goal is for systems to learn from data, identify patterns, and make decisions with minimal human intervention. By feeding algorithms with large amounts of data, the algorithms can adjust themselves and continuously improve (and thus “learn”).
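As an illustration only, here is a toy learning loop in plain Python. The setup is mine and vastly simpler than any real ML system, but it shows the essential idea: the program is never told the rule y = 2x, yet it adjusts its own parameter until it has “learned” that rule from examples:

```python
# Toy machine learning: fit a single weight w so that w * x matches y.

data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # (input, expected output) pairs
w = 0.0                                   # the model: one adjustable weight
learning_rate = 0.01

for _ in range(1000):                     # repeatedly nudge w to shrink the error
    for x, y in data:
        error = w * x - y
        w -= learning_rate * error * x

print(round(w, 2))  # prints 2.0: learned from the data, not programmed in
```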

Find out more about ML at Emerj

5. Neural networks

A computer system patterned loosely on how the human brain is structured

In this case, it’s more accurate to talk about artificial neural networks, since the non-artificial kind is what each of us has in our heads. In this model, the interconnected layers of a neural network process information in a way loosely analogous to how our brains process information and learn.
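To give a feel for the basic building block, here is a single artificial neuron sketched in Python. The weights and inputs are invented for illustration; real networks connect many of these into layers:

```python
import math

# One artificial neuron: weigh the inputs, sum them, then squash the result.

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid: squashes the output to between 0 and 1

print(neuron([0.5, 0.8], [0.4, -0.6], 0.1))  # a number between 0 and 1
```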

This article on Forbes is a good and easy-to-understand resource

6. Deep learning

A subdivision of machine learning focused on training large neural networks

The “deep” in deep learning refers to the number of layers in the neural network. Each layer parses the input data and passes it on, in a more abstracted form, to the next layer, which then uses that data as input.
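Here is a rough sketch of that stacking in Python, building on the neuron above. The layer sizes and weights are invented; in a real deep network there would be many more layers, and the weights would be learned from data rather than written by hand:

```python
import math

def layer(inputs, weight_rows):
    # each row of weights defines one neuron in this layer
    return [1 / (1 + math.exp(-sum(i * w for i, w in zip(inputs, row))))
            for row in weight_rows]

x = [0.2, 0.9]                                 # raw input
h1 = layer(x,  [[0.5, -0.3], [0.8, 0.1]])      # layer 1: first abstraction
h2 = layer(h1, [[-0.4, 0.7], [0.2, 0.6]])      # layer 2: more abstract features
out = layer(h2, [[0.9, -0.5]])                 # output layer
print(out)
```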

Recommended reading at Machine Learning Mastery

7. The Singularity

Potential starting point of technological growth that is no longer under human control. Such a radical development, if reached, could impose unforeseeable changes on human society and our universe.

Also known as “the technological singularity,” it is the hypothetical next step after artificial general intelligence. Because a super-intelligent machine could so rapidly learn and upgrade itself, the consequences are unfathomable.

Vernor Vinge, science fiction author and emeritus professor of computer science and mathematics at San Diego State University, introduced the concept in 1993 and postulated the end of the human era in his paper, “The Coming Technological Singularity.” He writes, “From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control... I think it’s fair to call this event a singularity (“the Singularity” for the purposes of this paper). It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown.”

I hope these definitions help you in your discussions with friends and colleagues so that you can design the best possible outcomes in the brave new digital world.

A printable PDF of these terms is available here.

Originally published at https://experience.sap.com on November 25, 2019.
