There and back again
Gavin Lew1 and Robert M. Schumacher Jr.2
(1)
S Barrington, IL, USA
(2)
Wheaton, IL, USA
Name any field that’s full of complex, intractable problems and that has gobs of data, and
you’ll find a field that is actively looking to incorporate artificial intelligence (AI). There are
direct consumer applications of AI, from virtual assistants like Alexa and Siri to the
algorithms powering Facebook and Twitter’s timelines, to the recommendations that shape
our media consumption habits on Netflix and Spotify. MIT is investing over a billion dollars
to reshape its academic program to “create a new college that combines AI, machine learning,
and data science with other academic disciplines.” The college started September 2019 and
will expand into an entirely new space in 2022.1 Even in areas where you’d not expect to find
a whiff of AI, it emerges: in the advertising campaign for its new fragrance called Y, Yves Saint
Laurent showcased a model who is a Stanford University graduate and a researcher in
machine vision.2 The commercial presents AI as hip and cool, even displaying lines of
Python code alongside striking good looks to sell a fragrance line. AI has truly achieved
mainstream appeal in a manner not seen before. AI is no longer associated with geeks and
nerds. AI now sells product.
The Incredible Journey of AI
GAVIN: The Hobbit, or There and Back Again by J. R. R. Tolkien tells of Bilbo Baggins’
incredible journey and how he brought his experience back home to tell his tale. That novel
opened the door to science fiction and fantasy for me.
BOB: Same for me. As I got older, science fiction became more real and approachable.
What was once fantasy is now tangible. Consider artificial intelligence. It has gone farther
and faster than I would have believed even a decade ago. And while AI did not encounter
dragons, wizards, and elves as in The Hobbit, AI did have perils and pitfalls on the journey.
GAVIN: Like Bilbo, the story of AI is a journey that carries lessons to be learned. I think
Tolkien’s story was not just about where the future can take you, but about not forgetting
what the past can teach us and how it can inform and improve what comes next. The Hobbit
was the prelude to an even bigger story that
became The Lord of the Rings. AI may indeed have a great future, but getting it right will
require some new thinking; this book is a UX researcher’s tale on AI.
The point: AI has a long history. Learning from mistakes made in the past can set up the AI
of today for success in the future.
The world, both inside and outside the tech industry, is abuzz with AI.
There must be more to AI than being a company’s newest cool thing and giving fodder to
marketers. The power and intrigue of AI lie in its promise to answer questions and make
human lives easier. But that potential depends on the technology actually working, because
when technology does not work, there are consequences.
Overhyped Failures Have Consequences
GAVIN: The excitement around AI is white-hot. As an example, in health care, Virginia
Rometty, former CEO of IBM, said that AI could usher in a medical “Golden Age.”3 AI is in the
news everywhere.
BOB: When one thinks of overhyped environments, I think of another Golden Age: the tulip
craze in 17th-century Holland. Investing in tulip bulbs became highly fashionable, sending
the market straight up. As the hype grew, a speculative bubble emerged where a single bulb
hit 10 times an average worker’s annual salary.4 Inevitably the market failed to sustain the
crazy prices and the bubble burst.
GAVIN: Like “tulip mania,” the hype around AI is high, if not “irrationally exuberant.” But
what may be surprising to many is that this is not the first time AI has been hyped up. The
boom years of AI in the late 1950s came to a crash in the decade that followed. Virtually all
funding for AI was cut, and it took another couple of decades for investment to resume.
BOB: This slash in funding for all things AI spanned a decade. As research began to move into
robotics, the accompanying hype around robots led to another crash in the 1980s. AI’s
history is long and has seen peaks and valleys. My hope is that amid today’s exuberance we
will remember the lessons of the past, so this new era of AI will see a more successful future.
The point: Failures have implications and have occurred more than once with AI. Learning
from mistakes made in the past can set up the AI of today for success in the future.
Artificial intelligence
The term “artificial intelligence” was coined by computer scientists John McCarthy,
Marvin Minsky, Nathaniel Rochester, and Claude Shannon in 1955. They defined AI as
“…making a machine behave in ways that would be called intelligent if a human were so
behaving.”5 Of course, this still leaves the definition of AI wide open to interpretation,
resting as it does on a subjective judgment of what counts as “intelligent” behavior.
(Needless to say, we know a lot of humans who we don’t think behave intelligently.) AI’s
definition remains elusive and
changeable.
The question “What is intelligence?” is outside the scope of this book, fraught as it is with
philosophical complications. But, in general, we would support a version of the original
definition of artificial intelligence. The domains contained within artificial intelligence all
share a common thread of automating tasks that might otherwise require humans to exercise
their intelligence.
There are alternative definitions, such as the one offered by computer scientist Roger
Schank. In 1991, Schank laid out four possible definitions for AI as:
1. Technology that can divine insights with no direction from humans;
2. “Inference engines” that can be fed information about any particular field and calculate proper courses of action;
3. Any technology that does something that has never been done by technology before; and
4. Any machine capable of learning.
We see these as four different ways of defining “intelligence.” Schank endorses the fourth
definition, thereby endorsing the idea that learning is a necessary part of intelligence.
For the purposes of this book, we will not be using Schank’s definition of AI—or anyone else’s.
Doing so would require us to redefine past AI systems, and even some present AI systems,
as outside the realm of AI, which we do not intend to do. Machine learning is often a central
part of AI, but it is not a requirement. Plenty of AI systems aren’t great at learning on their
own, but they can still accomplish tasks that many would consider intelligent. In this book, we want
to discuss as many applications of AI as possible, whether they are capable of learning or not.
So, we will define artificial intelligence in the broadest way possible.
Definition
Artificial intelligence, or AI, has a meaning that is much contested. For our purposes,
artificial intelligence is any technology that appears to adapt its knowledge or learn from
experience in a way that would be considered intelligent.