We live in curious times. Of the nearly eight billion humans who live on Earth, for the first time
in history, the majority are literate—that is, able to communicate with other humans
asynchronously, with reasonably accurate mutual understanding.
But human expression goes beyond language. Design and art reflect that which might not be so
succinctly defined. The unspoken behavioral patterns of the world, writ large, are reflected in
excellent design. The emotions and social patterns that direct our unconscious brains are laid bare
in art: sculpture, dance, paintings, and music. But until the digital era, these areas of human
expression have been, in the end, always tied to physical constraints: physics, real materials, and
time.
Computers are, in essence, our attempt to express ourselves with pure energy—light and sound
beaming into eyes and ears, haptics buzzing, inputs manipulated any way we please. But, to date,
much like design and art, computers themselves have been restricted to very real-world limitations;
they are physics-bound glass windows beyond which we can see digital worlds, but into which we
cannot go. Instead, we take computers with us, making them lighter, faster, brighter.
In 2019, we find ourselves in another curious position: because we have made computers more
mobile, we are finally able to move our digital worlds into the real world. At first glance, this
seems a relatively easy move. It’s pleasant to think that we can simply interact with our computers
in a way that feels real and natural and mimics what we already know.
On second glance, we realize that much of how we interact with the real world is tedious and
inconvenient. And on third glance, we realize that although humans have a shared understanding
of the world, computers know nothing about it. Even though human literacy rates have increased,
we find ourselves with a new set of objects to teach all over again.
In this part, we review several of the puzzles involved in moving computers out of two dimensions
into real spatial computing. In Chapter 1, Timoni West covers the history of human–computer
interaction and how we got to where we are today. She then talks about exactly where we are today,
both for human input and computer understanding of the world.
In Chapter 2, Silka Miesnieks, Adobe’s Head of Emerging Design, talks about the contexts in
which we view design for various realities: how to bridge the gap between how we think we should
interact with computers and real shared sensory design. She delves into human variables that we
need to take into account and how machine learning will play into improving spatial computing.
There is much we don’t cover in these chapters: specific best practices for standards like world-
scale, or button mappings, or design systems. Frankly, it’s because we expect them to be outdated
by the time this book is published. We don’t want to canonize that which might be tied to a set of
buttons or inputs that might not even exist in five years. Although there might be historical merit
to recording it, that is not the point of these chapters.
The writers here reflect on the larger design task of moving human expression from the purely
physical realm to the digital. We acknowledge all the fallibilities, errors, and misunderstandings
that might come along the way. We believe the effort is worth it and that, in the end, our goal is
better human communication—a command of our own consciousnesses that becomes yet another,
more visceral and potent form of literacy.
Chapter 1. How Humans Interact with Computers
Timoni West
In this chapter, we explore the following:
- Background on the history of human–computer modalities
- A description of common modalities and their pros and cons
- The cycles of feedback between humans and computers
- Mapping modalities to current industry inputs
- A holistic view of the feedback cycle of good immersive design
Common Term Definitions
I use the following terms in these specific ways that assume a human-perceivable element:
Modality
A channel of sensory input and output between a computer and a human
Affordances
Attributes or characteristics of an object that define that object’s potential uses
Inputs
How you do those things; the data sent to the computer
Outputs
A perceivable reaction to an event; the data sent from the computer
Feedback
A type of output; a confirmation that what you did was noticed and acted on by the other
party
Introduction
In the game Twenty Questions, your goal is to guess what object another person is thinking of.
You can ask anything you want, and the other person must answer truthfully; the catch is that they
answer questions using only one of two options: yes or no.
Through a series of happenstance and interpolation, the way we communicate with conventional
computers is very similar to Twenty Questions. Computers speak in binary, ones and zeroes, but
humans do not. Computers have no inherent sense of the world or, indeed, of anything beyond
binary—or, in the case of quantum computers, probabilities.
Because of this, we communicate everything to computers, from concepts to inputs, through
increasing levels of human-friendly abstraction that cover up the basic communication layer: ones
and zeroes, or yes and no.
Thus, much of the work of computing today is determining how to get humans to easily and simply
explain increasingly complex ideas to computers. In turn, humans are also working toward having
computers process those ideas more quickly by building those abstraction layers on top of the ones
and zeroes. It is a cycle of input and output, affordances and feedback, across modalities. The
abstraction layers can take many forms: the metaphors of a graphical user interface, the spoken
words of natural language processing (NLP), the object recognition of computer vision, and, most
simply and commonly, the everyday inputs of keyboard and pointer, which most humans use to
interact with computers on a daily basis.
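To make the Twenty Questions analogy concrete, here is a toy sketch (not from the chapter; the function name and framing are illustrative) showing that a series of truthful yes/no answers is literally a string of bits: guessing a number between 0 and 2^n − 1 takes n yes/no questions, and the answers spell out the number in binary.

```python
# Toy illustration: Twenty Questions as binary communication.
# Each "is bit k set?" question gets a truthful yes (1) or no (0),
# and the sequence of answers is the number's binary representation.

def twenty_questions(secret: int, n_bits: int) -> list[int]:
    """Ask about each bit from most to least significant; return the answers."""
    answers = []
    for k in reversed(range(n_bits)):
        yes = (secret >> k) & 1  # truthful yes/no answer for bit k
        answers.append(yes)
    return answers

answers = twenty_questions(secret=13, n_bits=4)
print(answers)  # [1, 1, 0, 1] — the bits of 13 (binary 1101)
print(int("".join(map(str, answers)), 2))  # reassembling the answers recovers 13
```

Every abstraction layer the chapter describes—GUI metaphors, NLP, computer vision—ultimately compiles human intent down to exchanges like this one.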
Modalities Through the Ages: Pre-Twentieth
Century
To begin, let’s briefly discuss how humans have traditionally given instructions to machines. The
earliest proto-computing machines, programmable weaving looms, famously “read” punch
cards. Joseph Jacquard created what was, in effect, one of the first pieces of true mechanical art, a
portrait of himself, using punch cards in 1839 (Figure 1-1). Around the same time in
Russia, Semyon Korsakov had realized that punch cards could be used to store and compare
datasets.