Introduction
If the seventeenth and early eighteenth centuries are the age of clocks, and the later eighteenth
and the nineteenth centuries constitute the age of steam engines, the present time is the age of
communication and control.
Norbert Wiener (from the 1948 edition of Cybernetics: or Control and Communication in the
Animal and the Machine).
It is unfortunate that we don't remember the exact date of the extraordinary event we are about
to describe, except that it took place sometime in the fall of 1994, when Professor Noah Prywes
of the University of Pennsylvania gave a memorable invited talk at Bell Labs, at which two
authors1 of this book were present. The main point of the talk was a proposal that AT&T (of which
Bell Labs was a part at the time) should go into the business of providing computing services—in
addition to telecommunications services—to other companies by actually running these
companies' data centers. “All they need is just to plug in their terminals so that they receive IT
services as a utility. They would pay anything to get rid of the headaches and costs of operating
their own machines, upgrading software, and what not.”
Professor Prywes, whom we will meet more than once in this book, was well known in Bell Labs as a
software visionary and, more than that, as the founder and CEO of a successful software
company, Computer Command and Control. Even so, he was suggesting something that appeared too
extravagant even to the researchers. The core business of AT&T at that time was
telecommunications services. The major enterprise customers of AT&T were buying customer
premises equipment (such as private branch exchange switches and machines that ran software in
support of call centers). In other words, enterprises were buying equipment to run on their own
premises rather than outsourcing computing to the network provider!
Most attendees saw the merit of the idea, but could not immediately relate it to their day-to-day
work, or—more importantly—to the company's stated business plan. Furthermore, at that very
moment the Bell Labs computing environment was migrating from the Unix programming
environment hosted on mainframes and Sun workstations to Microsoft Office-powered personal
computers. It is not that we, who “grew up” with the Unix operating system, liked the change, but
we were told that this was the way the industry was going (and it was!) as far as office information
technology was concerned. But if so, then the enterprise would be going in exactly
the opposite direction: placing computing in the hands of each employee. Professor Prywes did not
deny the pace of acceptance of personal computing; his argument was that there was much more
to enterprises than what was occurring inside their individual workstations—payroll databases, for
example.
There was a lively discussion, which quickly turned to the details. Professor Prywes cited the
achievements in virtualization and massively parallel processing technologies, which he argued
were sufficient to enable his vision. These arguments were compelling, but ultimately the core
business of AT&T was networking, and networking was centered on telecommunications services.
Still, telecommunications services were provided by software, and even the telephone switches
were but peripheral devices controlled by computers. It was in the 1990s that virtual
telecommunications networking services such as Software Defined Networks (not to be confused
with the namesake development in data networking, which we will cover in Chapter 4) were
emerging on a purely software-based data communications platform called the Intelligent Network.
It is on the basis of the latter that Professor Prywes thought computing services could be offered.
In summary, the idea was to combine data communications with centralized powerful computing
centers, all under the central command and control of a major telecommunications company. All
of us in the audience were intrigued.
The idea of computing as a public utility was not new. It had been outlined by Douglas F. Parkhill
in his 1966 book [1].
In the end, however, none of us could sell the idea to senior management. The times the
telecommunications industry was going through in 1994 could best be characterized as
“interesting,” and AT&T did not fare particularly well for a number of reasons.2 Even though Bell
Labs was at the forefront of the development of all the relevant technologies, recommending them to
businesses was a different matter, especially when the proposal entailed a radical change of
business model, and especially in turbulent times.
Within about a year, AT&T announced its trivestiture. The two authors moved, along with a large
part of Bell Labs, into the equipment manufacturing company that became Lucent Technologies
and, 10 years later, merged with Alcatel to form Alcatel-Lucent.
At about the same time, Amazon launched a service called Elastic Compute Cloud (EC2), which
delivered pretty much what Professor Prywes had described to us. Here an enterprise user—located
anywhere in the world—could create, for a charge, virtual machines in the “Cloud” (or, to be more
precise, in one of the Amazon data centers) and deploy any software on these machines. Not only
that: the machines were elastic. As the user's demand for computing power grew, so did the
machine power, magically increasing to meet the demand, along with the corresponding cost; when
the demand dropped, so did the computing power delivered, and with it the cost. Hence, the
enterprise did not need to invest in purchasing and maintaining computers; it paid only for the
computing power it received and could get as much of it as necessary!
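The pay-as-you-go arithmetic behind this elasticity can be sketched in a few lines of Python. Everything below (the capacity of a virtual machine, the hypothetical price, and the function names) is invented for illustration and does not reflect Amazon's actual API or pricing:

```python
# Toy model of elastic computing: capacity tracks demand hour by hour,
# and the customer is billed only for the capacity actually provisioned.
# VM size and price are hypothetical, not Amazon's actual figures.

UNITS_PER_VM = 10            # compute units one virtual machine provides
PRICE_CENTS_PER_VM_HOUR = 5  # hypothetical price of one VM for one hour

def vms_needed(demand_units: int) -> int:
    """Smallest number of VMs whose combined capacity covers the demand."""
    return -(-demand_units // UNITS_PER_VM)  # ceiling division

def total_bill_cents(hourly_demand) -> int:
    """Scale up and down with demand; charge only for VM-hours used."""
    return sum(vms_needed(d) * PRICE_CENTS_PER_VM_HOUR for d in hourly_demand)

# Demand spikes in the middle of the day and falls off again.
demand = [5, 25, 80, 30, 5]      # compute units requested in each hour
print(total_bill_cents(demand))  # 16 VM-hours at 5 cents each -> prints 80
```

A fixed installation sized for the peak would need 8 VMs for all 5 hours (40 VM-hours, or 200 cents in this toy model); elasticity cuts the bill to 80, which is precisely the economic argument Professor Prywes was making.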
As a philosophical aside: one way to look at the computing development is through the prism of
dialectics. As depicted in Figure 1.1(a), with mainframe-based computing as the thesis, the
industry had moved to personal-workstation-based computing—the antithesis. But the spiral
development—fostered by advances in data networking, distributed processing, and software
automation—brought forth the Cloud as the synthesis, where the convenience of seemingly central
on-demand computing is combined with the autonomy of a user's computing environment.
Another spiral (described in detail in Chapter 2) is depicted in Figure 1.1(b), which demonstrates
how the Public Cloud has become the antithesis to the thesis of the traditional IT data center,
inviting the outsourcing of development (via "Shadow IT" and the Virtual Private Cloud). The
synthesis is the Private Cloud, in which the Cloud has moved computing back to the enterprise,
but in a very novel form.
Figure 1.1 Dialectics in the development of Cloud Computing: (a) from mainframe to Cloud; (b)
from IT data center to Private Cloud.
At this point we are ready to introduce formal definitions, which have been agreed on universally
and thus form a standard in themselves. The definitions have been developed at the National
Institute of Standards and Technology (NIST) and published in [2]. To begin with, Cloud
Computing is defined as a model “for enabling ubiquitous, convenient, on-demand network access
to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications,
and services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction.” This Cloud model is composed of five essential characteristics, three
service models, and four deployment models.
The five essential characteristics are presented in Figure 1.2.