
Future Grid

The Computing Landscape of The 21st Century, All On One Canvas

The creation of the Periodic Table in the late 19th and early 20th centuries was an exquisite intellectual feat. In a small and simple data structure, it organises our knowledge about all the elements in our universe. The position of an element in the table immediately suggests its physical attributes and its chemical affinities to other elements. The presence of “holes” led to the search and discovery of previously unknown elements with predicted properties. This simple data structure has withstood the test of time. As new synthetic elements were created, they could all be accommodated within its framework. The quest to understand the basis of order in this table led to major discoveries in physics and chemistry. What this history teaches us is that there is high value in distilling and codifying taxonomical knowledge into a compact form.

Today, we face a computing landscape of high complexity reminiscent of the scientific landscape of the late 19th century. Is there a way to organise our computing universe into a simple, compact framework with explanatory power and predictive value? What is our analogue of the periodic table? Here, we describe our initial effort at such an intellectual distillation. The periodic table took multiple decades and the contributions of many researchers to evolve into the familiar form we know today. We offer this, therefore, only as the beginning of an important conversation.

Today’s computing landscape is best understood through a tiered model (see graphic). Each tier represents a distinct, stable set of design constraints. Many alternative implementations of hardware and software exist at each tier, but all are subject to that tier’s constraints. Within each tier there is considerable churn at timescales of up to a few years, driven by technical progress as well as by the market. But the relationships between tiers, including on the key variable of energy input, are stable over decade-long timescales, and can be used both to interpret the past six decades of computing and to anticipate its future.


Tier 1: The Cloud

Two dominant themes here are elasticity and storage permanence. Cloud computing has almost unlimited elasticity: a Tier 1 datacenter can easily adapt its resources to workload changes, spinning up servers to meet peak demand and dialling them down when demand ebbs. In terms of archival preservation, the cloud is the safest place to store data with confidence that it can be retrieved far into the future; complex storage virtualisation systems and practices like data backup and disaster recovery ensure that. Relative to the cloud, the other tiers have both very limited elasticity and more tenuous data safety. The cloud also exploits economies of scale to offer very low total costs of computing.
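The elasticity idea can be sketched as a simple scaling rule: provision just enough servers to track demand. The per-server capacity, target utilisation and workload trace below are illustrative assumptions, not figures from any real provider’s autoscaler.

```python
import math

def servers_needed(load_rps, capacity_rps=100, target_util=0.8):
    """Servers required to keep per-server utilisation below target_util.

    capacity_rps (requests/sec one server can absorb) and target_util
    are illustrative assumptions.
    """
    return max(1, math.ceil(load_rps / (capacity_rps * target_util)))

# A toy workload trace: demand spikes, then ebbs.
trace = [50, 400, 1200, 300, 80]            # requests/sec over time
fleet = [servers_needed(r) for r in trace]
print(fleet)  # -> [1, 5, 15, 4, 1]: the fleet grows to the peak, then shrinks
```

The point of the sketch is the shape of the curve, not the numbers: only a datacenter with a deep pool of spare machines can follow the spike to 15 servers and then release them.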

Tier 3: Mobile devices, the Internet of Things (IoT)

Why not Tier 2 first? Because, to get there, we need to understand Tier 3 first. Mobility is a defining attribute here: it places stringent constraints on the weight, size and heat dissipation of devices that a user carries or wears. Such a device cannot be too large, too heavy or run too hot. Battery life is a crucial design constraint. Technological breakthroughs (say, a new battery technology, or a new lightweight, flexible material) may expand the envelope of designs, but the underlying constraints remain.


Sensing is another defining attribute. Today’s mobile devices are rich in sensors: GPS, microphones, accelerometers, gyroscopes, video cameras. But a mobile device may not be powerful enough to perform real-time analysis of the data captured by its on-board sensors. There is always a large gap between what it can do and what is feasible on a server of the same technological era. Take a Palm Pilot PDA and a Pentium II processor, both of 1997: the first ran at 16 MHz, the second at 266 MHz. The same asymmetry obtained between a Google Pixel 2 and an Intel Xeon of 2017: 9.4 GHz versus 96 GHz in aggregate clock speed across cores. Think of this stubborn gap as a “mobility penalty”: the price you pay in performance foregone due to mobility constraints. To overcome it, a mobile device offloads its heavy computational labour over a wireless network to the cloud. That is how speech recognition and natural language processing in iOS and Android work. IoT devices, like smart home appliances with embedded sensors, belong in Tier 3. They may not be mobile, but there is a strong incentive for them to be inexpensive; hence, meagre processing capability, and offloading.
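The offloading decision described above amounts to a break-even test: ship the work to a server only when remote compute plus network transfer beats local execution. A minimal sketch, in which the cycle counts, data size, bandwidth and round-trip time are illustrative assumptions (only the 9.4 GHz and 96 GHz aggregate speeds come from the text):

```python
def should_offload(cycles, data_bytes, local_hz, server_hz, bw_bps, rtt_s):
    """Offload iff remote execution (compute + transfer + round trip)
    finishes sooner than running the job locally."""
    t_local = cycles / local_hz
    t_remote = cycles / server_hz + data_bytes * 8 / bw_bps + rtt_s
    return t_remote < t_local

# A heavy job (think speech recognition) on 2017-era hardware from the text:
# Pixel 2 at 9.4 GHz aggregate vs a Xeon at 96 GHz, over an assumed 50 Mbps link.
print(should_offload(50e9, 1_000_000, 9.4e9, 96e9, 50e6, 0.05))   # -> True
# A tiny job is not worth the network cost:
print(should_offload(1e6, 1_000_000, 9.4e9, 96e9, 50e6, 0.05))    # -> False
```

The asymmetry in the model mirrors the mobility penalty: the larger the gap between `local_hz` and `server_hz`, the more work it pays to ship over the network.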


Tier 2: Cloudlets

This is a new but critical piece of architecture: the essence of edge computing. ‘Edge’ here refers, literally, to proximity, so this is computing done as close as possible to the source of the data. Note, the key here is network proximity, not physical proximity. Instead of depending on a few large centralised datacenters (the cloud), edge computing relies on many small, dispersed cloudlets, or ‘micro-clouds’. These compact datacenters (think of them as “a datacenter in a box”) are what enable immersive technology like augmented reality, cognitive assistance systems like Google Glass (head-mounted smart glasses, with uses in everything from surgery to neural conditions like autism), as well as cyber-physical systems like drones. These are resource-intensive, interactive tasks that need speed. So they depend vitally on low latency, which is the time required for a packet of data to travel round trip between two points.

A cloud achieves its economies of scale by consolidation into a few very large datacenters, but that extreme consolidation has two negative consequences. First, it tends to lengthen network round-trip times from device to cloud and back; the closest cloud datacenter is likely to be far away from most places. Second, the massive flow of inputs from mobile devices calls for high ingress bandwidth at the datacenters. These two factors tend to stifle the emergence of new classes of real-time, sensor-rich, compute-intensive applications.

Enter cloudlets, and decentralised computing. Their very low latency helps preserve the tight response-time bounds needed for, say, augmented reality or drone control. Speedy response and high end-to-end bandwidth are achievable via a fibre link with a cloudlet many tens or even hundreds of kilometres away. (A highly congested WiFi network would not do, even if the cloudlet were close by.) Decentralisation also precludes the need for excessive bandwidth demand anywhere in the system.
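The latency argument can be made concrete with a toy backend selector: given measured round-trip times, pick the nearest site that fits the application’s response deadline. The RTT figures and the 16 ms frame budget below are illustrative assumptions.

```python
def pick_backend(rtts_ms, deadline_ms):
    """Return the lowest-latency backend that meets the deadline, else None."""
    viable = [(rtt, name) for name, rtt in rtts_ms.items() if rtt <= deadline_ms]
    return min(viable)[1] if viable else None

# Assumed round-trip times from a mobile device to each candidate site.
rtts = {"cloudlet": 8, "regional cloud": 45, "distant cloud": 120}   # ms

print(pick_backend(rtts, deadline_ms=16))    # AR-style frame budget -> cloudlet
print(pick_backend(rtts, deadline_ms=4))     # nothing qualifies -> None
```

For an augmented-reality frame budget of roughly 16 ms, only the cloudlet qualifies; the distant cloud cannot meet the bound no matter how powerful its servers are, which is the whole case for Tier 2.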


Tier 4: Smart dust, RFID tags…

A key driver of mobile devices is the vision of embedded sensing, in which tiny sensing-computing-communication platforms continuously report on their environment. “Smart dust” is the extreme limit of this vision. The challenge of charging or replacing the batteries of Tier 3 devices has led to the emergence of devices that contain no chemical energy source. Instead, they harvest incident electromagnetic energy (e.g., visible light or radio frequency, RF) to charge a capacitor, which then powers a brief episode of sensing, computation and wireless transmission. The device then remains passive until the next episode. This modality, intermittent computing, combines longevity of deployment with opportunism in energy harvesting. The most successful Tier 4 devices today are RFID tags, which harvest the energy provided by an RFID reader, a Tier 3 device. Immersive proximity is the defining relationship here: the two have to be physically close enough for this symbiotic transfer to happen. More sophisticated devices are on the way, such as a robotic flying insect powered solely by an incident laser beam.
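The harvest-then-burst duty cycle described above can be sketched as a capacitor model: energy trickles in, and each time the store crosses a threshold the device fires one sense-compute-transmit episode. All energy figures are illustrative assumptions in arbitrary units.

```python
def intermittent_run(harvest_trace, threshold=1.0, episode_cost=1.0):
    """Count the sense-compute-transmit episodes a harvesting device completes.

    harvest_trace: energy harvested per time step (arbitrary units).
    The device stays passive until the capacitor reaches `threshold`,
    then spends `episode_cost` on one brief burst of activity.
    """
    stored, episodes = 0.0, 0
    for harvested in harvest_trace:
        stored += harvested
        if stored >= threshold:
            stored -= episode_cost
            episodes += 1
    return episodes

# Sporadic harvesting, e.g. an RFID tag drifting in and out of reader range:
print(intermittent_run([0.3, 0.5, 0.4, 0.0, 0.9, 0.2]))   # -> 2
```

The trace makes the “opportunism” point: the device computes exactly as often as the environment funds it, and simply waits out the dry spells.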


Together, the four tiers offer a canonical representation of the components in a modern distributed system. Not every distributed system will have all four. A team of users playing Pokémon Go will only use smartphones (Tier 3) and a cloud server (Tier 1). A worker taking inventory in a warehouse will use an RFID reader (Tier 3) and passive RFID tags (Tier 4) embedded in the objects being inventoried. More complex versions of that system may allow multiple workers to operate concurrently, using a cloudlet (Tier 2) or the cloud (Tier 1) to aggregate their data and eliminate duplication.

Each tier embodies a small set of salient properties that shape both hardware and software designs. For example, hardware at Tier 3 is expected to be small, lightweight, mobile, sensor-rich and energy-efficient, with a small thermal footprint. A product that does not meet these broad imperatives will simply fail in the marketplace. Constraints thus serve as a valuable discipline in system design. Comparing designs through this lens, what we see are deep structural similarities flowing beneath the incidental differences owing to compatibility, efficiency, usability, brand aesthetics and the like.


One compelling theme that emerges is the central role of energy, and the need to optimise this vital resource. The power concerns at different tiers span many orders of magnitude, from tens of megawatts (an exascale datacenter) to a few nanowatts (a passive RFID tag). Energy harvesting in Tier 4 presents unique challenges, such as sporadic power limited to 10⁻⁷ to 10⁻⁸ watts when drawn from, e.g., RF or biological sources. Emerging wireless backscatter networking enables communication at extremely low power. Intermittent computing allows sensing and complex processing on this scarce energy; the uses range from a new breed of sensors and actuators in the human body that monitor health signals, to civil infrastructure, to outer space. And RF ‘beamforming’ extends the capability of batteryless, networked, in-vivo devices.
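The span of those power figures is worth making explicit: from tens of megawatts down to a few nanowatts is roughly sixteen orders of magnitude. A quick check, in which the smartphone figure is an illustrative assumption while the other two come from the text:

```python
import math

tiers_watts = {
    "Tier 1 exascale datacenter": 20e6,   # tens of megawatts (from the text)
    "Tier 3 smartphone":          5.0,    # illustrative assumption
    "Tier 4 passive RFID tag":    5e-9,   # a few nanowatts (from the text)
}

span = max(tiers_watts.values()) / min(tiers_watts.values())
print(f"power span: ~10^{round(math.log10(span))}")   # -> power span: ~10^16
```

No other engineered system family covers anything like this range, which is why energy, alongside space and time, is treated here as a driving force of the tiered model.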

Future Evolution

If we map this across time, we see that in the beginning there was only Tier 1. The mainframes of the late 1950s and 1960s represented consolidation in its extreme form. In that primitive world, there were no representatives of Tiers 2, 3 or 4. And what will our future computing landscape look like? We think much of it will be inspired by, rendered in, and an extension of biology; the 1980s-vintage concept of neuromorphic computing is seeing a resurgence with neural machine learning. While analogies to biological behaviour abound, spanning circuits, architectures, software and algorithms, computer system behaviour is rarely biological in its efficiency. Visualise being able to render molecular-scale data storage and processing structures directly in biological substrates, such as engineered DNA. Future computing systems will extend biology with the mechanical capabilities of micro- and nano-robotics. Tier boundaries are also likely to blur, with new technology enabling a continuum of devices and energy harvesting becoming smarter. Quantum computing sits outside this framework as an outlier.

The four tiers, as we can see, are not preordained. Rather, starting with a single tier, they have evolved over time in response to technical innovations and expanding goals—with space, time and energy as the driving forces. These themes will continue to shape computing long after today’s technology is obsolete. Like the periodic table, this model distils a vast space of possibilities into a compact intellectual framework. But the analogy should not be overdrawn: the basis of order in the two worlds is very different. The periodic table exposes order in a “closed-source” system (nature) and here we deal with structure in an “open-source” world of system components created by humans.

(Mahadev Satyanarayanan, Professor of Computer Science at Carnegie Mellon University, is on the frontier of his field, and is credited with many of the advances in edge computing, distributed systems, mobile computing and the Internet of Things. An earlier version of this paper, co-authored with Wei Gao and Brandon Lucia, was published in the proceedings of the ACM HotMobile 2019 workshop.)
