This is a new but critical piece of architecture: the essence of edge computing. ‘Edge’ here refers, literally, to proximity, so this is computing done as close as possible to the source of the data. Note that the key here is network proximity, not physical proximity. Basically, instead of depending on large centralised datacenters (the cloud), this relies on a network of small, dispersed cloudlets, or ‘micro-clouds’, serving nearby mobile devices. These small, dispersed datacenters, think of them as “a datacenter in a box”, are what enable immersive technology like augmented reality, cognitive assistance systems like Google Glass (head-mounted smart glasses, with uses in everything from surgery to developmental conditions like autism), as well as cyber-physical systems like drones. These are resource-intensive, interactive tasks that need speed. So they depend vitally on low latency, which is the time a packet of data takes to make a round trip between two points.

Now, a cloud achieves its economies of scale by consolidation into a few very large datacenters, but that extreme consolidation has two negative consequences. First, it tends to lengthen network round-trip times from device to cloud and back: the closest cloud is likely to be far away from most places. Second, the massive flow of inputs from mobile devices calls for high ingress bandwidth at the datacenters. These two factors tend to stifle the emergence of new classes of real-time, sensor-rich, compute-intensive applications.

Enter cloudlets, and decentralised computing. Their very low latency helps preserve the tight response-time bounds needed for, say, augmented reality or drone control. Speedy response and high end-to-end bandwidth are achievable via a fibre link with a cloudlet many tens or even hundreds of kilometres away. (A highly congested WiFi network wouldn’t do it even if the cloudlet were close by.) Decentralisation also avoids concentrating bandwidth demand at any single point in the system.
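The fibre-distance claim can be checked with back-of-the-envelope arithmetic. The sketch below assumes light travels through optical fibre at roughly two-thirds of its vacuum speed, about 200,000 km/s, and uses an illustrative 20 ms interactivity budget for augmented reality; both figures are assumptions for illustration, not numbers from the article, and real round-trip times also include switching, queueing, and processing delays on top of propagation.

```python
# Round-trip propagation delay over optical fibre, as a function of distance.
# Assumption: signals in fibre propagate at ~200,000 km/s (about 2/3 of c),
# i.e. roughly 5 microseconds per kilometre each way.

FIBRE_SPEED_KM_PER_S = 200_000  # assumed propagation speed in fibre

def fibre_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay, in milliseconds, over distance_km of fibre."""
    one_way_s = distance_km / FIBRE_SPEED_KM_PER_S
    return 2 * one_way_s * 1000  # there and back, converted to ms

AR_BUDGET_MS = 20  # illustrative end-to-end budget for interactive AR

for km in (10, 100, 500, 3000):
    rtt = fibre_rtt_ms(km)
    verdict = "within" if rtt < AR_BUDGET_MS else "exceeds"
    print(f"{km:>5} km of fibre: ~{rtt:.2f} ms RTT ({verdict} a {AR_BUDGET_MS} ms budget)")
```

Even a cloudlet 500 km away adds only about 5 ms of round-trip propagation delay, comfortably inside the assumed budget, whereas a distant continental-scale datacenter eats most or all of it before any computation happens.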