In September 1991, Scientific American published an article by Mark Weiser, the then head of Xerox’s PARC computer science laboratory, entitled ‘The Computer for the 21st Century’. In that piece he discussed his ideas for a computing infrastructure that would disappear into the background, calling it “ubiquitous computing”.
Weiser’s ubiquitous computing future was one where technology was everywhere, and where we interacted with it through what he called ‘tabs’, ‘pads’, and ‘walls’.
He described it as follows: “Ubiquitous computing names the third wave in computing, just now beginning. First were mainframes, each shared by lots of people. Now we are in the personal computing era, person and machine staring uneasily at each other across the desktop. Next comes ubiquitous computing, or the age of calm technology, when technology recedes into the background of our lives.”
That third wave of computing is one we’re living through, with smartphones his tabs, tablets and touch-enabled PCs his pads, and giant flat screens his walls. But it’s not a particularly calm world, with all our devices demanding attention and forcing us into an unending set of interactions with our screens.
Ignorance is bliss (when it comes to computers)
Instead, we’re being drawn to another wave, one that takes Weiser’s ubiquitous computing vision and mixes it with the Internet of Things (IoT), machine learning, and the hyperscale compute cloud to deliver what’s being called ‘ambient computing’. As an alternative to traditional computing models, ambient computing takes its cue from musician Brian Eno, who in coining the term ‘ambient music’ for his slow compositions, described it as something that “must be as ignorable as it is interesting.”
Ambient computing is ignorable computing. It’s there, but it’s in the background, doing the job we’ve built it to do. One definition is a computer you use without knowing that you’re using it. That’s close to Eno’s definition of his music — ignorable and interesting.
A lot of what we do with smart speakers is an introduction to ambient computing. It’s not the complete ambient experience, as it relies on only your voice. But you’re using a computer without sitting down at a keyboard, talking into thin air. Things get more interesting when that smart speaker becomes the interface to a smart home, where it can respond to queries and drive actions, turning on lights or changing the temperature in a room.
But what if that speaker wasn’t there at all, with control coming from a smart home that takes advantage of sensors to operate without any conscious interaction on your part? You walk into a room and the lights come on, because one set of sensors detects your presence and another indicates that the current light level in the room is lower than your preference. Maybe the sun has set, maybe it’s raining; what’s important is that the system has delivered your chosen response without any interaction on your part.
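That presence-plus-light-level rule can be sketched in a few lines. This is a minimal illustration, not any particular platform’s API: the sensor readings and thresholds are hypothetical placeholders for whatever your smart home system exposes.

```python
def should_turn_on_lights(presence_detected: bool, lux_level: float,
                          preferred_lux: float) -> bool:
    """Lights come on only when someone is present AND the room is
    darker than their preferred light level."""
    return presence_detected and lux_level < preferred_lux


# Whether it's sunset or a rainy afternoon doesn't matter: the rule
# only looks at the measured light level, not the cause.
print(should_turn_on_lights(True, 80.0, 150.0))   # present, dim room
print(should_turn_on_lights(True, 400.0, 150.0))  # present, bright room
```

The point is that the cause of the low light never enters into it; the system responds to the measured state of the room, not to the weather.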
Living with ambient computing
With ambient computing, any interaction has to be by choice, driven by the user rather than the system. Most operations happen in the background, driven by rule engines and machine learning. The heating controllers in my home are an excellent example of an ambient computing platform. Like most European homes, mine uses hot water radiators and a central boiler. As well as a central thermostat, each radiator has its own thermostatic valve. These used to be simple wax motors that opened and closed the valve at approximate temperatures: a ‘4’ on one radiator would mean much the same as a ‘4’ on another.
The ambient computing system that runs them now has separate IoT-controlled valves that can treat each room as a separate zone, combining temperature sensors with actuators that drive the radiator valves and wireless connections to the central controller. While these are used to manage temperature at a room level, they’re only part of a much more complex system. Once turned on, the system as a whole spent the first month of operation building a thermal model of the house, learning how much heat needs to be put into each zone to reach and maintain the target temperature.
All I’ve needed to do is define what the system targets are, and now it runs free, turning on the boiler when necessary and adjusting the valves to ensure that each zone is correctly heated. I can check an app to see if everything is working the way I intend, changing targets as necessary. There are no alerts, no unwanted interactions. All that matters is that the rooms are as warm as they need to be, when they need to be. The complexity of the system is hidden, with a cloud-trained, machine-learning model running on more constrained hardware in my home.
What’s more important is that the model is also tied to external conditions: it’s trained on the house’s response to the weather as well as to internal heat sources, and linked to a small digital weather station on my roof. If it’s not going to be particularly cold outside, the system won’t run the heating for as long, because the house will take longer to cool down.
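The zone logic described above can be sketched roughly as follows. A real system like mine uses a learned thermal model; here a simple heat-loss coefficient stands in for it, and every name and number is illustrative rather than taken from any actual product.

```python
from dataclasses import dataclass


@dataclass
class Zone:
    name: str
    current_temp: float   # deg C, from the zone's sensor
    target_temp: float    # deg C, the user-defined setpoint


def boiler_needed(zones: list, hysteresis: float = 0.5) -> bool:
    """Fire the boiler if any zone has drifted below its target."""
    return any(z.current_temp < z.target_temp - hysteresis for z in zones)


def heating_minutes(zone: Zone, outside_temp: float,
                    gain_per_minute: float = 0.05,
                    loss_coefficient: float = 0.002) -> float:
    """Estimate run time: the warmer it is outside, the more slowly the
    house loses heat, so less heating is needed to hold the target."""
    deficit = max(zone.target_temp - zone.current_temp, 0.0)
    net_loss = loss_coefficient * max(zone.target_temp - outside_temp, 0.0)
    return deficit / max(gain_per_minute - net_loss, 1e-6)


zones = [Zone("living room", 18.0, 21.0), Zone("office", 20.8, 21.0)]
print(boiler_needed(zones))  # living room is well below target
```

On a mild day the estimated run time for the same zone is shorter than on a freezing one, which is exactly the behaviour the weather-station link delivers.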
Ambient computing provides an intelligent way of working with sensors and actuators, building on their connections and the flexible computing power of the cloud. It’s a way of building smart controllers that can deliver more than the relatively simple hardware they’re using. Home automation is a logical early adopter of ambient computing technologies, but there are many more options, in industry, transport, and in the environment.
Colors and light, movement and shape: the ambient interface
The other key aspect of ambient computing is how it delivers information to us. Instead of complex screens full of information, an ambient interface might be a shade of blue, changing its color as the weather changes or as a stock price moves. You can think of it as the electronic equivalent of the old analogue dials and lights, or a car’s dashboard: something you can glance at to understand what’s happening and decide whether you need more information.
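Reducing a signal to a single shade is a small amount of code. Here’s a minimal sketch that maps one bounded number onto a blue-to-red gradient; the choice of signal (a temperature, a stock price) and the color range are assumptions, not a standard.

```python
def value_to_color(value: float, low: float, high: float) -> str:
    """Map a value in [low, high] to a hex color from blue (low)
    to red (high), clamping values outside the range."""
    t = min(max((value - low) / (high - low), 0.0), 1.0)
    red = int(255 * t)
    blue = int(255 * (1 - t))
    return f"#{red:02x}00{blue:02x}"


# A cold day renders as a mostly blue shade; a hot one as red.
print(value_to_color(5.0, 0.0, 30.0))
print(value_to_color(28.0, 0.0, 30.0))
```

A lamp set to that color is the whole interface: one glance tells you which end of the range you’re at, and nothing more.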
One of the first popular ambient computing devices was the Nabaztag, a rabbit-shaped internet-connected device that changed color or moved its ears based on external information. You could choose what its signals meant to you, so each Nabaztag became a very personal device. A Microsoft Research project took that model even further, building a real-world version of the family clock from the Harry Potter films, with a mix of physical pointers and custom screens.
An ambient interface needs to be glanceable. It’s not something you should have to spend time deciphering. It shouldn’t be complicated to set up, either, with no- and low-code environments providing the simple event-driven model used to deliver ambient applications. Hooking an IoT-powered light up to a calendar means your colleagues (and, if you’re working from home, your family) know not to interrupt you when you’re in an online meeting. Tools like Node-RED, Microsoft’s Power Automate, and IFTTT are key to building your own ambient computing environment from common IoT hardware and from simple APIs like webhooks.
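The calendar-to-light pattern boils down to a single webhook call when a meeting starts or ends. This is a hedged sketch of that glue code: the URL and the JSON payload shape are hypothetical, since each automation service (IFTTT, Power Automate, a Node-RED flow) defines its own webhook format.

```python
import json
from urllib import request


def busy_payload(in_meeting: bool) -> bytes:
    """Build the JSON body the webhook receiver will act on.
    The field names here are illustrative, not a real service's schema."""
    color = "red" if in_meeting else "green"
    return json.dumps({"light": "office-door", "color": color}).encode()


def set_busy_light(in_meeting: bool,
                   webhook_url: str = "https://example.invalid/hooks/busy"):
    """POST the busy state to an automation tool's incoming webhook."""
    req = request.Request(webhook_url, data=busy_payload(in_meeting),
                         headers={"Content-Type": "application/json"})
    request.urlopen(req)  # fire-and-forget: the light changes, you don't


print(busy_payload(True))
```

A calendar trigger in the automation tool calls this when a meeting begins; the light outside your door is the only interface anyone sees.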
The ambient future
We’re living in a world of ubiquitous computers, one where they’re demanding more and more of our attention. But as they get more powerful and more distributed throughout the world, that attention becomes less and less important. Making them ignorable is the next step, using them in the background and only interacting with them when it’s really necessary.
Blending ubiquitous computing with IoT sensors and actuators, as well as with cloud and local AI, makes a lot of sense. Combined, they become another big step toward a science-fictional future where the environment around us responds to our needs before we even know what we want.