Everything’s getting connected around us. Computers. Smart mobile computing devices. Embedded systems. Documents. People. Creatures. Pets. Even ideas. Everything now has the possibility of being a node on the network, able to publish and to subscribe. Everything.
Before this decade is out, there’s talk of 50 billion connected things. The Internet of Everything. At one time, this figure seemed fanciful. Now, if anything, it looks conservative. Of that 50 billion, human beings are going to be in a minority, even allowing for multiple devices per person.
So it’s not surprising that the age-old question of being ‘controlled by technology’ raises its head. Some human beings have this penchant for ‘Exit, pursued by a bugbear’, I suppose.
It may be what technology wants. But it’s not going to happen. Not in 2014, not for a considerable time.
Not until we find a way of taking a machine to court, suing it, winning, and getting paid by the machine. There’s the rub.
We’ve had the internet. We’ve had the Internet of Things, we’ve had the Internet of Everything. As Marc Benioff memorably stated at salesforce.com’s annual customer, partner and developer conference Dreamforce last November, we are now in the era of the Internet of Customers.
Behind every device, there is a customer. Behind every tweet, there is a customer. Every like, every comment, every rating, every review, every purchase, every sale: There is always a customer. After all, people buy from people and sell to people. That’s been the case since people learnt to trade with each other, and it’s unlikely to change.
In the main, we use technology to help us make decisions, not to make the decisions per se. And if something doesn’t make the decision and can’t be sued, then it’s not in control.
Most of the time, we don’t actually let technology make decisions; we let it present us with information upon which decisions can be taken. In so doing, we may ask of technology that our process of making decisions is simplified: Crunching numbers, presenting them in consumable, comprehensible ways, making them easy to find and use.
Providing answers to questions that inform us, that provide us with more information.
When I was at school in Calcutta, we had odd notions of what the developed world looked like. For example, we used to joke that a deprived foreign schoolboy was one who couldn’t afford new batteries for his calculator. Fast forward to today, when some people think that happiness has two principal measures: A full signal and a full battery.
In some way, we were being serious about calculators. I think I must have been part of the last generation at school to learn to use slide rules, something that faded out by the time I left school in 1975. But other mind-numbing jobs had not yet been outsourced to technology. We had physical paper-based tables for many things, particularly for trigonometry and for logarithms. I’ve watched my three children grow up over the years, and I’ve never seen any of them carrying around tables of logarithms.
Outsourcing mundane jobs to technology (and not just information technology) makes sense. As Kevin Kelly pointed out, it helps us evolve. There is a school of thought that suggests man’s sentience is directly related to our conquest of fire: That we put the ‘technology’ of fire to work for us, using it to break down and simplify our food intake, building external stomachs as a result, and thereby making our physical stomachs smaller and our brains bigger over time.
Labour-saving devices (I’m tempted to spell that ‘labor’ but I shall resist that temptation) make sense, and in no way have us ceding control.
Sometimes we can have the illusion that technology is in control of something. Like when your bank manager says he can’t give you the overdraft you want, because the system says no. All that means is that the discretionary power to make the decision has been taken away from the person who says that to you; nothing more and nothing less. The decision in contexts such as these has been codified by someone, and then carried out by the ‘system’: It didn’t take the decision; all it did was enforce a set of rules it was instructed to enforce.
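A minimal sketch of what ‘the system says no’ usually amounts to. The rule and its thresholds here are entirely invented for illustration; no real bank’s policy is being described:

```python
# A hypothetical codified overdraft policy. A human wrote these rules;
# the system merely enforces them. The thresholds are made up.
def overdraft_approved(income: float, existing_debt: float, requested: float) -> bool:
    if existing_debt > 0.5 * income:
        return False                     # too indebted already, says the rule
    return requested <= 0.2 * income     # within the codified limit

print(overdraft_approved(income=40_000, existing_debt=5_000, requested=6_000))
print(overdraft_approved(income=40_000, existing_debt=25_000, requested=1_000))
```

The second request is smaller than the first, yet it is refused: the rulebook, not any judgment by the machine, produced that outcome.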
Decision-support systems by their very nature are not decision-making systems, so the question of being in control doesn’t arise.
Illustration: Sameer Pawar
As more and more things get connected, there’s an increase in complexity, and it’s not always easy to figure out who or what is in control. So there are a growing number of examples where we instruct machines to do something, without quite understanding how those instructions will play out. This is particularly noticeable when you have instructions that interact with each other: By now, everyone has come across cases like the apparent $23m price tag for an everyday, unremarkable book, as two sets of instructions ‘compete’ with each other without human intervention. These things will happen, and happen more frequently, as our connected world strives to become hyper-connected. And as that happens, we run the risk of having a world gone mad, ruled by algorithm as it were. Kevin Slavin has done a good job alerting us to such situations. But I go back to what I said earlier about suing machines: The algorithm-in-charge world exists, will exist, will even grow, but it’ll never be in charge of something which threatens life, not until it becomes suable.
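How two well-meaning pricing rules can ‘compete’ a book up to absurd prices can be sketched in a few lines. The multipliers below are illustrative stand-ins, not the actual values any seller used:

```python
# Two hypothetical repricing rules interacting with no human in the loop:
# seller A always undercuts seller B slightly; seller B always prices
# comfortably above seller A. Each round compounds the other's move.
price_a = price_b = 10.00
for _ in range(25):
    price_a = 0.998 * price_b   # A: sit just below B
    price_b = 1.270 * price_a   # B: sit well above A

print(f"After 25 rounds: A=${price_a:,.2f}, B=${price_b:,.2f}")
```

Each round multiplies B’s price by roughly 1.27 × 0.998 ≈ 1.267, so a $10 book passes $1,000 within a couple of dozen rounds: neither rule is wrong in isolation, but the pair forms a runaway loop.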
Design errors such as unplanned and unforeseen recursion can and will happen, but that has nothing to do with technology being in control. Similarly, we can codify instructions inaccurately. It’s the same issue. Nothing to do with technology in control.
For some years now, we’ve been building systems that ‘learn’, that take a set of core principles and then refine, adjust, mutate and perhaps even mangle them over time, reflecting the data that has been gathered. Again, as we move more and more into a world of billions of sensors and actuators, such systems that evolve consequent upon feedback loops will become common. And some will say that it’s an example of technology in control, since we will have outsourced the taking of some classes of decision that were apparently not human in origin, based on data collected by machines and executed according to instructions mutated by machines. Some will see merit in that argument. For now, all I will say is that such types of machines will not be involved in anything where the decision can be queried to the point of being sued, which rules out outsourcing anything of importance to such machines.
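A toy version of such a system: it starts from a human-set rule and nudges that rule as feedback arrives. Every number and the spam-filter framing are invented for illustration:

```python
# A human sets the initial rule (flag anything scoring >= 0.5 as spam);
# the system then mutates the threshold in response to labelled feedback.
threshold = 0.5

def update(threshold, score, was_actually_spam, rate=0.05):
    flagged = score >= threshold
    if flagged and not was_actually_spam:
        return threshold + rate   # false positive: be less aggressive
    if not flagged and was_actually_spam:
        return threshold - rate   # false negative: be more aggressive
    return threshold              # correct call: leave the rule alone

feedback = [(0.55, False), (0.45, True), (0.48, True), (0.60, False)]
for score, truth in feedback:
    threshold = update(threshold, score, truth)

print(round(threshold, 2))
```

The rule the machine ends up enforcing is no longer exactly the one any human wrote, which is precisely why such systems feel like ‘technology in control’ even though a person chose the starting rule and the update procedure.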
As we learn to deal with more and more information, we’re going to need better filters. One of the core filters is knowing whether the information you’re being presented with is itself accurate before you go ahead and act on it.
As we move into hyper-connectivity, we’re going to find this harder and harder; we’ve already had some tragic examples of lives lost because the decision-maker apparently did the right thing but based on the wrong information. Instrument malfunction will cause operator error unless and until we can build in safeguards that verify the information from a different viewpoint, that corroborate the information prompting the action. In such cases, technology participates in the decision-making process, influences it, albeit in the wrong direction, but at the end of the day it’s unreasonable to say that technology is actually in control in such cases.
As we move towards the 50 billion connected things world, we’re going to see increasing levels of complexity in occasions where it looks like technology’s in control. Technology that’s embedded inside our bodies will become common, capable of initiating muscle response.
The way we engage with technology will itself involve our senses more often, as we move further and deeper into wearable computing with sound-, gesture- and touch-based controls. There’ll be a point where we can’t differentiate between embedded-in-body and worn-on-body as our skin becomes a conduit for information in both directions. And as all this happens, we’re going to wonder who is in control.
But it’s not technology, information or otherwise. What we build will:
- Continue to regard us as master, performing the tasks we ask of it.
- Create, measure, collect, collate and present information at the right time and in the right place, allowing us to make better decisions.
- Act on our instructions and execute policy against known or discovered parameters, whether in giving loans, governing the speed of a car or in shocking the heart out of ventricular fibrillation.
- Learn from a plethora of feedback loops at scale and thereby refine and adjust the rules, policies and guidelines that drive the three behaviours above.
None of that equates to technology being in control. Not until I can sue a machine, win, and collect from that machine.
(This story appears in the 10 January, 2014 issue of Forbes India.)