There has been a lot of hype lately about the coming wave of capabilities enabled by the Internet of Things (IoT). Through products as diverse as the Apple Watch, Google Nest, and any number of “connected-car” features offered by major auto makers, we are beginning to link together our most valuable possessions—our homes, our vehicles, and even our own bodies. Each product comes with a new wave of “smart” applications designed to study our habits and get to know us better than we know ourselves; each will help us offload the complications of daily life.
The promise of a “smart” life, an IoT life, is usually couched in broad language about a utopian future where everything works “optimally” and “in balance.” But as any reader of Orwell might point out, these are subjective terms. What is optimal?
Defining how things are “supposed” to work
When did we decide exactly how things are supposed to work? Whose values and needs will be most represented in an optimized world? Those of the engineers who are coding it? Of the product managers who are defining the features? Certainly, for the moment, it’s not those of the end user.
A smart traffic management system, for example, will make decisions for drivers: prioritizing lanes, setting speeds, timing light changes, etc. These decisions will, in theory, make traffic more efficient. But the traffic management system will also have to decide whether right of way goes to the parent with screaming kids in the back seat who is running late for school, or to the trucker hauling perishables.
In an ideal traffic system, is the goal that everyone move at the same speed? Should decision-making algorithms try to optimize fuel efficiency by speeding up slow movers? Or should they limit pollution by letting electric cars use the fastest lanes? One efficiency tradeoff will play straight into the classic US tension between fairness and an individual’s right to take risks for higher rewards: in this case, to drive faster and get there sooner.
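To make the value question concrete, here is a minimal sketch of what such a prioritization algorithm might look like. All names, numbers, and weights are hypothetical; the point is that “optimal” is nothing more than the weights someone chose.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    speed_kmh: float       # current speed
    emissions_g_km: float  # grams of CO2 per km
    urgency: float         # 0.0-1.0: perishable cargo, a school run, etc.

# These weights ARE the policy decision -- whoever sets them decides
# whose needs an "optimized" traffic system serves.
WEIGHTS = {"throughput": 0.5, "emissions": 0.3, "urgency": 0.2}

def lane_priority(v: Vehicle, weights: dict) -> float:
    """Score a vehicle for fast-lane access; higher score wins."""
    return (weights["throughput"] * (v.speed_kmh / 130.0)
            - weights["emissions"] * (v.emissions_g_km / 250.0)
            + weights["urgency"] * v.urgency)

ev = Vehicle(speed_kmh=100, emissions_g_km=0, urgency=0.1)
truck = Vehicle(speed_kmh=90, emissions_g_km=200, urgency=0.9)
# With WEIGHTS as above, the electric car outranks the urgent truck;
# shift weight from "emissions" to "urgency" and the ranking flips.
```

Nothing in the arithmetic is controversial; everything in the weight table is. That is precisely the gap between an engineering problem and a values problem.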
Smart systems are stumbling on values and policy questions
In Jan. 2015, the city of Los Angeles stepped in to ban parking apps like MonkeyParking and Haystack, which allow drivers to auction off the parking spots they occupy. The apps create an optimized market for parking supply and demand, putting drivers who most wanted certain parking spots (and could pay) directly in touch with those who had them. But many in the Los Angeles City Council objected to what Councilman Mike Bonin described as “pimping out a parking spot in the city of LA—taking something which is a public good, something that all of us own, and privatizing for a period of time.”
Within the next three to five years, a similar tension is likely to play out in home delivery services. As a key piece in fulfilling the promise of online commerce, delivery growth rates are expected to rise and include more food delivery and other types of services. Amazon, Google, and Wal-Mart are already competing with UPS, FedEx, and the US Postal Service, as well as with crowdsourcing-based startups such as Deliv and Postmates.
Smart delivery platforms will have to prioritize. With food delivery, will it be the customers who pay the most or those in most need of better nutrition who get the first drop-off? Is it okay for UberFresh to optimize for profit over need? Elsewhere in Uber’s business, the recent furor over surge pricing is a good example of the public demanding a voice in what, on the surface, is simply an automatically optimizing mechanism.
Who gets to have a say?
We have already seen one optimization battle play out in the net neutrality debate. In Feb. 2015, the United States’ Federal Communications Commission ruled that all Internet traffic is equal. This is not the most efficient approach; some applications don’t really need the same speed as others, just as an Instagram photo needs far less bandwidth than a real-time remote-surgery feed. But we’ve decided to sacrifice priorities like need and profit for the sake of protecting other values: an “open” internet, unrestricted access, and unrestricted speech. The same decisions will need to be made when it comes to the IoT.
As automated systems control more of our everyday lives, governments and markets want to know—and influence—decision-making algorithms, even if those algorithms belong to privately run businesses. In an Oct. 2013 paper published in the Boston College Law Review, Kate Crawford (Microsoft Research) and Jason Schultz (NYU School of Law) argue that consumers deserve to understand the data and decision-making processes used by analytics systems to make recommendations and perform other types of optimization.
Public intervention in optimization practices is not new. We have always had to make choices and trade-offs in community or national policy decisions. What is different this time, however, is the specificity of choice that the technology exposes: day-to-day choices that have historically remained hidden, embedded in social norms, like letting the hurried guy pass you. Other choices have historically been made by experts and regulatory bodies specifically tasked with designing and running things on our behalf, like public utilities commissions.
What this means for developers
“Code is never found; it is only ever made, and only ever made by us,” once wrote Lawrence Lessig, law professor and director of the Edmond J. Safra Center for Ethics at Harvard University. Although he was discussing the structure of digital space (such as virtual worlds), the same is true of our integrated physical and digital platforms. With this in mind, the companies currently developing, deploying and using smart, big data, and algorithmic platforms would be wise to do a few things:
- Be explicit about the values that are being optimized in these solutions during development and regularly test these against the market or societal sentiment.
- Build in the flexibility today that will allow for substantial revision of the core algorithms in the future, as society changes. Yes, even if this does cost more; optionality comes at real time and resource cost.
- Take pride in the shared values embedded in the code. It’s what defines and differentiates a community and keeps society together for the long haul.
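The second recommendation above—building in room to revise core algorithms—can be as simple as keeping the value policy separate from the machinery that executes it. Here is a minimal sketch using the delivery example from earlier; the class, function, and field names are all hypothetical.

```python
from typing import Callable, Sequence

# An order is just a record; a "policy" turns a batch of orders into a
# delivery sequence. The policy is the part society may ask us to change.
Order = dict
Policy = Callable[[Sequence[Order]], list]

def by_payment(orders: Sequence[Order]) -> list:
    """Optimize revenue: highest-paying customers delivered first."""
    return sorted(orders, key=lambda o: -o["paid_tier"])

def by_need(orders: Sequence[Order]) -> list:
    """Optimize for need: highest need-score delivered first."""
    return sorted(orders, key=lambda o: -o["need_score"])

class DeliveryScheduler:
    """Routing core stays stable; the value policy stays swappable."""
    def __init__(self, policy: Policy):
        self.policy = policy

    def plan(self, orders: Sequence[Order]) -> list:
        return self.policy(orders)

scheduler = DeliveryScheduler(by_payment)
# Later, as norms or regulations shift, swap the policy without
# rewriting the core:
scheduler.policy = by_need
```

The extra indirection does cost something up front, which is exactly the point of the second bullet: optionality is not free, but it is far cheaper than re-architecting under regulatory pressure.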
Just as we now have very specific parameters for building codes, it should not surprise us when we eventually demand oversight of smart software code. And when we do, we will have to agree, as human stakeholders, on what we mean by “optimized” lives.