Hitting the Books: BMW's iDrive and the pitfalls of an overly customizable UX

Sometimes, 700 options is too many options.

Tobias Schwarz / Reuters

The robots are evolving. Up until now they’ve been little more than rank automatons under the direct supervision of a human “in the loop.” But as AI and machine learning continue to advance, a new generation of robots is sure to emerge, more capable and independent than their predecessors and able to fill a wider variety of service positions than they do today — from delivering take-out orders to autonomously managing shipping warehouses.

What to Expect When You're Expecting Robots, by Motional CTO Laura Major and Julie Shah, director of the Interactive Robotics Group at MIT, explores how these transformative advances will require society to rethink its relationship with the working robots of tomorrow. In the excerpt below, Major and Shah explore the user experience, how companies leverage it to attract and retain customers, and how allowing users to define their own experiences can lead to disastrous design outcomes.

What to Expect When You're Expecting Robots cover (Basic Books)

Copyright © 2020 Laura Major and Julie Shah. All rights reserved. The following excerpt is reprinted from What to Expect When You're Expecting Robots: The Future of Human-Robot Collaboration by Laura Major and Julie Shah, with permission of Basic Books.


We tend to think about robots in terms of how much joy and entertainment they bring. We buy an Alexa not only to play our favorite music, but also to add character to our homes. We delight at her preprogrammed banter, her jokes, and her animal sounds. People personify their Roombas and choose smart home devices that blend with their decor. We give our devices names as if they were pets, and customize their voices. The overwhelming sense is that what we want from robots is for them to be relatable. We want them to be pliable pseudo-people.

In turn, robots today are typically designed with special attention to aesthetics and character. When news stories about robots go viral, it’s because the robots have been made to look more like people. They mimic our facial expressions and act friendly. We want them to have personalities. Indeed, a great deal of attention has been devoted to developing robots that can elicit engagement from their users and connect with them on an emotional level. The companies that develop these tools likely feel that anthropomorphizing their products will help create attachment to their brand. There is a whole new field of technology design that aims to optimize the user’s emotions, attitudes, and responses before, during, and after using a system, product, or service. It’s called user experience (UX), and for most businesses the goal of UX development is to attract and keep loyal customers.

But as robots enter our everyday lives, we need more from them than entertainment. More and more, we don't want them simply to delight us; we want them to help us, and we need to understand them. As robots weave in and out of traffic, handle our medication, and zip by our toes to deliver pizzas, it won't really matter whether we're having fun with them. Developers of new technology will have to confront the complexity of our everyday world and design ways of dealing with it into their products. We will all inevitably make mistakes in these interactions, even where lives are at stake, and it's only through designing a proper human-robot partnership that we will be able to identify those mistakes and compensate for them.

The stakes for the design of most consumer electronics are now fairly low. If your smartphone fails, most likely no one will get hurt. So designers focus on providing the best experience for the most common situations. Problems that arise only in rare circumstances are tolerated, and the assumption is that most problems can be solved by rebooting the device. If that fails, you just have to figure it out, perhaps with the help of a tech-savvy friend. It's simply not the point of most consumer technologies to be resilient against all possible failures, and it's not worth the effort for companies to prevent them all. A user, after all, is usually willing to overlook an occasional software glitch, as long as the overall experience is enjoyable and the device seems more useful than what the competition has on offer. The same can't be said of safety-critical systems: a blue screen of death in a self-driving car on the highway could mean a catastrophic accident.

So the goal in UX is to elicit a positive emotional response from the user, and the best way to do that is to focus on the artistic aspects of the system. Give it a “personality,” make it sleek and gamelike. Emphasize product branding. Consider trapping users within the system by hoarding their data or otherwise making it hard to transition to a competing product. And then, at some point, stop sending software and security updates. The planned obsolescence of the product forces the user back into the sales cycle. The ultimate design goal of most consumer electronics is to make people buy more of them, which results in short time scales between generations. And every time you purchase the newest version of the product, you have to restart the learning process.

These design goals will not be sufficient for the new class of social working robots we will be encountering more and more in our daily lives. Take, for example, the first BMW iDrive. BMW was on the cutting edge of the movement to introduce high-tech infotainment systems to cars. In 2002, the company debuted the iDrive. The engineers tried to make it fun and sleek, but that wasn’t enough. Just as in the introduction of new generations of aircraft automation, this first interactive infotainment system brought about unexpected safety concerns — so many of them, in fact, that early versions of the system were dubbed “iCrash.”

The first iDrive gave users the flexibility to customize the display to fit their preferences. There were approximately seven hundred variables for the user to customize and reconfigure. Imagine how distracting it was to modify the placement of features or the color of buttons on the screen while stopped at a red light. It created unnecessary complexity for users, because there was too much to learn. The extensive features of the infotainment system and the many ways to customize it were overwhelming. As drivers became consumed by fiddling with the interface, their focus narrowed, and things became dangerous. Drivers began to miss important cues about the road or other cars. This is why user customization is a bad idea for safety-critical systems. Instead, designers need to determine an optimal setup for the controls from the beginning, with safety in mind. In this case, commonly used features needed to be more easily accessible to the driver: turning the air-conditioning up or down or changing the radio station should take a single button press, not a hunt through a complex tree of menu options.
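The scale of the problem is easy to quantify. As a rough sketch (the per-variable choice count below is invented for illustration; only the figure of roughly seven hundred variables appears above), even modest customization multiplies into a configuration space no design review could ever cover:

```python
# Illustrative arithmetic only: values_per_variable is a made-up assumption,
# not BMW's actual configuration data. The point is the combinatorics.

num_variables = 700        # the approximate figure cited for the first iDrive
values_per_variable = 3    # hypothetical: assume just 3 choices per variable

configurations = values_per_variable ** num_variables
print(f"Distinct configurations: about 10^{len(str(configurations)) - 1}")
# -> about 10^333. No team can evaluate driver distraction across that space;
#    a single fixed, expert-chosen layout can be tested end to end.
```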

The physical layout of the first iDrive system was also problematic. The design introduced a central control architecture with a digital screen and a single controller, a rotary knob. But the display and controller were physically separated, with the screen in the central front-facing panel and the controller on the center console between the two front seats. Most other infotainment systems had required the driver to press buttons near or on the display screen itself. The physical separation between the screen and the input device presented a mental hurdle, as drivers had to manipulate the controller in one location and watch the screen in another. Removing the physical buttons also eliminated the muscle memory most of us have developed in our own cars. We reach over and grab the knob to turn down the air-conditioning fan without even taking our eyes off the road. This isn't possible with a digital screen: the driver has to look away from the road to adjust the air-conditioning or radio.

Finally, the first iDrive used a deep menu structure, which required the user to click through many layers of options to reach a given function. The functions drivers wanted most were buried deep within a series of submenus. A broad menu, separating functions into individual controls that could be accessed directly, such as knobs or dials, would have been better. The broad menu design is the choice for most aircraft cockpits, because it allows pilots to activate specific functions with a single button press. The pilot is physically surrounded by an entire set of menu options and can quickly activate any one of them at a moment's notice. Broad menus do require more physical real estate for the knobs and dials, and they might require the user to know more about the system, depending on how many options there are. They may look more complicated, but in fact they make it easier to select options quickly. The right solution for working robots, as we will see, often blends both approaches.
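The cost difference between the two structures can be shown in miniature. In the sketch below (the menu contents and function names are invented for illustration, not iDrive's actual hierarchy), reaching a function through a deep menu costs one selection per level of nesting, while a broad layout keeps every function a single action away:

```python
# Invented example hierarchy; not the actual iDrive menu tree.

# Deep menu: each function sits at the end of a path of submenus.
deep_menu = {
    "climate": {"fan": {"speed": "set_fan_speed"}},
    "audio": {"radio": {"tuner": {"station": "set_station"}}},
}

def clicks_deep(menu, target):
    """Count the selections needed to reach `target` in a nested menu."""
    for value in menu.values():
        if value == target:
            return 1
        if isinstance(value, dict):
            below = clicks_deep(value, target)
            if below is not None:
                return 1 + below
    return None

# Broad menu: one dedicated control (knob, dial, button) per function.
broad_menu = {"fan_knob": "set_fan_speed", "tuner_dial": "set_station"}

def clicks_broad(menu, target):
    """A dedicated control is always a single action away."""
    return 1 if target in menu.values() else None

print(clicks_deep(deep_menu, "set_station"))    # 4 selections, eyes on screen
print(clicks_broad(broad_menu, "set_station"))  # 1 reach, often by feel alone
```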