By Tom Davenport, Distinguished Professor, Babson College
This article is by Featured Blogger Tom Davenport from his LinkedIn page. Republished with the author’s permission.
As a society, we are becoming increasingly comfortable with the idea that machines can make decisions and take actions on their own. We already have semi-autonomous vehicles, high-performing manufacturing robots, and automated decision making in insurance underwriting and bank credit. We have machines that can beat humans at virtually any game that can be programmed. Intelligent systems can recommend cancer cures and diabetes treatments. “Robotic process automation” can perform a wide variety of digital tasks.
What we don’t have yet, however, are machines for producing strategy. We still believe that humans are uniquely capable of making “big swing” strategic decisions. For example, we wouldn’t ask a computer to put together a new “mobility strategy” for a car company based on such trends as a decreased interest in driving among teens, the rise of ride-on-demand services like Uber and Lyft, and the likelihood of self-driving cars at some point in the future. We assume that the defined capabilities of algorithms are no match for the uncertainties, high-level issues, and problems that strategy often serves up.
We may be ahead of smart machines in our ability to strategize right now, but we shouldn’t be complacent about our human dominance. First, it’s not as if we humans are really that great at setting strategy. Many M&A deals don’t deliver value, new products routinely fail in the marketplace, companies expand unsuccessfully into new regions and countries, and myriad other strategic decisions don’t pan out.
Second, although it’s unlikely that a single system will be able to handle all strategic decisions, the narrow intelligence that computers display today is already sufficient to handle specific strategic problems. IBM Corp., for example, has begun to use an algorithm rather than just human judgment to evaluate potential acquisition targets. Netflix Inc. uses predictive analytics to help decide what TV programs to produce. Algorithms have long been used to identify specific sites for retail stores, and could probably be used to identify regions for expansion as well. Key strategic tasks are already being performed by smart machines, and they’ll take on more over time.
A third piece of evidence that strategy is becoming more autonomous is that major consulting firms are beginning to advocate for the idea. For example, Martin Reeves and Daichi Ueda, both of the Boston Consulting Group, recently published a short article on the Harvard Business Review website called “Designing the Machines That Will Design Strategy,” in which they discuss the possibility of automating some aspects of strategy. McKinsey & Co. has invested heavily in a series of software capabilities it calls “McKinsey Solutions,” many of which depend on analytics and the semi-automated generation of insights. Deloitte has developed a set of internal and client offerings involving semi-automated sensing of an organization’s external environment. In short, there is clear movement within the strategy consulting industry toward a greater degree of interest in automated cognitive capabilities.
Assuming that this movement toward autonomous strategy is beginning to take place, what are the implications for human strategists? As Reeves and Ueda point out in their article, cognitive capabilities will need to be combined with human intelligence in what they call an “integrated strategy machine.” Just as contemporary autonomous vehicles can take the wheel under certain conditions, we’ll see situations in which strategic decision making can be automated. Other situations, however, will require that a human strategist take the wheel and change direction.
Big-picture thinking is one capability at which humans are still better than computers — and will continue to be for some time. Machines are not very good at piecing together a big picture in the first place, or at noticing when the landscape has changed in some fundamental way. Good human strategists do this every day.
In a world of smart, strategic machines, humans need to excel at big-picture thinking in order to decide, for example, when automation is appropriate for a decision; what roles machines and people will play, respectively; and when an automated strategy approach their organization has implemented no longer makes sense.
Executives who see the big picture are able to answer the critical questions that will guide their organizations’ future: how their companies make money, what their customers really want, how the economy is changing, and what competitors are up to that is relevant to their company.
These kinds of issues and trends can’t be captured in data alone. It’s certainly a good and necessary thing for strategists to begin embedding their thinking into cognitive technologies, but they also have to keep their eyes on the broader world. There is a level of sense-making that only a human strategist is capable of — at least for now. It’s a skill that will be more prized than ever as we enter an era of truly strategic human-machine partnerships.