The Time Pressures

A reliable system can generate tremendous time savings; once designed, it eliminates the need for subjective and thoughtful analysis by an expensive and time-pressed manager or professional. Hence the appeal of automated asset-allocation systems at investment advisory firms: before new clients even meet an adviser, they complete a questionnaire designed to reliably assess their investment horizons, risk tolerance, and investment goals. The data feeds into a program that impersonally graphs the recommended mix of stocks, bonds, and other investments. It takes the massively complex job of understanding individual investment needs out of the hands of the adviser. Where there was once an adviser consulting with clients at length and in depth, and then tailoring a portfolio by applying heuristic and subjective judgment, there is now an algorithm that quickly produces reliable answers.
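
To make the idea concrete, here is a minimal sketch of the kind of rules-based allocation engine described above. The questionnaire inputs, scoring weights, and asset mixes are hypothetical illustrations, not any advisory firm’s actual model.

```python
# Illustrative sketch of a rules-based asset-allocation engine.
# All weights, categories, and thresholds are invented for demonstration.

def recommend_allocation(horizon_years: int, risk_tolerance: int, goal: str) -> dict:
    """Map questionnaire answers to a stock/bond/cash mix.

    risk_tolerance: 1 (very conservative) to 5 (very aggressive).
    goal: "preservation", "income", or "growth" (illustrative categories).
    """
    # Combine the questionnaire inputs into a simple risk score.
    score = risk_tolerance * 10
    score += min(horizon_years, 30)                                   # longer horizons tolerate more equity
    score += {"preservation": -10, "income": 0, "growth": 10}.get(goal, 0)

    equity = max(10, min(90, score))                                  # clamp equity weight to 10-90%
    bonds = min(100 - equity, 80)
    cash = 100 - equity - bonds
    return {"stocks": equity, "bonds": bonds, "cash": cash}


if __name__ == "__main__":
    # A 30-year horizon, moderate risk tolerance, growth-oriented client.
    print(recommend_allocation(horizon_years=30, risk_tolerance=3, goal="growth"))
```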

Of course, the demand for proof, the absence of bias, and the pressures of time affect a good deal more than the forms customers have to fill out. They strongly influence the very shape of the corporation itself, and the structures, processes, and norms that guide its daily activities. If the goal of the reliability-oriented business is to ensure that tomorrow consistently and predictably replicates yesterday, then it follows that the business will be organized as a permanent structure with long-term ongoing job assignments. Daily work will consist of a series of permanent, continuous tasks: make stuff, sell it, ship it, follow up with customers, and service the installed base. There are few if any limited-term projects on the organizational chart, and for good reason. In most corporations, “special projects” is a euphemism for the purgatory reserved for terminated executives hunting for a new job.

In such an environment, the organizational goal evolves toward managing permanent, continuous tasks to the highest possible level of reliability. Think of General Electric during the Jack Welch era, when the company’s flagship product was not an industrial turbine or a refrigerator or a medical imaging device but a quarterly earnings number that reliably met or ever so slightly exceeded earnings guidance. Because of the environment’s demands for reliability, work is only secondarily the business of making stuff and selling it. It is primarily a matter of ensuring that the existing heuristic or algorithm produces a consistent result time and time again. (See “Counterproductive Pressure from the Public Capital Markets.”)

The reliability bias is deeply embedded in organizational processes related to planning and budgeting, executive skill development, and the use of analytical technology. In all those processes, conventional wisdom says that reliability equals success. In most corporations, for example, the first measure of an operation’s success is whether it reliably meets a predetermined quantitative goal: the budget. Anything new and different that threatens the overriding goal of making budget is rejected out of hand. Constraints such as rising materials costs are equally threatening, as they add complexity, undermining the algorithm that produces the desired consistent result.

The managerial skills that are built and rewarded are those of running heuristics or algorithms to produce reliable outcomes. Consider the cottage industry that has grown up around Six Sigma. Six Sigma relentlessly simplifies algorithms to the bare minimum, taking reliability to its logical extreme. Its statistical measures plane away from the algorithm any nuance that would sacrifice consistency of result. Many organizations—most famously, General Electric—promote Six Sigma techniques and reward managers who become Six Sigma “Black Belts.” These Black Belts are reliability masters.
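
For readers unfamiliar with Six Sigma’s statistical machinery, the sketch below shows the basic process-capability arithmetic: a defect count is converted into defects per million opportunities (DPMO) and then into an approximate sigma level. The sample figures and the conventional 1.5-sigma shift are illustrative assumptions, not drawn from any particular program.

```python
# Illustrative Six Sigma process-capability arithmetic.
# Converts an observed defect rate into DPMO and an approximate sigma level.

from scipy.stats import norm

def sigma_level(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Return the approximate process sigma level for an observed defect rate."""
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    # Sigma level = z-score of the yield, plus the customary 1.5-sigma long-term shift.
    return norm.ppf(1 - dpmo / 1_000_000) + 1.5

# Example: 25 defects across 10,000 units with 5 defect opportunities each.
print(round(sigma_level(defects=25, units=10_000, opportunities_per_unit=5), 2))
```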

In even wider use than Six Sigma is a tool that was virtually unknown to corporate boardrooms just a generation ago: linear regression, which is used to “prove” statistically the relationship between one factor (e.g., store hours) and another (e.g., sales per square foot). Managers prove the value of their ideas by invoking the size of their regression’s R². Proficiency in regression analysis, as well as in large-scale analytical systems such as ERP and CRM, is a prerequisite for senior executives in corporations. When you consider the resources that individuals and businesses invest to develop those analytical skills, compared with the relatively paltry resources invested in the intuitive skills that produce valid answers, it is easy to see why most corporations tilt so strongly in favor of reliability.
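
The regression habit itself is simple to reproduce. The sketch below fits invented weekly store-hours data against sales per square foot and reports the R² a manager might invoke as statistical proof; the numbers are made up for illustration.

```python
# Illustrative regression of store hours against sales per square foot.
# The data points are invented; the point is the R² that gets cited as "proof".

from scipy.stats import linregress

store_hours = [60, 65, 70, 75, 80, 85, 90]             # weekly opening hours
sales_per_sqft = [310, 325, 333, 352, 361, 370, 384]    # dollars per square foot

result = linregress(store_hours, sales_per_sqft)
r_squared = result.rvalue ** 2

print(f"slope = {result.slope:.2f} $/sqft per extra hour")
print(f"R² = {r_squared:.3f}")   # the figure invoked as statistical proof
```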

Reinforcing that tilt are organizational norms that govern status and the style of reasoning that the organization considers acceptable. Rewards and high status flow to those managers who analyze past performance to refine heuristics and algorithms, and the highest status and biggest rewards accrue to the executive who reliably runs the most important heuristic or algorithm, importance being measured by revenue and profit. Think of Goldman Sachs’s sales and trading heuristic or McDonald’s U.S. business algorithm. Managers do their best to dodge tricky smaller businesses that face complicated mysteries, which are seen as detours to advancement, if not career dead ends.

All too often, companies mismanage the resources freed up by movement along the knowledge funnel. Tragically, the public capital markets encourage this inefficiency, which can be fatal: they are reliability-oriented and, though not necessarily by intention, push companies toward excessive exploitation.

The capital markets reward certainty. Nothing is surer to win analysts’ favor than a record of delivering predictable revenue and earnings, and nothing is surer to arouse their ire than a failure to meet earnings forecasts. Even a penny’s shortfall in quarterly earnings per share can trigger negative analyst reports, downgrades, and sell-offs. For example, on September 25, 2008, Research In Motion announced that its second-quarter profit had risen to 86 cents per share from 50 cents per share the previous year. Profits were $496 million for the quarter; revenue was $2.6 billion. The earnings-per-share results were just one cent below the consensus analyst estimate. How did the market respond? The stock dropped by almost 30 percent, destroying some $16.1 billion in value in a single day. 5

Analysts don’t see the consequences of elevating precision and certainty into the be-all and end-all of business. They fail to recognize that their demands, in effect, push businesses to stop investing in innovative, validity-oriented activities. Remember that mysteries have no production process. Not even the most plugged-in analyst can predict with any certainty when a mystery will yield to a heuristic, or a heuristic to an algorithm. Validity can be demonstrated only by the passage of time; unlike investments in exploiting the current heuristic or algorithm, it does not show up on a strict quarterly schedule.

The longer-term effect of the capital markets’ preference for remaining at the same knowledge stage is stagnation. At some point, exploitation activities will run out of steam, and the company will be outflanked by competitors taking more exploratory approaches. Earnings will stop growing or even decline, and the analysts will savage the company for its lack of innovation. As James March points out, “An organization that engages exclusively in exploitation will ordinarily suffer from obsolescence.” 6

Publicly traded companies have great difficulty resisting the capital markets’ pressure to hone and refine within a single knowledge stage. Companies that balance exploitation with exploration, reliability with validity, and refinement with innovation will find themselves targets of heavy criticism from analysts. These analysts think they are being constructive. They’re not. They’re discouraging the very activity—moving knowledge through the funnel faster than competitors, driving down costs of current activities, and freeing up time and capital to engage in new activities—that creates enduring competitive advantage.

The public capital markets also discourage innovation by demanding that companies divert to shareholders the savings generated by advancing across the funnel. Of course, shareholders have a legitimate claim on corporate cash, whether it takes the form of dividends or stock buybacks. But by demanding to be served first, shareholders work against their own long-term interests. Like the analysts, they prevent the company from achieving the competitive advantage gained from advancing knowledge faster than the competition.

The private capital markets have the opposite effect on companies. The private capital markets like nothing better than a company that relentlessly advances knowledge from one stage to the next, as long as the advance creates value that is captured at the end point of private capital investment, the highly coveted “liquidity event.” Yes, private capital seldom avoids failures and write-offs. But there’s a reason that the private capital markets are growing much more quickly than the public capital markets. Private capital embraces knowledge advance, while public capital—knowingly or unknowingly—discourages it.

In a reliability-oriented organization, personal success is achieved by running existing heuristics and algorithms. Self-interest dictates that managers refrain from cycling back to the first stage of the knowledge funnel. The organization’s own reward systems and processes practically dictate that it exploit knowledge at its current stage in the funnel, particularly, perhaps, when that stage is the heuristic.

In corporate settings, high-level heuristics are generally in the hands of highly paid executives or specialists. Out of sheer self-interest, they are reluctant to relinquish their enigmatic and valuable capability. Whether they are brand managers, investment bankers, acquisitions editors, CFOs, research scientists, or star salespeople, they are in a constant tug-of-war with the owners of their company over the spoils of their work. They have the skill—the heuristic inside their heads—and the company has the capital. The company would like maximum compensation for providing the capital. The talent would like maximum compensation for running the heuristic. As long as the talent keeps its heuristic shrouded in priestly secrecy, it can bargain successfully for a bigger share of the value it creates. If the talent were to advance the heuristic to the algorithm stage, the company could hand the specialist’s job to a much less expensive person.

In many organizations, including professional service organizations such as law firms, consulting firms, investment firms, and most entertainment and media firms, talent is winning this battle. And the price of maintaining an ongoing monopoly on important heuristics is high. These heuristic-running high priests create a big bottleneck in the middle of the knowledge funnel, blocking the movement forward to algorithm. Their desire to collect monopoly rents sharply limits the speed at which the organization can advance knowledge.

No organization sets out to limit its ability to innovate and create additional value. No board would vote to drive out movement along the knowledge funnel. But to paraphrase Winston Churchill, first we shape our tools and then our tools shape us. 7 The structures, processes, and norms of the contemporary business organization all but condemn it to remain within a single knowledge stage. When a validity-oriented advance comes to an important organizational decision gate, someone in authority inevitably asks reliability-oriented questions: “But can we prove this will work?” or, “How can we be sure of the outcome?” Typically the answers are no, it cannot be proven, and we cannot be sure. So design thinking is suppressed without explicit intent, a victim of organizational bias toward reliability.
