
Business Impact Evaluation and Management

In an organizational setting, managers are continuously faced with decisions about what products/programs/services to sustain, which to change, and which to abandon, to name but a few organizational dilemmas. How do organizational members go about making sound decisions? They make decisions with the use of relevant, reliable, and valid data, gathered through a sound evaluation process that is aligned with desired, long-term outcomes.

The impact evaluation process is aligned to some greater purpose, with the end manifesting itself in three levels of result:
1. Strategic. Long-term organizational impact or benefit your organization delivers to its clients and society (for example, improvements to clients’ quality of life).
2. Tactical. Shorter-term organizational results from which the organization benefits (for example, profits, revenues, and other measures).
3. Operational. The internal building-block deliverables that, when well aligned and coordinated, allow the organization to reach its tactical and strategic results.

While the contributions of a product/program/service are more readily observed at the operational level, they must ultimately get us closer to long-term, strategic results, as well as to the more immediately expected organizational-level results. The product/program/service would have to positively contribute toward specific results at each of these levels, as measured by relevant performance indicators and other measures.

The benefit of evaluation is that it gives us useful feedback about how much closer to (or further from) the ultimate goal the organization is. In the context of continual improvement, evaluation helps us do this by establishing an evaluation framework that allows the things that matter to be consistently and reliably measured.

The performance indicators must be appropriate measures of desired performance, rather than typical or standard measures that, while perhaps related to the result to be measured, do not say anything about the impact of the product/program/service and the value of that impact. One classic example is using the number of participants as an indicator of the success of a program. The fact that there were many or few participants says nothing about the quality of the participation, or the impact of participating in such a program on human and organizational performance, or even the desirability of improving that particular performance in the first place. Measuring impact on the three levels mentioned above ensures that all relevant perspectives are considered.

Needs should have, of course, been identified through a needs assessment process, which in turn gives us the inputs or raw material for a causal analysis. It is through a sound needs assessment and causal analysis that performance improvement professionals are able to identify the causes for those needs, and in turn, solution requirements, solution alternatives, and finally a selected solution. This ‘‘solution’’ later becomes the product/program/service during an evaluation process. If the front-end work was done, and it was done well, then there should be a high probability that the product/program/service will in fact add positive and measurable value to the organization and its customers through its various levels of results.
If the product/program/service was the best alternative for closing the gap, then one evaluation hypothesis is that it should have helped eliminate or reduce such gaps in results/performance. The basic evaluation question would then be: ‘‘Did Solution X contribute to the reduction or elimination of Performance Gap Y?’’
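
As a minimal sketch of that question, a baseline measure and a post-implementation measure of the same indicator can be compared against the desired level to see whether the gap narrowed. The indicator and all figures below are hypothetical, chosen only to show the arithmetic.

```python
# Purely hypothetical figures: did Solution X reduce Performance Gap Y?
# A gap is defined here as the desired level of an indicator minus the
# measured level; the indicator and numbers below are assumptions.

desired_level = 0.95    # e.g., target on-time delivery rate (assumed)
baseline_level = 0.78   # measured before Solution X (assumed)
post_level = 0.89       # measured after Solution X (assumed)

gap_before = desired_level - baseline_level   # 0.17
gap_after = desired_level - post_level        # 0.06
reduction = gap_before - gap_after            # 0.11

print(f"Gap before Solution X: {gap_before:.2f}")
print(f"Gap after Solution X:  {gap_after:.2f}")
print(f"Reduction in the gap:  {reduction:.2f}")
# A narrower gap is consistent with the hypothesis, but other causes must
# still be ruled out before attributing the change to Solution X.
```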

From an impact evaluation perspective, every evaluation starts with a practical purpose: to help stakeholders make sound decisions. Thus, the entire evaluation process must begin with the identification of what decisions have to be made and what data and subsequent information will help us make them.

When evaluators set out to evaluate products, programs, or any solution, the usual focus of the evaluation is the nature of the program and the results of the program in terms of the predetermined expectations. For example, did the participants like the new product? Did the participants master the new training program content? Are they applying the content in their jobs? What is usually taken for granted or assumed is the desirability of mastering that particular content. Usually, the product/program/service is reported as effective depending on the:

  •   Resources consumed;
  •   Participation level;
  •   Perceived satisfaction;
  •   Usage; and
  •   Other indicators that tell us little about the contributions toward the organizational objectives.

This common evaluation focus centers on the means (for example, the new leadership development program) rather than the organizational ends that members wish to accomplish (for example, increased sales, increased revenues, growing market share, enhanced quality of life for our customers). This is not all that different from the way organizations are usually led. If our planning and implementation are focused on means, our evaluation questions will likewise probably stop at this level. If you look at the bulleted items above, one could certainly claim to have relevant data about effectiveness; yet such data say little about whether the desired organizational results actually took place, and whether the benefits of these results outweigh their costs and unintended consequences.


Stages of Impact Evaluation
While evaluation, at its core, is straightforward, the situations in which it is applied can be complex and at times make evaluation daunting. The impact evaluation process is primarily directed at individuals who want a clear map that guides them through the process and helps them keep a pragmatic and responsive focus. The idea is that, with a well-articulated plan, the actual evaluation process will be a lot simpler and more straightforward. The impact evaluation process consists of seven steps that, while conveying sequence, can and should be considered reiteratively.

1. Identify Stakeholders and Expectations
The evaluator must identify the key stakeholders involved. The stakeholder groups include those who will be making decisions, either throughout the evaluation process or directly as a result of the evaluation findings. Those with the authority to make critical decisions are often the ones who finance the evaluation project, but if it is someone different or a different group, they too should be included. Also important are those who will be affected by the evaluation—either in the process or potentially as a result of the findings.
Including this group will make the implementation of the evaluation plan a lot easier, particularly during the data collection stage. The driving question for identifying stakeholders is ‘‘Who is/could be either impacted by the evaluation or could potentially impact the evaluation in a meaningful way?’’ While not every single stakeholder must be a direct member of the evaluation project team, it is wise to have each group represented.
Now, with a diverse group of stakeholder representatives, you will also have a diverse group of expectations. These expectations are the basis for your contract, whether verbal or written, and should explicitly articulate what is expected of you (as well as of the stakeholders!). If you feel they are unreasonable, this is the time to discuss, educate, discuss again, educate again, and come to a consensus . . . not after you have completed what in your own mind is a successful evaluation. If you do not have the specific stakeholder expectations clearly defined from the start, it is nearly impossible to align your efforts to those expectations except by sheer luck . . . and if you do not align your efforts with stakeholder expectations from the start, it is very unlikely that you will ever meet them.

2. Determine Key Decisions and Objectives
Asking the stakeholders to articulate what decisions will be made as a result of your findings is a primary step. The discussion about the decisions that must be made should also be about the objectives that must be reached. All organizations have objectives—both external and internal—and everything within the organization must contribute toward those objectives. The relative worth of any intervention or solution is primarily contingent on whether it is helping or hindering the achievement of organizational objectives.
While some stakeholders may not provide you with the specific objectives they expect, they will give you ‘‘clues’’ about the relevant effects they are expecting, even if these are about means rather than results. Your task here (and actually, throughout the process) is to be the educator and facilitator and to approach the conversation from the standpoint of ‘‘and if the organization were to accomplish that, what would the result be?’’ and to continue that line of inquiry until the key results have been identified.
With these decisions and objectives clarified, the overarching questions that will drive the evaluation process and purpose of the evaluation should also become clear, articulated, and agreed on.

3. Derive Measurable Indicators
Sound decisions are made on the basis of relevant, reliable, and valid data related to desired results and the related questions that must be answered. Therefore, the heart of your evaluation plan will be to gather the data required to answer the questions that guide the inquiry. People often end up making judgments based on wrong or incomplete data, particularly when they try to force connections between inappropriate data (just because it happens to be available) and the decisions that must be made.
The data you will seek to collect are essentially about key performance indicators. Indicators are observable phenomena that are linked to something that is not directly observed and can provide information that will answer an evaluation question. Results are not always neatly and directly observed. When measuring a result, there may be a number of indicators. For instance, profit is a result with various metrics which collectively indicate its level (for example, money collected, money paid out, and assets, among others). Indicators for customer service include referrals, repeat business, customer retention, length of accounts, and satisfaction survey scores.
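
One way to keep this explicit is to record, for each result, the observable indicators that collectively signal its level. The sketch below simply restates the examples above as a mapping; the structure itself is only an illustration, not a prescribed format.

```python
# A minimal, hypothetical mapping of results to the observable indicators
# that collectively signal their level (examples taken from the text above).

indicators_by_result = {
    "profit": [
        "money collected",
        "money paid out",
        "assets",
    ],
    "customer service": [
        "referrals",
        "repeat business",
        "customer retention",
        "length of accounts",
        "satisfaction survey scores",
    ],
}

# Listing the indicators under each result makes it easier to check that
# every evaluation question points to data that can actually be observed.
for result, indicators in indicators_by_result.items():
    print(f"Result: {result}")
    for indicator in indicators:
        print(f"  indicator -> {indicator}")
```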

4. Identify Data Sources
With a list of specific indicators for which to collect data, you must first determine where you can find those data. The data drive the appropriate source. You can likely find the data that you are looking for right in your own organization. Existing records about past and current performance may already be available, but collected by different parties in your organization and for different reasons. Some excellent sources include strategic plans, annual reports, project plans, consulting studies, and performance reports, to name a few.
Telecommunications and other technologies can often be used to link to reports, documents, databases, experts, and other sources like never before possible (the Internet is a great vehicle for efficiently linking up to these!). A number of companies, government agencies, and research institutions, nationally and internationally, publish a series of official studies and reports that could prove to be valuable sources of data.

5. Select Data Collection Methods
The right data collection methods and tools are a function of the data you are seeking. Likewise, the data you collect are a function of the methods you select.
When evaluators limit the data they collect by employing an overly narrow set of observation methods because they don’t know how to use others, their data set will not be complete and, in turn, their findings will not be valid. If you are after hard data such as sales figures, don’t use a survey to get people’s opinions of what those sales figures are; rather, review the relevant sales reports. Conversely, if it is people’s attitudes you want, there are a number of ways to ask them (interviews, focus groups, and surveys are some appropriate possibilities). There is extensive literature about these and other data collection methods. Be sure to make your selection based on their pros and cons, specifically with regard to important criteria such as appropriateness of the instrument for the required data, time, characteristics of the sample, comprehensiveness of the tool, previous experience with the tools being considered, and feasibility, among others.
Again, the ‘‘secret ingredient’’ for successfully collecting valid and reliable data is alignment of data type, data source, data collection tools, and data analysis procedures.
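
A minimal way to make that alignment visible is to record, for every indicator, the data type, source, collection method, and intended analysis in one plan and flag incomplete rows. The indicators and entries below are hypothetical, offered only as a sketch of such a plan.

```python
# A hypothetical data collection plan that keeps data type, data source,
# collection method, and analysis procedure aligned for each indicator.

collection_plan = [
    {
        "indicator": "quarterly sales figures",
        "data_type": "hard/quantitative",
        "source": "sales reports",
        "method": "document review",
        "analysis": "trend comparison against baseline",
    },
    {
        "indicator": "employee attitudes toward the new program",
        "data_type": "soft/qualitative",
        "source": "program participants",
        "method": "interviews and focus groups",
        "analysis": "thematic coding and frequency counts",
    },
]

required_fields = ("indicator", "data_type", "source", "method", "analysis")

# Any row missing one of the four aligned elements signals a gap in the plan.
for row in collection_plan:
    missing = [field for field in required_fields if not row.get(field)]
    status = "aligned" if not missing else f"incomplete: missing {missing}"
    print(f"{row['indicator']}: {status}")
```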

6. Select Data Analysis Tools
While data analysis is often thought to be mere ‘‘number crunching,’’ it is more than that. The analysis of data as part of an evaluation effort is the organization of information to discover patterns and fortify the arguments used to support conclusions or evaluative claims that result from your evaluation study.
In a nutshell, the analysis summarizes large volumes of data into a manageable and meaningful format that can quickly communicate its meaning. In fact, one might say that the analysis of the data begins even before its collection, by virtue of analyzing the characteristics of the required data before the methods for data collection are selected.
If you have quantitative data, various statistical operations can help you organize your data as you sort through your findings. Qualitative data is also subject to analytical routines. Qualitative observations can be ordered by source and by impact, or sorted according to general themes and specific findings. Checking the frequency of qualitative observations will begin to merge qualitative into quantitative data.
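
For example, tallying how often each coded theme appears in a set of qualitative observations is one simple way those observations begin to take quantitative form. The comments and theme labels below are hypothetical and serve only to show the tallying step.

```python
from collections import Counter

# Hypothetical coded observations from interviews: each comment has been
# tagged with the general theme it illustrates.
coded_observations = [
    ("clearer expectations from supervisors", "communication"),
    ("still unsure when to apply the new process", "training transfer"),
    ("customers notice faster responses", "customer impact"),
    ("more feedback from my manager", "communication"),
    ("tools are easier to use than before", "usability"),
    ("I get answers to questions sooner", "communication"),
]

# Counting theme frequency turns qualitative observations into a
# quantitative summary that can sit alongside other indicators.
theme_counts = Counter(theme for _, theme in coded_observations)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```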

7. Communicate Results and Recommend Improvements
The importance of effective communication cannot be overstated. A rigorous evaluation does not speak for itself. Communicating with key stakeholders throughout the evaluation process keeps them aware of what you are doing and why, which in turn increases the amount of trust they place in you and your efforts. In addition, it allows them the opportunity to participate and provide you with valuable feedback. By the time the final report and debriefing come along, these products will not be seen as something imposed on them, but rather as something that they help create. With this type of buy-in, resistance to the findings will likely be lower.


Managing Impact Evaluation Framework
Every organization is different and has its own set of goals, values, strengths, and weaknesses. Nevertheless, an impact evaluation framework should possess the following five characteristics:

1. Align all key results at various organizational levels (systemic): Recall that the value of any intervention is whether it ultimately helped the organization get closer to achieving its vision. Thus, do not track only immediate results at the intermediate level, but be sure to hypothesize and test the linkages all the way up to the vision level goals.

2. Provide linkages between interventions or initiatives and the indicators they are to impact: Remember that one of the en-route tasks of evaluation is to provide evidence of the effectiveness of implemented solutions. Thus, it is important to articulate for everyone the linkages among these solutions and between the solutions and the organizational indicators they are intended to impact. The clearer the linkages, the better able people will be to understand and use the data (a simple linkage map of this kind is sketched after this list).

3. Responsive and dynamic: The evaluation framework is more of a template than a confining structure. While the framework itself might remain fairly constant, the actual indicators, or even the results they track, may change as objectives are met and new ones are derived. Recall that, while solutions might solve old problems, they may also bring with them a new set of challenges. Modifying this framework in order to keep its indicators current should not be done at the expense of the constancy of your organization’s purpose. Changing your mission every year does not make you current, but rather gives you a moving target your organization will not likely reach.

4. Accessible by all decision-makers: While all these characteristics are critical, this one is probably one of the most difficult for leaders to grasp. The idea that the organization’s report card will be open for all to see is quite scary for many. It is important to remember that the purpose of evaluating is to collect and interpret data for improving performance, not for pointing fingers and blaming. All must have ready access so that they can make timely decisions about how to improve performance — individual and organizational. These efforts, of course, should be coordinated and integrated.

5. Feedback and communication: You cannot talk about continual improvement without considering the feedback loop upon which it rests. The feedback loop represents the reiterative nature of tracking and adjusting. Performance data should not only be accessible by all, but should be clearly understood by all. Thus, providing consistent feedback about performance is part of bigger communication systems. Progress, milestones reached (or not reached), action plans for reaching desired goals, and so forth should be consistently and accurately communicated throughout the organization.
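
As a purely hypothetical illustration of characteristics 1 and 2, the sketch below records, for a single intervention, the indicators it is hypothesized to impact and the organizational level of each indicator, so the chain from operational deliverables up to tactical and strategic results stays visible. The intervention and indicator names are assumptions made for illustration.

```python
# A hypothetical linkage map: one intervention, the indicators it is
# expected to move, and the organizational level of each indicator.

linkage_map = {
    "leadership development program": [
        {"indicator": "manager coaching hours per month", "level": "operational"},
        {"indicator": "team sales revenue", "level": "tactical"},
        {"indicator": "client quality-of-life ratings", "level": "strategic"},
    ],
}

# Rolling the hypothesized linkages up by level makes it easy to see whether
# an intervention is tied only to internal deliverables or also to tactical
# and strategic results.
for intervention, links in linkage_map.items():
    print(intervention)
    for level in ("operational", "tactical", "strategic"):
        names = [link["indicator"] for link in links if link["level"] == level]
        print(f"  {level}: {', '.join(names) if names else '(no linkage hypothesized)'}")
```

Each linkage in such a map is a hypothesis to be tested against the data, not a guarantee of causation.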

All this, of course, has to take place in the context of a supportive environment, one where using relevant, reliable, and valid data before making decisions is part of the organizational culture. This can only be accomplished by modeling such behavior from the top of the organization on down and by aligning the proper consequences with the desired accomplishments and behaviors related to continual improvement.