Post-Doc Blogpost: Engaging Evaluation Stakeholders

When it comes to the challenges of engaging stakeholders in the evaluation process, I believe the limitations exist along several dimensions: the organization, the program, and the program’s evaluation. I begin with the organization because, regardless of the steps taken to factor in needs, include stakeholders, and design strong evaluations, results may still prove lackluster if the organization is structured in such a way that stakeholders are prevented from working with you or sharing critical input. Thompson asserts that organizational goals are established through coalition behavior, on the grounds that organizations are interdependent with task-environment elements and that organizational components are also interdependent with one another; unilateral action is not compatible with interdependence, and the pyramid headed by the single all-powerful individual has become a symbol of complex organization only through historical and misleading accident (Thompson, 2004, p. 132). I have cited Thompson before when writing on the task environment; here I cite him with regard to the structure of organizations themselves. At times the very structure of an organization can limit an evaluator’s opportunity to reach all pertinent stakeholders. In highly structured, steeply hierarchical organizations, it may take nothing less than multi-level approvals to seek the input of a single critical yet lower-level stakeholder. One inclusion strategy to correct for this: first establish a matrix organization within the larger organization, formed for the sole purpose of sustaining the program evaluation effort.
Involving an interdepartmental matrix team, drawn as a cross-section through many levels of the organization and representing as many functions and divisions as possible, negates much of the limitation imposed by normative command-and-control structures while ensuring representation from nearly every corner of the organization.

The program itself also presents limitations to stakeholder inclusion and, therefore, engagement. A common difficulty with information from these sources is a lack of the specificity and concreteness necessary to clearly identify specific outcome measures; for the evaluator’s purposes, an outcome description must indicate the pertinent characteristic, behavior, or condition that the program is expected to change (Rossi, Lipsey, & Freeman, 2004, p. 209). The very design of the program, and the way its intent is disseminated, can give evaluators only a limited view of the program. This in turn limits stakeholder inclusion: an under-representative description of the program will yield an under-representative list of required stakeholders. Where this is the case, the inclusion strategy rests on gaining a better understanding of the program itself. The strategy has less to do with identifying individuals and more to do with identifying pertinent processes and impacts; only then can relevant individuals be identified and included relative to those processes.

A third and final consideration for stakeholder engagement is the design of the evaluation itself, as much of the limitation of any research lies in the limitations embedded in its design. On the complexities of interviewing, remember that the more you plan by determining exactly what you want to know, the more efficiently you will get what you need; you need not script an interview around a set list of questions, but you should prepare so that you do not question your source aimlessly (Booth, Colomb, & Williams, 2008, p. 82). Just as specifying the program further lets the evaluator know more about what is under evaluation, and therefore whom to ask about it, it is equally effective to consider exactly what you wish to learn from stakeholders before engaging them in the evaluation process. The difference between aimlessly questioning a stakeholder for an hour and purposively questioning a stakeholder for ten minutes is engagement: the latter interview begets an engaged stakeholder, while the former begets a rambling dialogue rife with detractors and partial information alike. With this in mind, the final strategy for engaging stakeholders more fully is to arrive better prepared, both for the evaluation and for each interview. While this strategy has little to do with how a stakeholder is selected, the literature suggests that what is done with a stakeholder once you have one makes all the difference.

Booth, W. C., Colomb, G. G., & Williams, J. M. (2008). The craft of research (3rd ed.). Chicago, IL: The University of Chicago Press.

Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage Publications, Inc.

Thompson, J. D. (2004). Organizations in action: Social science bases of administrative theory. New Brunswick, NJ: Transaction Publishers.
