Post-Doc Blogpost: Issue Polarization & Evaluator Credibility

On the topic of ideology and polarization, Contandriopoulos & Brousselle (2012) note, “Converging theoretical and empirical data on knowledge use suggest that, when a user’s understanding of the implications of a given piece of information runs contrary to his or her opinions or preferences, this information will be ignored, contradicted, or, at the very least, subjected to strong skepticism and low use” (p. 63). Program evaluation, like any other form of research and analysis, must be judged in context. Yet context is not simply the setting of the evaluation, nor its intent alone; it must also include the evaluation’s design and the credibility of the evaluator. Evaluations should be conducted by qualified people who establish and maintain credibility in the evaluation context (Yarbrough et al., 2011, p. 15). This points to the need not only to ensure an audience capable of receiving the ideas and findings the evaluation brings forth, but also to ensure that the evaluator can preserve the credibility of the study by demonstrating his or her own professional credibility.

An example of this in action occurred at a program evaluation session held as part of the Orange County Alliance for Community Health Research last year. This event, presented at UC Irvine, included a three-hour presentation on program evaluation delivered by Michelle Berelowitz, MSW (UC Irvine, 2012). Ms. Berelowitz spoke at length on the broader purpose of program evaluation, the process for designing and conducting one, and its potential applications. The event was attended by a multitude of program directors and other leaders of health and human services agencies near the university, who came both to learn about the process and to network with other agencies. Polarization was introduced, and evaluator credibility first called into question, during the introduction of Ms. Berelowitz’s presentation. She asked the audience, in very plain language, who among them felt motivated when it came time to perform evaluations of their programs each year. No one replied as being motivated; instead, a general consensus of disregard for the annualized process loomed. This calls the evaluator’s credibility into question, because the process is only as valuable as its audience perceives it to be, and program evaluation is only meaningful when it can impact decisions and effect change.

If, during a presentation intended to inform others of the very merits of program evaluation, the evaluator’s credibility is called into question, strategies must be enacted to counteract this stifling critique and inattention to the process’s value. To return briefly to the value of identifying and addressing polarization among stakeholders, Contandriopoulos & Brousselle (2012) remark, “as the level of consensus among participants drops, polarization increases and the potential for resolving differences through rational arguments diminishes as debates tend toward a political form wherein the goal is not so much to convince the other as to impose one’s opinion” (p. 63). Thus, in a room where a presentation on the merits of program evaluation will be received with tepid acceptance, the evaluator holds the responsibility to convey the process in a way that fosters consensus and restores credibility to the process.

One means of establishing greater evaluator credibility is ensuring inclusion. This is of little surprise, as much of the program evaluation literature centers on stakeholder inclusion. To address specifically how this relates to evaluator credibility, Yarbrough et al. (2011) write, “Build good working relationships, and listen, observe, and clarify. Making better communication a priority during stakeholder interactions can reduce anxiety and make the evaluation processes and activities more cooperative” (p. 18). Ms. Berelowitz exercised this masterfully: throughout the presentation she was engaging, she drew insights from multiple attendees, she worked to incorporate many of the attendees’ own issues into the presentation’s material, and she responded thoughtfully to attendee questions and to deeper inquiries about research paradigms.

Another method by which evaluator credibility can be restored is ensuring that the design of the research is one the audience can be receptive to. On this topic Creswell (2009) writes, “In planning a research project, researchers need to identify whether they will employ a qualitative, quantitative, or mixed methods design. This design is based on bringing together a worldview or assumptions about research, the specific strategies of inquiry, and research methods” (p. 20). These conclusions affect not only the design of the research itself, and by extension the design of a program evaluation, but also the evaluator’s ability to design meaningful research that conveys information according to long-established assumptions about research. In this instance, Ms. Berelowitz delivered a presentation on program evaluation that was deeply supported by the extant literature, that conveyed her worldview and assumptions about research, and thus about program evaluation, quite clearly, and that allowed attendees to witness both the technical and practical merits of navigating the program evaluation process in the way presented. By ensuring the inclusion of stakeholders and by grounding the work in a clear worldview, explicit assumptions, and a defensible design, the presentation was ultimately a success, and attendees left expressing motivation for the program evaluation process ahead.

Contandriopoulos, D., & Brousselle, A. (2012). Evaluation models and evaluation use. Evaluation, 18(1), 61–77.

Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage Publications, Inc.

UC Irvine. (2012). Program evaluation [Video]. Retrieved October 22, 2013, from http://www.youtube.com/watch?v=XD-FVzeQ6NM

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.
