Post-Doc Blogpost: Meaningful Products & Practical Procedures in Program Evaluation

Of the Program Evaluation Standards, standard U6 concerns meaningful processes and products, whereas standard F2 concerns practical procedures.  To begin, the definition of U6 as described by Yarbrough et al. (2011) reads, “Evaluations should construct activities, descriptions, and judgments in ways that encourage participants to rediscover, reinterpret, or revise their understandings and behaviors” (p. 51). The authors continue when discussing F2, “Evaluation procedures should be practical and responsive to the way the program operates” (p. 87). While U6 is among the utility standards and F2 among the feasibility standards, I believe they share much of the same intent as it pertains to program evaluation.  They can be viewed as facets of a single whole: U6, on meaningful processes and products, addresses the real need for evaluation audiences not only to be able to interpret the findings that are shared, but also to be able to make positive change from those results.  F2 speaks pointedly to respecting the program’s existing operations, asking that evaluators act practically in light of what is already in place.  In their combined essence, U6 asks for findings that mean something to audiences, and F2 asks that those findings take existing conditions into account.  Yet is this not already a fundamental requirement of any successful change initiative?

A position in the field that I am quite interested in is the work that institutional research (IR) teams perform.  Working simultaneously and directly with members of the Office of Institutional Research and Assessment at one university and the Office of Assessment at another, I have gained a deep respect for the work they are collectively performing for their respective institutions.  As Howard, McLaughlin, and Knight (2012) define the profession, “two of the most widely accepted definitions are Joe Saupe’s (1990) notion of IR as decision support – a set of activities that provide support for institutional planning, policy formation, and decision making – and Cameron Fincher’s (1978) description of IR as organizational intelligence” (p. 22).  In both cases the focus is on data, and on the use of data for an institution to know more about itself in the future than it knows in the present.  This may take the form of performance metrics, operational efficiency analyses, forward-looking planning exercises, or simply the evaluation of data that has been better culled, processed, cleaned, and presented than in the past.  Yet amid each of these steps is the very real need not only to provide a value-added process that is practical, but also to speak to the existing system if the findings are to carry merit.  This is how we return to both U6 and F2, the standards of meaningful processes and products and of practical procedures.
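To make the notion of decision support a bit more concrete, consider the minimal sketch below. It is purely illustrative: the records, the field names, and the retention definition are hypothetical and are not drawn from either of the offices mentioned above. It simply shows, in miniature, what it looks like for raw data to be culled of incomplete rows, cleaned, and summarized into a single metric that planning or policy discussions could act on.

```python
"""Minimal, illustrative sketch of IR-style decision support.

The records, field names, and retention definition below are hypothetical;
they stand in for the kind of culling, cleaning, and summarizing an
institutional research office might perform on real enrollment data.
"""

from collections import defaultdict

# Hypothetical raw extract: some rows are incomplete and must be culled.
raw_records = [
    {"student_id": "A1", "cohort": "2011", "returned_year2": "Y"},
    {"student_id": "A2", "cohort": "2011", "returned_year2": "N"},
    {"student_id": "A3", "cohort": "2011", "returned_year2": ""},   # missing flag
    {"student_id": "A4", "cohort": "2012", "returned_year2": "Y"},
    {"student_id": "A5", "cohort": "2012", "returned_year2": "Y"},
]


def clean(records):
    """Drop rows with a missing retention flag and normalize the values."""
    cleaned = []
    for row in records:
        flag = row["returned_year2"].strip().upper()
        if flag in {"Y", "N"}:
            cleaned.append({**row, "returned_year2": flag == "Y"})
    return cleaned


def retention_by_cohort(records):
    """Summarize the cleaned rows into a per-cohort retention rate."""
    totals = defaultdict(lambda: [0, 0])  # cohort -> [returned, enrolled]
    for row in records:
        totals[row["cohort"]][1] += 1
        if row["returned_year2"]:
            totals[row["cohort"]][0] += 1
    return {cohort: returned / enrolled
            for cohort, (returned, enrolled) in totals.items()}


if __name__ == "__main__":
    rates = retention_by_cohort(clean(raw_records))
    for cohort, rate in sorted(rates.items()):
        print(f"Cohort {cohort}: first-to-second-year retention {rate:.0%}")
```

The particular metric is beside the point; what matters is that the culling and cleaning steps are transparent and that the summary arrives in a form the institution can actually use.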

Forging a tangible relationship between the evaluation process and relevant stakeholders, including sponsors, evaluators, implementers, evaluation participants, and intended users, is key. This permits greater understanding of the processes employed and the products sought, and it allows for greater buy-in when results are later shared.  Kaufman et al. (2006) present a review of the evaluation plan used to assess the outcomes of a family violence initiative intended to promote positive social change. On meaningful products and practical procedures, Kaufman et al. (2006) remark, “Evaluations are most likely to be utilized if they are theory driven, emphasize stakeholder participation, employ multiple methods and have scientific rigor… In our work, we also place a strong emphasis on building evaluation capacity” (p. 191).  The evaluation focused first on creating a logic model to articulate a well-defined program concept, and data-driven decisions were then cascaded from the model that was constructed. With the combined efforts of project management, the program’s staff, and the evaluation team, an evaluation plan was crafted.  This plan permitted broad buy-in because so many stakeholders were involved. In the end, it also enabled the work of the evaluation team to continue in a less acrimonious environment and reemphasized for the team the importance of collaborating with key stakeholders from the beginning so that those stakeholders bought into and supported the evaluation process (Kaufman et al., 2006, p. 195). The results of this collaborative, synthesizing process included greater stakeholder participation, a heightened level of rigor, increased capacity, and, most importantly for the program, the use of ‘common measures’ across the program.
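As a rough illustration of what building a logic model and then settling on ‘common measures’ might look like in practice, here is a minimal sketch. The components, indicators, and site names are invented for illustration; they are not drawn from the Kaufman et al. (2006) initiative itself.

```python
"""Hypothetical sketch: a logic model as a data structure, with common measures.

The components, indicators, and sites are invented for illustration only;
they are not drawn from the Kaufman et al. (2006) initiative itself.
"""

# A pared-down logic model: each component lists the indicators it implies.
logic_model = {
    "inputs":     ["staff_hours", "grant_dollars"],
    "activities": ["families_served", "trainings_held"],
    "outcomes":   ["repeat_referrals", "caregiver_stress_score"],
}

# Indicators each site already collects in its own records.
site_indicators = {
    "north_site": {"families_served", "trainings_held", "repeat_referrals", "staff_hours"},
    "south_site": {"families_served", "repeat_referrals", "caregiver_stress_score", "staff_hours"},
}


def common_measures(logic_model, site_indicators):
    """Return the logic-model indicators every site can already report,
    i.e., the candidate 'common measures' for the whole initiative."""
    modeled = {ind for indicators in logic_model.values() for ind in indicators}
    shared = set.intersection(*site_indicators.values())
    return sorted(modeled & shared)


if __name__ == "__main__":
    print("Candidate common measures:", ", ".join(common_measures(logic_model, site_indicators)))
```

The value of such an exercise lies less in the code than in the conversation it forces: stakeholders must agree on which indicators the logic model implies and which of those every site can realistically report.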

Yet increased stakeholder involvement is not the only benefit of ensuring meaningful processes and products as well as practical procedures. This heightened pragmatism in process also lends itself to greater design efficacy. In a study of 209 PharmD students at the University of Arizona College of Pharmacy (UACOP), Plaza et al. (2007) explain, “Curriculum mapping is a consideration of when, how, and what is taught, as well as the assessment measures utilized to explain achievement of expected student learning outcomes” (p. 1). This curriculum mapping exercise was intended to compare the ‘designed curriculum,’ the ‘delivered curriculum,’ and the ‘experienced curriculum.’ The results show strong concordance between student and faculty perceptions, reinforcing not only a sound program evaluation design capable of surfacing that concordance, but also, as measured by the mapped outcomes, an effective program.  Speaking equally to the design aspects of pragmatism in program evaluation, Berlowitz and Graco (2010) developed “a system-wide approach to the evaluation of existing programs… This evaluation demonstrates the feasibility of a highly coordinated “whole of system” evaluation. Such an approach may ultimately contribute to the development of evidence-based policy” (p. 148). This study, with its rigorous collection of data from existing datasets, was not designed to propose a new means of gathering and aggregating data; it was simply a new method of taking advantage of the data already largely available while producing a broader yet more actionable series of conclusions, strong enough to inform policy decisions.
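The comparison of the ‘delivered’ and ‘experienced’ curricula lends itself to a simple illustration. The sketch below is hypothetical; the outcomes and the 0-3 coverage ratings are invented and are not Plaza et al.’s (2007) instrument. It shows only one way two coverage maps might be compared outcome by outcome to surface concordance or gaps.

```python
"""Hypothetical sketch of a curriculum-mapping concordance check.

The outcomes and ratings are invented for illustration; they are not the
instrument used by Plaza et al. (2007). Each map records, on a 0-3 scale,
how much coverage an outcome received according to faculty (delivered)
and students (experienced).
"""

# Faculty-reported coverage of each expected learning outcome (delivered curriculum).
delivered = {"pharmacology": 3, "patient_counseling": 2, "drug_literature": 3, "ethics": 1}

# Student-reported coverage of the same outcomes (experienced curriculum).
experienced = {"pharmacology": 3, "patient_counseling": 1, "drug_literature": 2, "ethics": 1}


def concordance(delivered, experienced, tolerance=0):
    """Flag each outcome as concordant when the two ratings differ by at most `tolerance`."""
    report = {}
    for outcome in sorted(set(delivered) | set(experienced)):
        d = delivered.get(outcome)
        e = experienced.get(outcome)
        if d is None or e is None:
            report[outcome] = "missing from one map"
        elif abs(d - e) <= tolerance:
            report[outcome] = "concordant"
        else:
            report[outcome] = f"gap of {abs(d - e)}"
    return report


if __name__ == "__main__":
    for outcome, status in concordance(delivered, experienced, tolerance=0).items():
        print(f"{outcome:20s} {status}")
```

A tolerance of one point, for instance, would treat small perception gaps as concordant; the point is simply that placing the maps side by side makes any discordance visible and actionable.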

Where the above considerations come together, the utility of meaningful processes and products and the feasibility of practical procedures, is in their application to a position held in an office of institutional research.  On the role IR plays in ensuring pragmatism in program evaluation, Howard et al. (2012) note, “Driven by the winds of accountability, accreditation, quality assurance, and competition, institutions of higher education throughout the world are making large investments in their analytical and research capacities” (p. 25). These investments remain critical to the sustainability of the institutions making them, and they require that what is discovered within these offices be put into practice by the very departments and programs being evaluated.  Institutional research is not a function that exists solely for performance evaluation, nor should IPEDS or accreditation reporting responsibilities overshadow the need for actionable data for the programs under review.  Rather, we seek an environment in which IR analysts and researchers are able to use practical processes to ensure greater stakeholder buy-in and effective design.

An environment where practical procedures are used is equally one where replicability, generalizability, and transferability all exist within a more stable ecosystem of program evaluation. Speaking to this need, in a two-year follow-up study of needs and attitudes related to peer evaluation, DiVall et al. (2012) write, “All faculty members reported receiving a balance of positive and constructive feedback; 78% agreed that peer observation and evaluation gave them concrete suggestions for improving their teaching; and 89% felt that the benefits of peer observation and evaluation outweighed the effort of participating” (p. 1). These are not results achieved from processes in which faculty found no direct application of the peer review program; these data were gathered from a program in which respect for the existing process was maintained and the product was seen as having more value, in terms of opportunity cost, than the time and effort required to participate. Similarly, using a portfolio evaluation tool that measured student achievement of a nursing program’s goals and objectives, Kear and Bear (2007) state, “faculty reported that although students found writing the comprehensive self-assessment sometimes daunting, in the end, it was a rewarding experience to affirm their personal accomplishments and professional growth” (p. 113). This only further affirms the very real need for evaluators to continue to discover means of collecting, aggregating, and analyzing data that speak to existing processes.  It also reinforces the need for program evaluations to result in conclusions or recommendations that make the greatest use of existing processes, allowing for the sweeping institutionalization seen in the Kear and Bear study.  Finally, on this need for practicality in process and product within program evaluation and research as a whole, Booth, Colomb, and Williams (2008) remark, “When you do research, you learn something that others don’t know. So when you report it, you must think of your reader as someone who doesn’t know it but needs to and yourself as someone who will give her reason to want to know it” (p. 18).

Berlowitz, D. J., & Graco, M. (2010). The development of a streamlined, coordinated and sustainable evaluation methodology for a diverse chronic disease management program. Australian Health Review, 34(2), 148-151. Retrieved from http://search.proquest.com/docview/366860672?accountid=14872

Booth, W. C., Colomb, G. G., & Williams, J. M. (2008). The craft of research (3rd ed.). Chicago, IL: The University of Chicago Press.

DiVall, M., Barr, J., Gonyeau, M., Matthews, S. J., Van Amburgh, J., Qualters, D., & Trujillo, J. (2012). Follow-up assessment of a faculty peer observation and evaluation program. American Journal of Pharmaceutical Education, 76(4), 1-61. Retrieved from http://search.proquest.com/docview/1160465084?accountid=14872

Howard, R.D., McLaughlin, G.W., & Knight, W.E. (2012). The handbook of institutional research. San Francisco, CA: John Wiley & Sons, Inc.

Kaufman, J. S., Crusto, C. A., Quan, M., Ross, E., Friedman, S. R., O’Reilly, K., & Call, S. (2006). Utilizing program evaluation as a strategy to promote community change: Evaluation of a comprehensive, community-based, family violence initiative. American Journal of Community Psychology, 38(3-4), 191-200. doi:http://dx.doi.org/10.1007/s10464-006-9086-8

Kear, M., & Bear, M. (2007). Using portfolio evaluation for program outcome assessment. Journal of Nursing Education, 46(3), 109-114. Retrieved from http://search.proquest.com/docview/203971441?accountid=14872

Plaza, C., Draugalis, J. R., Slack, M. K., Skrepnek, G. H., & Sauer, K. A. (2007). Curriculum mapping in program assessment and evaluation. American Journal of Pharmaceutical Education, 71(2), 1-20. Retrieved from http://search.proquest.com/docview/211259301?accountid=14872

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.
