As I approach full speed in the post-doctoral program, I also approach my first opportunity to share publicly the insights derived from this study of assessment, evaluation, and accountability. The American Evaluation Association (AEA) articulates the social and ethical responsibilities of evaluators. Alongside these, the Association for Institutional Research (AIR) expresses the social and ethical responsibilities of institutional research as an education-based area of interest. Yet first, a personal introduction, as requested by this assignment. I have chosen institutional research as my professional education-based area of interest, as research and analysis have been at the heart of much of what I’ve done for the past decade or more.
For well over ten years I have straddled industry and academe, not only to remain a lifelong learner but to apply what I take from each course as readily as possible to my work in industry and in the classroom, to the benefit of my employers and my students. As mentioned on my ‘about’ page, my work includes a multitude of projects focused on distilling a clear view of institutional effectiveness and program performance. My roles have included senior outcomes analyst, management analyst, operations analyst, assessor, and faculty member for organizations in industries ranging from higher education to hardware manufacturing and business intelligence. Each position brought a new opportunity to assimilate new methods for assessing data; each new industry brought a new opportunity to learn a new language, adhere to new practices, and synthesize the combined experience that is the sum of their parts. Yet in no instance have I come away with the steadfast sense that I have learned more and therefore have less left to learn. Instead, in each instance I feel as though I know and have experienced even less of what the world has to offer. Focusing this indefinite thirst on assessment and evaluation specifically, the task becomes pursuing ever-greater growth and ever-greater success across a wide range of applications, industries, and instances, while remaining true to guiding principles that serve those who benefit from any lesson I learn or analysis I perform.
The Program Evaluation Standards include propriety standards addressing responsive and inclusive orientation, formal agreements, human rights and respect, clarity and fairness, transparency and disclosure, conflicts of interest, and fiscal responsibility. At the heart of these, Yarbrough, Shulha, Hopson, and Caruthers (2011) remark, “Ethics encompasses concerns about the rights, responsibilities, and behaviors of evaluators and evaluation stakeholders… All people have innate rights that should be respected and recognized” (p. 106). This compares with a like-minded statement from the AEA itself: “Evaluators have the responsibility to understand and respect differences among participants, such as differences in their culture, religion, gender, disability, age, sexual orientation and ethnicity, and to account for potential implications of these differences when planning, conducting, analyzing, and reporting evaluations” (Guiding Principles for Evaluators, n.d., para. 40). Finally, Howard, McLaughlin, and Knight (2012), in The Handbook of Institutional Research, write, “All employees should be treated fairly, the institutional research office and its function should be regularly evaluated, and all information and reports should be secure, accurate, and properly reported… The craft of institutional research should be upheld by a responsibility to the integrity of the profession” (p. 42). Thus, in the end, while this work had intended to explore a juxtaposition, a word that implies at least a slight degree of contrast, it in actuality reveals nothing of the sort. Rather, we find congruence, and we find agreement.
It is important to uphold standards for the ethical behavior of evaluators, as the profession is one steeped in a hard focus on data and the answers data provide. We as human beings, however, tend to this profession while flawed. We make mistakes, we miscalculate, we deviate from design, and we inadvertently insert bias into our findings. None of this may be done on purpose, and certainly not all transgressions are present in every study; the implication remains, though, that we can make mistakes and are indeed fallible. At the same time, ours is a profession tasked with discerning what is signal and what is noise, which programs work and which curricula do not, which survey shows desired outcomes and which employees are underperforming. These questions demand our best efforts, our most scientific endeavors, and our most resolute trajectories to identify only truths, however scarce, amid the many opportunities to be tempted toward manufacturing alternate, albeit perhaps more beneficial, realities for ourselves as evaluators and for our stakeholders. All participants have rights, all evaluators have rights, and all sponsors have rights. It is our task to serve the best collective interest, using the best methods available to ensure a properly informed future.
American Evaluation Association. (n.d.). Guiding principles for evaluators. Retrieved September 11, 2013, from http://www.eval.org/p/cm/ld/fid=51
Howard, R. D., McLaughlin, G. W., & Knight, W. E. (2012). The handbook of institutional research. San Francisco, CA: Jossey-Bass.
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.