Post-Doc Blogpost: Issue Polarization & Evaluator Credibility

On the topic of ideology and polarization, Contandriopoulos and Brousselle (2012) note, “Converging theoretical and empirical data on knowledge use suggest that, when a user’s understanding of the implications of a given piece of information runs contrary to his or her opinions or preferences, this information will be ignored, contradicted, or, at the very least, subjected to strong skepticism and low use” (p. 63). Program evaluation, like any other form of research and analysis, must be evaluated in context. Yet context is not defined solely by the setting of the evaluation, nor by its intent alone. It must also include consideration of the evaluation’s design and of the credibility of the evaluator. Evaluations should be conducted by qualified people who establish and maintain credibility in the evaluation context (Yarbrough et al., 2011, p. 15). This points not only to the need for an audience capable of receiving the ideas and findings the evaluation brings forth, but also to the equally necessary requirement that the evaluator preserve the credibility of the study by demonstrating his or her own professional credibility.

An example of this in action occurred at a program evaluation session held as part of the Orange County Alliance for Community Health Research last year. The event, hosted at UC Irvine, included a three-hour presentation on program evaluation delivered by Michelle Berelowitz, MSW (UC Irvine, 2012). Ms. Berelowitz spoke at length on the broader purpose of program evaluation, the process for designing and conducting program evaluation, and its potential applications. The event was attended by a multitude of program directors and other leaders of health and human services agencies near the university, who intended both to learn about the process and to network with other agencies. Polarization was introduced, and with it the first instance of evaluator credibility being called into question, during the introduction to Ms. Berelowitz’s presentation. In very plain language, she asked the audience who among them felt motivated when it came time to perform evaluations of their programs each year. None replied as being motivated; instead, a general consensus of disregard for the annualized process loomed over the room. This calls the evaluator’s credibility into question, as the process itself is only as valuable as its audience perceives it to be, and program evaluation is only meaningful when it can impact decisions and effect change.

If, during a presentation intended to inform others of the very merits of the program evaluation process, the evaluator’s credibility is called into question, strategies must be enacted to counteract this stifling critique and inattention to the process’s value. To return briefly to the value of identifying and addressing polarization among stakeholders, Contandriopoulos and Brousselle (2012) remark, “as the level of consensus among participants drops, polarization increases and the potential for resolving differences through rational arguments diminishes as debates tend toward a political form wherein the goal is not so much to convince the other as to impose one’s opinion” (p. 63). Thus, in a room where a presentation on the merits of program evaluation is to be received with tepid acceptance, the evaluator holds the responsibility to convey the process in a way that fosters consensus and restores credibility to the process.

One means of establishing greater evaluator credibility is ensuring inclusion. This should come as no surprise, as much of the literature on program evaluation centers on stakeholder inclusion. To address specifically how this relates to evaluator credibility, Yarbrough et al. (2011) write, “Build good working relationships, and listen, observe, and clarify. Making better communication a priority during stakeholder interactions can reduce anxiety and make the evaluation processes and activities more cooperative” (p. 18). Ms. Berelowitz exercised this masterfully: throughout the presentation she was engaging, she drew insights from multiple attendees, she worked to incorporate many of the attendees’ own issues into the presentation’s material, and she was thoughtfully responsive to attendee questions and further inquiry.

Another method by which evaluator credibility can be restored is ensuring the design of the research is one the audience can be receptive to. On this topic Creswell (2009) writes, “In planning a research project, researchers need to identify whether they will employ a qualitative, quantitative, or mixed methods design. This design is based on bringing together a worldview or assumptions about research, the specific strategies of inquiry, and research methods” (p. 20). These considerations shape not only the design of the research itself, and by extension the design of the program evaluation, but also the evaluator’s ability to design meaningful research that conveys information according to long-established assumptions about research. In this instance, Ms. Berelowitz delivered a presentation on program evaluation that was deeply supported by the extant literature, that articulated her worldview and assumptions about research, and thus about program evaluation, quite clearly, and that allowed attendees to witness both the technical and practical merits of navigating the program evaluation process in the way presented. By ensuring the inclusion of stakeholders, and by making her worldview, inherent assumptions, and defensible design explicit, the presentation was ultimately a success, and attendees left conveying motivation for the program evaluation process ahead.

Contandriopoulos, D., & Brousselle, A. (2012). Evaluation models and evaluation use. Evaluation, 18(1), 61–77.

Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage Publications, Inc.

UC Irvine. (2012). Program evaluation. Retrieved October 22, 2013 from http://www.youtube.com/watch?v=XD-FVzeQ6NM.

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.


Post-Doc Blogpost: On Explicit Evaluation Reasoning

Evaluation reasoning leading from information and analyses to findings, interpretations, conclusions, and judgments should be clearly and completely documented (Yarbrough et al., 2011, p. 209). This standard arises not solely to ensure one’s conclusions are logical; it also functions as both a final filter and an ultimate synthesizer of the results of all other accuracy standards. A7, Explicit Evaluation Reasoning, as defined in The Program Evaluation Standards, serves to make known the efficacy of the process by which conclusions are reached. Of this standard, Yarbrough et al. (2011) continue, “If the descriptions of the program from our stakeholders are adequately representative and truthful, and if we have collected adequate descriptions from all important subgroups (have sufficient scope), then we can conclude that our documentation is (more) likely to portray the program accurately” (p. 209). This level of holism leaves us with a critical imperative: to serve the program we are evaluating well, and to serve the negotiated purposes of the evaluation to their utmost.

Of the need to ensure clarity, logic, and transparency in one’s process, Booth, Colomb, and Williams (2008) elucidate, “[Research] is a profoundly social activity that connects you both to those who will use your research and to those who might benefit – or suffer – from that use” (p. 273). We therefore have a responsibility, as evaluators and as researchers, to conduct ourselves and to document our process explicitly. Doing so preserves attributes essential to quality research, such as reproducibility, generalizability, and transferability. Yet there are also more specific considerations at play. To illustrate this standard’s importance to current and future professional practice, consider a recent job posting for a Program Evaluator with the State of Connecticut Department of Education. The description for this position includes the following: “A program evaluation, measurement, and assessment expert is sought to work with a team of professionals developing accountability measures for educator preparation program approval. Key responsibilities will include the development of quantitative and qualitative outcome measures, including performance-based assessments and feedback surveys, and the establishment and management of key databases for annual reporting purposes” (American Evaluation Association, n.d., para. 2). This position covers a wide range of AEA responsibilities and makes clear, from only the second paragraph, the sheer scope of responsibility it carries. And while the required qualifications mention expertise in program evaluation, qualitative and quantitative data analyses, and research methods, they more importantly conclude with the need to ‘develop and maintain cooperative working relationships’ and to demonstrate skill in working ‘collaboratively and cooperatively with internal colleagues and external stakeholders’. What is required, then, is not solely a researcher with broad technical expertise, nor simply a methodologist with a program evaluation background, but a member of the research community who can deliver on the palpable need to produce defensible conclusions from explicit reasoning in a way that connects with a broad audience of users and stakeholders.

Explicit reasoning, expressed in a way that is digestible by readers, defensible to colleagues, and actionable by program participants, requires the researcher to be comfortable with where he or she is positioned in relation to the research itself when communicating both process and results. This is known in the literature as positionality. Andres (2012) speaks of this in saying, “This positionality usually involves identifying your many selves that are relevant to the research on dimensions such as gender, sexual orientation, race/ethnicity, education attainment, occupation, parental status, and work and life experience” (p. 18). And yet why so many admissions solely for the purpose of locating oneself within the research? Because positionality has as much to do with the researcher as it does with the researcher’s position and its impact on program evaluation outcomes. An example of this need for clarity comes to us from critical action research. Kemmis and McTaggart (2005) describe, “Critical action research is strongly represented in the literatures of educational action research, and there it emerges from dissatisfaction with classroom action research that typically does not take a broad view of the role of the relationship between education and social change… It has a strong commitment to participation as well as to the social analyses in the critical social science tradition that reveal the disempowerment and injustice created in industrialized societies” (p. 561). With this in mind, it stands to reason that one can only succeed in such a position if the researcher is made clear, his or her position relative to the research is clear, his or her stance on justice (to name only one example) is considered, the process by which the research is conducted is clear, and the way this person, in relation to this research, renders judgment on the data collected is clear. One filling this Program Evaluator role, just as with many others like it, must be permitted to serve as both researcher and advocate, exercising objective candor throughout.

American Evaluation Association. (n.d.). Career. Retrieved October 9, 2013 from http://www.eval.org/p/cm/ld/fid=113.

Andres, L. (2012). Designing & doing survey research. London, England: Sage Publications Ltd

Booth, W. C., Colomb, G. G., & Williams, J. M. (2008). The craft of research (3rd ed.). Chicago, IL: The University of Chicago Press.

Kemmis, S., & McTaggart, R. (2005). Participatory action research. In N. K. Denzin & Y. S. Lincoln (Eds.), The sage handbook of qualitative research (3rd ed., pp. 559-604). Thousand Oaks, CA: Sage Publications, Inc.

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.

Post-Doc Blogpost: Meaningful Products & Practical Procedures in Program Evaluation

Of the Program Evaluation Standards, standard U6 concerns meaningful processes and products, whereas standard F2 concerns practical procedures. To begin, the definition of U6 as described by Yarbrough et al. (2011) includes, “Evaluations should construct activities, descriptions, and judgments in ways that encourage participants to rediscover, reinterpret, or revise their understandings and behaviors” (p. 51). The authors continue when discussing F2, “Evaluation procedures should be practical and responsive to the way the program operates” (p. 87). While U6 is among the utility standards and F2 among the feasibility standards, I very sincerely believe they share much of the same intent as it pertains to program evaluation. They can be viewed as facets of a singular whole: U6, on meaningful processes and products, speaks to the real need for evaluation audiences to be able not only to interpret the findings shared, but also to make positive change from those results; F2 asks, pointedly, that evaluators respect the program’s existing operations and act in ways that are practical relative to what is already in place. In their combined essence, U6 asks for findings that mean something to audiences, and F2 asks that those findings take existing conditions into account. Yet is this not already a fundamental requirement of any successful change initiative?

A related area of the field I am quite interested in is the work that institutional research (IR) teams perform. Working simultaneously and directly with members of the Office of Institutional Research and Assessment at one university, and the Office of Assessment at another, I have garnered a deep respect for the work they are collectively performing for their respective institutions. As Howard, McLaughlin, and Knight (2012) define the profession, “two of the most widely accepted definitions are Joe Saupe’s (1990) notion of IR as decision support – a set of activities that provide support for institutional planning, policy formation, and decision making – and Cameron Fincher’s (1978) description of IR as organizational intelligence” (p. 22). In both cases the focus is on data, and on the use of data for an institution to know more about itself in the future than it knows in the present, whether in the form of performance metrics, operational efficiency analysis, forward-looking planning exercises, or simply the evaluation of data that has been better culled, processed, cleaned, and presented than in the past. Yet amid each of these steps is the very real need not only to provide a value-added process that is practical, but also to speak to the existing system if the findings are to warrant merit. This is how we return to both U6 and F2, the standards of meaningful processes and products and of practical procedures.
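To make the ‘culled, processed, cleaned, and presented’ cycle a bit more concrete, here is a minimal Python sketch of the kind of step an IR analyst might run to turn a raw enrollment extract into a single actionable retention figure. The file name, column names, and the 85% benchmark are my own illustrative assumptions, not artifacts of any actual IR office or of the sources cited above.

```python
# A minimal, hypothetical sketch of an IR "cull, process, clean, present" step.
# The file name, column names, and 0.85 benchmark are illustrative assumptions.
import pandas as pd

def fall_to_fall_retention(path: str = "enrollment_records.csv") -> pd.DataFrame:
    records = pd.read_csv(path)                      # cull: pull the raw extract
    records = records.drop_duplicates("student_id")  # clean: one row per student
    records = records.dropna(subset=["cohort_year", "returned_next_fall"])

    # process: aggregate a 0/1 "returned" flag into a cohort-level retention rate
    summary = (
        records.groupby("cohort_year")["returned_next_fall"]
        .mean()
        .rename("retention_rate")
        .reset_index()
    )

    # present: flag cohorts falling short of an assumed 85% benchmark
    summary["meets_benchmark"] = summary["retention_rate"] >= 0.85
    return summary

if __name__ == "__main__":
    print(fall_to_fall_retention().to_string(index=False))
```

The point is not the particular metric but the shape of the work: the same few lines of culling and cleaning are what allow the resulting number to speak to the existing system rather than past it.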

Forging a palpable relationship between the evaluation process and relevant stakeholders, including sponsors, evaluators, implementers, evaluation participants, and intended users, is key. This permits greater understanding of the processes employed and products sought, and allows for greater buy-in when results are later shared. Kaufman et al. (2006) present a review of the evaluation plan used to review the outcomes of a family violence initiative for the purpose of promoting positive social change. On meaningful products and practical procedures, Kaufman et al. (2006) remark, “Evaluations are most likely to be utilized if they are theory driven, emphasize stakeholder participation, employ multiple methods and have scientific rigor… In our work, we also place a strong emphasis on building evaluation capacity” (p. 191). This evaluation, which focused first on creating a logic model to articulate a highly defined program concept, was then used to cascade data-driven decisions from the model constructed. With the combined efforts of project management, the program’s staff, and the evaluation team, an evaluation plan was crafted. This plan permitted broad buy-in based on the involvement of many stakeholders. In the end, it also enabled the work of the evaluation team to continue in a less acrimonious environment and reemphasized for the team the importance of working in collaboration with key stakeholders from the beginning so that stakeholders bought into and supported the evaluation process (Kaufman et al., 2006, p. 195). The results of this collaborative, synthesizing process included greater stakeholder participation, a heightened level of rigor in addition to increased capacity, and, most importantly for the program, the use of ‘common measures’ across the program.

Yet increased stakeholder involvement is not the only benefit of ensuring meaningful processes and products as well as practical procedures. This heightened pragmatism in process also lends itself to greater design efficacy. Of a study of 209 PharmD students at the University of Arizona College of Pharmacy (UACOP), Plaza et al. (2007) write, “Curriculum mapping is a consideration of when, how, and what is taught, as well as the assessment measures utilized to explain achievement of expected student learning outcomes” (p. 1). This curriculum mapping exercise was intended to compare the ‘designed curriculum’ with the ‘delivered curriculum’ and the ‘experienced curriculum’. The results of the study show great concordance between student and faculty perception, reinforcing not only a program evaluation design sound enough to detect such concordance, but also an effective program as measured by these graphical outcomes. Speaking equally to the design aspects of pragmatism in program evaluation, Berlowitz and Graco (2010) developed “a system-wide approach to the evaluation of existing programs… This evaluation demonstrates the feasibility of a highly coordinated ‘whole of system’ evaluation. Such an approach may ultimately contribute to the development of evidence-based policy” (p. 148). This study, with its rigorous data collection across existing datasets, was not designed to propose a new means of gathering and aggregating data, but rather a new method for taking advantage of data already largely available, while ensuring a broader yet more actionable series of conclusions strong enough to inform policy decisions.
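As a rough illustration of what ‘concordance’ between the delivered and experienced curriculum can look like in practice, the sketch below compares hypothetical faculty and student coverage ratings for a handful of topics. The topics, scores, and tolerance are invented for the example; the Plaza et al. study used its own instruments and analysis, so this is only a sketch of the comparison, not a reproduction of their method.

```python
# Hypothetical comparison of "delivered" vs. "experienced" curriculum coverage
# ratings on a 1-5 scale. Topics and scores are invented for illustration.
faculty_ratings = {"pharmacokinetics": 5, "ethics": 4, "communication": 3, "informatics": 2}
student_ratings = {"pharmacokinetics": 5, "ethics": 3, "communication": 3, "informatics": 4}

def concordance(a: dict, b: dict, tolerance: int = 1) -> float:
    """Share of shared topics where the two ratings differ by no more than `tolerance`."""
    shared = a.keys() & b.keys()
    agreements = sum(abs(a[topic] - b[topic]) <= tolerance for topic in shared)
    return agreements / len(shared)

print(f"Faculty-student concordance: {concordance(faculty_ratings, student_ratings):.0%}")
```

A high share of topics rated similarly by both groups would suggest the curriculum students experience is largely the one faculty believe they deliver; a low share would point to exactly the kind of gap a mapping exercise is designed to surface.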

Where the above considerations for the utility of meaningful processes and products and the feasibility of practical procedures come together is in their application to a position held in an office of institutional research. Of the role IR plays in ensuring pragmatism in program evaluation, Howard et al. (2012) note, “Driven by the winds of accountability, accreditation, quality assurance, and competition, institutions of higher education throughout the world are making large investments in their analytical and research capacities” (p. 25). These investments remain critical to the sustainability of the investing institutions, and they require that what is discovered in and among these offices then be instituted throughout the very departments and programs evaluated. Institutional research is not a function that exists solely for performance evaluation, nor do IPEDS or accreditation reporting responsibilities overshadow the need for actionable data among the programs under review. Rather, we seek an environment where IR analysts and researchers are permitted to utilize practical processes for the purpose of ensuring greater stakeholder buy-in and effective design.

An environment where practical procedures are used is equally one where replicability, generalizability, and transferability all exist within a more stable ecosystem of program evaluation. Of this need, in a two-year follow-up study of needs and attitudes related to peer evaluation, DiVall et al. (2012) write, “All faculty members reported receiving a balance of positive and constructive feedback; 78% agreed that peer observation and evaluation gave them concrete suggestions for improving their teaching; and 89% felt that the benefits of peer observation and evaluation outweighed the effort of participating” (p. 1). These are not results achieved from processes in which faculty found no direct application of the peer review program; these data were gathered from a program in which respect for the existing process was maintained and the product was seen as having more value than the time and effort, the opportunity cost, required to participate. Finally, using a portfolio evaluation tool that measured student achievement of a nursing program’s goals and objectives, Kear and Bear (2007) state, “faculty reported that although students found writing the comprehensive self-assessment sometimes daunting, in the end, it was a rewarding experience to affirm their personal accomplishments and professional growth” (p. 113). This only further affirms the very real need for evaluators to continue to discover means of collecting, aggregating, and analyzing data that speak to existing processes. It also reinforces the need for program evaluations to result in conclusions, or recommendations, that make the greatest use of existing process, allowing for the sweeping institutionalization seen in the Kear and Bear study. Finally, of this need for practicality in process and product within program evaluation and research as a whole, Booth, Colomb, and Williams (2008) remark, “When you do research, you learn something that others don’t know. So when you report it, you must think of your reader as someone who doesn’t know it but needs to and yourself as someone who will give her reason to want to know it” (p. 18).

Berlowitz, D. J. & Graco, M. (2010). The development of a streamlined, coordinated and sustainable evaluation methodology for a diverse chronic disease management program. Australian Health Review, 34(2), 148-51. Retrieved from http://search.proquest.com/docview/366860672?accountid=14872

Booth, W. C., Colomb, G. G., & Williams, J. M. (2008). The craft of research (3rd ed.). Chicago, IL: The University of Chicago Press.

DiVall, M., Barr, J., Gonyeau, M., Matthews, S. J., Van Amburgh, J., Qualters, D., & Trujillo, J. (2012). Follow-up assessment of a faculty peer observation and evaluation program. American Journal of Pharmaceutical Education, 76(4), 1-61. Retrieved from http://search.proquest.com/docview/1160465084?accountid=14872

Howard, R.D., McLaughlin, G.W., & Knight, W.E. (2012). The handbook of institutional research. San Francisco, CA: John Wiley & Sons, Inc.

Kaufman, J. S., Crusto, C. A., Quan, M., Ross, E., Friedman, S. R., O’Reilly, K., & Call, S. (2006). Utilizing program evaluation as a strategy to promote community change: Evaluation of a comprehensive, community-based, family violence initiative. American Journal of Community Psychology, 38(3-4), 191-200. doi:http://dx.doi.org/10.1007/s10464-006-9086-8

Kear, M. & Bear, M. (2007). Using portfolio evaluation for program outcome assessment. Journal of Nursing Education, 46(3), 109-14. Retrieved from http://search.proquest.com/docview/203971441?accountid=14872

Plaza, C., Draugalis, J. R., Slack, M. K., Skrepnek, G. H., & Sauer, K. A. (2007). Curriculum mapping in program assessment and evaluation. American Journal of Pharmaceutical Education, 71(2), 1-20. Retrieved from http://search.proquest.com/docview/211259301?accountid=14872

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.

The Juxtaposition of Social/Ethical Responsibility across Disciplines

As I approach full speed in the post-doctoral program, I equally approach my first opportunity to publicly share insights derived from this study of assessment, evaluation, and accountability. The American Evaluation Association (AEA) identifies the established social and ethical responsibilities of evaluators. In juxtaposition, the social and ethical responsibilities of institutional research, as an education-based area of interest, are expressed by the Association for Institutional Research (AIR). Yet first, a personal introduction as requested by this assignment: I have chosen institutional research as my professional, education-based area of interest, as research and analysis have been at the heart of much of what I have done for the past decade or more.

For a period easily spanning ten years, I have straddled industry and academe, not only to remain a lifelong learner but to continue leveraging what I take from each course, applying it as readily as possible to my working world in industry and in the classroom, to the benefit of my employers and my students. As mentioned elsewhere on my ‘about’ page, my work includes a multitude of projects focused on distilling a clear view of institutional effectiveness and program performance. Roles have included senior outcomes analyst, management analyst, operations analyst, assessor, and faculty member for organizations in industries ranging from higher education to hardware manufacturing and business intelligence. With each position came a new opportunity to assimilate new methods for assessing data. With each new industry came a new opportunity to learn a new language, adhere to new practices, and synthesize the combined experience that is the sum of their parts. Yet in no instance am I left with the steadfast understanding that I have learned more and therefore have less left to learn. In each instance I instead feel as though I know and have experienced even less of what the world has to offer. Focusing this indefinite thirst on assessment and evaluation specifically, the task becomes pursuing ever-greater growth and ever-greater success across a wide range of applications, industries, and instances, while remaining true to guiding principles that serve those who benefit from any lesson I learn or analysis I perform.

The Program Evaluation Standards include propriety standards addressing responsive and inclusive orientation, formal agreements, human rights and respect, clarity and fairness, transparency and disclosure, conflicts of interest, and fiscal responsibility. At the heart of these, Yarbrough, Shulha, Hopson, and Caruthers (2011) remark, “Ethics encompasses concerns about the rights, responsibilities, and behaviors of evaluators and evaluation stakeholders… All people have innate rights that should be respected and recognized” (p. 106). This compares with a like-minded statement from the AEA itself: “Evaluators have the responsibility to understand and respect differences among participants, such as differences in their culture, religion, gender, disability, age, sexual orientation and ethnicity, and to account for potential implications of these differences when planning, conducting, analyzing, and reporting evaluations” (American Evaluation Association, n.d., para. 40). Finally, in juxtaposition, we have Howard, McLaughlin, and Knight, who in The Handbook of Institutional Research (2012) write, “All employees should be treated fairly, the institutional research office and its function should be regularly evaluated, and all information and reports should be secure, accurate, and properly reported… The craft of institutional research should be upheld by a responsibility to the integrity of the profession” (p. 42). Thus, in the end, while this post had intended to explore a juxtaposition, a word that implies at least some degree of contrast, in actuality it finds nothing of the sort. Rather, we find congruence, and we find agreement.

It is important to uphold standards for the ethical behavior of evaluators, as the profession is one steeped in a hard focus on data and the answers data provide. We as human beings, however, tend to this profession while flawed. We make mistakes, we miscalculate, we deviate from design, and we inadvertently insert bias into our findings. None of this may be done on purpose, and certainly not all transgressions are present in every study. The implication remains, though, that we can make mistakes and are indeed fallible. At the same time, we are of a profession tasked with identifying what is data and what is noise, which programs work and which curricula do not, which survey shows desired outcomes and which employees are underperforming. These are questions that beget our best efforts, our most scientific endeavors, and our most resolute trajectories toward identifying only truths, however scarce, amid the many opportunities to be tempted toward manufacturing alternate, albeit perhaps more beneficial, realities for ourselves as evaluators and for our stakeholders. All participants have rights, all evaluators have rights, and all sponsors have rights. It is our task to serve the best collective interest, using the best methods available, to ensure a properly informed future.

American Evaluation Association. (n.d.). Guiding principles for evaluators. Retrieved September 11, 2013 from http://www.eval.org/p/cm/ld/fid=51

Howard, R.D., McLaughlin, G.W., & Knight, W.E. (2012). The handbook of institutional research. San Francisco, CA: Jossey-Bass.

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.

On Perpetual Organizational Progress

Emergence as a recognized entity secures a tentative place for an organization in a population, but its persistence depends upon the continual replication of its routines and competencies (Aldrich & Ruef, 2006, p. 94). We think of where we work as somewhere fixed: an institution in its truest sense, a building with cubicles, desks with computers, and employees with bosses. Yet what the research has shown is that this is only the case because we collectively make it so each day, and the day we cease to do so is the day our organization equally ceases to persist. From this outlook, though, comes an equally ambitious upside… we then have a choice about which organizational routines and competencies we elect to replicate and utilize. Said differently, we can begin to rethink, regroup, redirect, and retool at any time, as the organization is not in a fixed state. So what’s stopping us? Transformational change involves a radical shift from one state of being to another, which is an extremely painful process… proactive transformation requires an awareness of the consequences the “new” context will have on the existing culture, behaviors, and mindset, if it is to be engaged in willingly (Biscaccianti, Esposito, & Williams, 2011, p. 30).

We as individual members of an organization function as both user and supporter of the organization, continually and paradoxically. We are project managers, financial analysts, account executives, and customer service representatives. We are defined by our role, by our processes, by the systems we use, the skills we have, and the declarative and procedural knowledge we employ. We do not change because we choose not to change, and we choose not to change because we took far too long learning and working and struggling to get where we are with what we know. Is this an accurate look at reality, though? To seek perpetual organizational progress is to seek a framework and mindset of near-daily renewal of our routines and competencies for the sake of our company’s progress, not for change’s sake alone, nor at the expense of individual accomplishment. The organization at its essence is an aggregation of human effort, not of best practices, industry standards, and heralded products and services. Put another way, individuals can be wildly successful and equally accomplished while the organizations they work for are in a constant state of flux and renewal. One can use and support an organization differently each day while being regarded as the expert of one’s craft. Thus, in order to pursue perpetual organizational progress, a new lens through which to view change is necessary.

The essence of the problem-finding and problem-solving approach revolves around the identification of problem characteristics and the extent to which they entail corresponding impediments to the activities of problem finding, framing, and formulating; problem solving; and solution implementation… methodologically, this approach responds to design science’s call to comparatively evaluate alternative governing mechanisms that mitigate impediments, leading to more comprehensive problem formulations, more efficient searching for and creating of valuable solutions, and more successful implementation of solutions (2012, p. 58). This approach to organizational design allows us to ask far broader questions of management, and of every member’s contribution to ongoing organizational success. Success has not been defined herein thus far, and that definition remains deliberately absent: it must remain iterative. We should not seek success in traditional terms, as traditional terms warrant traditional practices, and those practices warrant the knowledge we already have and the processes we already use. Perpetual progress then means a perpetual identification of new problems, new obstacles, new impediments, new solutions, and a new definition of success with each march forward.

Is there a magic recipe all companies should follow for identifying the problems we must then address in perpetuity? With persistence as the goal, the answer remains: not likely. What we can do, however, is codify the process for identifying problems at the level of the individual organization, as the same routines and competencies that brought us to today can then serve as filters for identifying further opportunities for progress. Cognitive heuristics – problem-solving techniques that reduce complex situations to simpler judgmental operations – can become specific to an organizational form, or even an individual organization (Aldrich & Ruef, 2006, p. 120). The very fabric that defines how our organizations are successful now then becomes not what we choose to change, but instead what we use to evaluate what else should change. Success today does not set tomorrow’s bar; it identifies today’s neighboring problem. All else may change to exist on par with that new success.

Today’s performance management systems seek to evaluate how well individual members fare at performing pre-determined routines. Individual performance measurement is accepted as a retrospective task seeking convergent methods of routine persistence and level of competence. We set new goals, yet of the same routines. We establish new targets, yet of only marginally enlarged job descriptions. Skeptical? Ask yourself when you were last given a revised job description based on what you have learned during your year(s) of service and growth. Better still, ask yourself when you directly contributed toward the authorship of such a document. Rational system theorists stress goal specificity and formalization; natural system theorists generally acknowledge the existence of these attributes but argue that other characteristics – characteristics shared with all social groups – are of greater significance; and open systems are [instead] capable of self-maintenance on the basis of throughput of resources from the environment, [and] this throughput is essential to the system’s viability (Scott, 2003). These are social systems that warrant recognition of the work performed, based on the resources provided by the surrounding environment. This undermines the idea that performance management should be based on a fixed target, much as organizational progress is only perpetual when fixed routines and competencies have been abandoned.

To seek resolution, then, of the competing challenges between management’s historical predisposition toward a rational system and its desire to emulate open systems thinking, we seek not a replacement for today’s routines or tomorrow’s stretch goals. We seek instead an entirely different unit of analysis, and a different object of our futurist affection. What we should be promoting instead of leadership alone are communities of actors who get on with things naturally, leadership together with management being an intrinsic part of that (Mintzberg, 2009, p. 9). We seek the ability to move fluidly between the routines that bring us present success, the pursuit of impediments to success elsewhere, and the ability to base our progress on an iterative view of success itself and our progress toward it, thereby managing performance on numerous planes simultaneously. Those planes then include a normative look at performance via the evaluations we all know and review periodically, the plane of success impediments identified, the working definition of success holistically, and the actions and strategies necessary to balance them all. And is there a process for identifying these actions and strategies? Indeed there is. Positive deviance (PD) is founded on the premise that at least one person in a community, working with the same resources as everyone else, has already licked the problem that confounds others… from the PD perspective, individual difference is regarded as a community resource… community engagement is essential to discovering noteworthy variants in their midst and adapting their practices and strategies (Pascale, Sternin, & Sternin, 2010, p. 3). We can embrace the bestseller lists without reservation and engage in frequency imitation, trait imitation, outcome imitation, or a combination of the three. Conversely, we can seek out these deviants, not for their solutions, but for their methodology of removing impediments in the name of a new successful day, every day.

– Justin

The Work That Gets Measured…

The organization is an entity that can be both viewed and assessed through a multitude of lenses; one of these is the organizational learning approach. The organizational learning approach focuses on how individuals, groups, and organizations notice and interpret information and use it to alter their fit with their environments (Aldrich & Ruef, 2009, p. 47). The varying schools of thought around viewing organizations include lenses for viewing each as an ecosystem, as a combination of symbolic interactions, and as a series of transaction costs. This view looks specifically at organizations as networks of persons learning and growing according to their environment. Aldrich and Ruef (2009) continue, “The adaptive learning perspective, pioneered by Cyert and March (1963), treats organizations as goal-oriented activity systems that learn from experience by repeating apparently successful behaviors and discarding unsuccessful ones… From the adaptive learning perspective, variations are generated when performance fails to meet targeted aspiration levels, triggering problem-driven search routines” (p. 47).

The above suggests that we seek ‘a better mousetrap’ not only when we feel the current one has gone stale, but also when targets are not met. This should come as no surprise, yet what does require reinforcement is determining when targets are missed in the first place. How do we determine we have missed the mark if the mark is not plainly labeled before we begin? In short, we do not know in advance, and therefore cannot always know when the mark has been unintentionally missed. Adaptation then requires not only the ability to alter course according to outcome, but is equally predicated on knowing the outcome sought to begin with. This appeal to clarity reverberates through both management process and strategy. Once ready, we have just one consideration left before action: recognizing how much we can achieve based on what we have already done to date.

Aldrich and Ruef (2009) complete the thought: “Prior organizational learning creates knowledge structures and sets of conceptual categories that filter subsequent information and thus influence further learning. Cohen and Levinthal (1990) borrowed the term absorptive capacity from industrial economics to refer to the level of stored knowledge and experience that make organizations better able to learn from further experience” (p. 48). Absorptive capacity is then this final consideration. We take into account where we want to go as an organization and what benchmarks will alert us to whether progress is being made, we weigh what we have accomplished and learned in the past, and we arrive at a better-informed determination of just how adaptable we can collectively be.
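The adaptive learning loop described above, in which performance below an aspiration level triggers problem-driven search and accumulated experience governs how much is absorbed from that search, can be summarized in a toy simulation. Everything below (the numbers, the update rules, the crude treatment of absorptive capacity) is my own illustrative assumption, not anything drawn from Aldrich and Ruef or from Cyert and March directly.

```python
# Toy sketch of the adaptive learning loop: performance below aspiration triggers
# problem-driven search; absorptive capacity (here a crude function of stored
# knowledge) scales what is gained from it. All numbers are illustrative assumptions.
import random

random.seed(42)
aspiration, performance, stored_knowledge = 0.70, 0.60, 0.10

for period in range(1, 11):
    if performance < aspiration:
        # problem-driven search: returns scale with absorptive capacity
        absorptive_capacity = min(1.0, 0.2 + stored_knowledge)
        discovery = random.uniform(0.0, 0.1) * absorptive_capacity
        performance += discovery
        stored_knowledge += discovery      # search itself adds to stored experience
    else:
        # apparently successful behavior is repeated, and the bar drifts upward
        aspiration += 0.5 * (performance - aspiration)
    print(f"Period {period}: performance={performance:.2f}, aspiration={aspiration:.2f}")
```

Even in this caricature, the two levers the passage names are visible: the benchmark (aspiration) determines when search happens at all, and prior learning determines how much each round of search is worth.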

– Justin

Aldrich, H.E. & Ruef, M. (2009). Organizations evolving. Los Angeles, CA: Sage Publications.

My Philosophy of Teaching

A Call to Action

Exercising the courage to become more purpose-centered, other-focused, internally directed, and externally open results in increased hope and unleashes a variety of other positive emotions (Quinn, 2004). I am not a teacher solely because of anything tangible, nor solely for those inspired moments in each student’s day. Rather, I feel that teaching is both a privilege and a responsibility. It is a privilege in that I have the opportunity to touch lives, bring new hope to possibly otherwise under-informed futures, and, hopefully and occasionally, inspire someone to be great. Yet I additionally feel teaching is a responsibility each generation has to its successors. As society can be regarded as a construct of social networks and a collection of living systems whose role is long-term sustainability, teachers hold the responsibility of ushering in an informed era for those who follow, such that they have the opportunity to continue the successes of the past and create their own in the process.

Learning as SKILLS

The Self-Knowledge Inventory of Lifelong Learning Strategies (SKILLS) is based upon five aspects of learning essential to the learning process: the constructs of metacognition, metamotivation, memory, resource management, and critical thinking (Conti & Fellenz, 1991). Taking this construct, developed by the Center for Adult Learning Research at Montana State University, into account creates a paradigm with which to gauge not only the structure and success of a given lesson plan, but also the success of each student in terms of his or her own personal level of learning. As metacognition regards the ability of learners to reflect upon what has been learned and to make their own learning process more efficient over time, it is my responsibility to ensure each learner has the tools to do so. As metamotivation regards the learner’s control over his or her own motivational strategies, it is both my responsibility and my privilege to ensure those options exist in the learning environment. As memory and resource management are stand-alone concepts, I operate with an obligation to ensure the methodologies I employ allow for greater capture and use of memory while allowing for greater resource utilization and management as well. Finally, as critical thinking is a concept not uncommon in the academic environment, I will set aside defining the term and instead note that critical thinking is what the majority of my teaching strategy relies upon. As critical thinking is what I feel separates the successful from those not experiencing similar success, I feel critical thinking and success are mutually beneficial and directly correlated. Yet, to ensure the greatest level of critical thinking in those I guide, I return to Quinn’s words regarding being purpose-centered and externally open, and use these emphases to ensure each learner operates at his or her highest critical thinking potential.

Sculpting Futures

The workplace, the professions, the leaders and foot soldiers of civic society must all do their part – and that obligation cannot be spurned or postponed or fobbed off on institutions that are incapable of picking up the responsibility (Gardner, 2006). Institutions of higher learning existed long before any referenced work concerning concepts such as adult learning strategies. Yet very little separates the adult from the adult learner, and again from those who instruct, such as myself. As that responsibility exists to ensure the sustainable future of our society, I feel that taking an analytical approach to learning, as with the SKILLS construct, helps ensure both that privilege and that responsibility are well served. Finally, as a litmus test for whether I have succeeded as a teacher, I look to Wind and Crook’s definition of advancement. Science sometimes advances not through evolutionary progress in a given framework but through sudden leaps to a new model for viewing the world (Wind & Crook, 2005).

– Justin

Conti, G.J. & Fellenz, R.A. (1991). Assessing adult learning strategies. Bozeman, MT: Montana State University.

Gardner, H. (2006). Five minds for the future. Boston, MA: Harvard Business School Press.

Quinn, R.E. (2004). Building the bridge as you walk on it: A guide for leading change. San Francisco, CA: John Wiley & Sons, Inc.

Wind, Y.J. & Crook, C. (2005). The power of impossible thinking: Transform the business of your life and the life of your business. Upper Saddle River, NJ: Wharton School Publishing.

Do You Lead by Listening?

As Flight 1549 plummeted, the flight attendants chanted in unison to passengers, “Brace, brace, heads down, stay down,” preventing many injuries during the rough water landing; a testament to the leadership aboard the ‘Miracle on the Hudson’, and to the power of repetitive, concrete instruction for provoking action (Sutton, 2010). Examples such as this one are those rare gems that can take an entire tome on leadership and give it a single, palpable schema the reader can walk away with and immediately apply. Yet is leadership entirely about leading? Is being a follower both the antithesis and the only alternative?

Leadership is leading, yes, as the term implies, but leading is also listening. In a summer article for the Harvard Business Review, Martin (2007) wrote, “Brilliant leaders excel at integrative thinking. They can hold two opposing ideas in their minds at once. Then, rather than settling for choice A or B, they forge an innovative “third way” that contains elements of both but improves on each… Embrace the complexity of conflicting options. And emulate great leaders’ decision-making approach – looking beyond obvious considerations” (p. 73). The funny thing is, though, some leaders may hear this and think they must come up with all of those great options and ideas personally. ‘They put me in charge because they expect me to have all the answers,’ you may say. ‘My people can’t possibly think I am weak and in need of their help to decide what to do.’ I challenge this thinking: your abilities as an individual contributor may have been what brought you praise, possibly even initial consideration for the position you now hold, but they are not why you were selected.

You were selected because you see the opportunities a better-run team can collectively contribute to organizational success, more colloquially known as ‘seeing both the forest and the trees’. But where can we turn for a repetitive, concrete example of how to get the best from our people by learning to listen better, rather than relying on our own ideas and personal experiences alone? How about a solution dating back to 1956: Bloom’s Taxonomy.

Educators have been using it for decades, and anyone who has frequented grad school has been exposed to it at least minimally in conversation with fellow scholars. The crux of Bloom’s work gives us categorical direction with which to mine meaningful data from our direct reports, simply by channeling the intentions of our listening activities through the following levels of thinking, as described by Anderson and Krathwohl (2001):

Remembering: Retrieving, recognizing, and recalling relevant knowledge from long-term memory.

Understanding: Constructing meaning from oral, written, and graphic messages through interpreting, exemplifying, classifying, summarizing, inferring, comparing, and explaining.

Applying: Carrying out or using a procedure through executing, or implementing.

Analyzing: Breaking material into constituent parts, determining how the parts relate to one another and to an overall structure or purpose through differentiating, organizing, and attributing.

Evaluating: Making judgments based on criteria and standards through checking and critiquing.

Creating: Putting elements together to form a coherent or functional whole; reorganizing elements into a new pattern or structure through generating, planning, or producing.

So try this yourself with your team: apply at least two of these levels in your next meeting, and see if the conversation changes shape from what you’re used to hearing (or saying).

– Justin

Forehand, M. (2005). Bloom’s taxonomy: Original and revised. In M. Orey (Ed.), Emerging perspectives on learning, teaching, and technology. Retrieved from http://projects.coe.uga.edu/epltt/

Martin, R. (2007). How successful leaders think. Harvard Business Review, 85(6), 60-67.

Sutton, R. I. (2010). Good boss, bad boss: How to be the best – and learn from the worst. New York, NY: Business Plus.

Help Them Grow Before They Work For You: Community-Based Curriculum

Generally, futurologists do not attempt to predict what is going to happen in 10-15 years, but rather attempt to decide on what they want to happen so that they can then make more intelligent choices (McNeil, 2009). In order to prepare tomorrow’s business leaders for the obstacles that lie before them while acting as global citizens, we must rethink today’s curriculum at our institutions of higher education. Recent years have seen a proliferation of career schools oriented toward training practitioners to meet the demands of society’s impoverished trades. These trades, ranging from business to healthcare to the legal profession, give rise to a series of curricula founded upon systemic views of curriculum development. The aforementioned futurists belong instead to the school of thought concerning the social reconstructionist curriculum.

Considering only those participants who are identified as members of the organization, who is recruited and how long they stay have a wide range of implications for the structure and performance of the organization (Scott, 2003). The organization of tomorrow is more likely to take advantage of advanced technologies, will contribute to the abundance of information already seen by the current generation, and will be composed entirely of knowledge workers. The implication of these conditions is that curriculum must be developed around the intelligence, creativity, and energy of the upcoming generation. Yet, as current curriculum is developed via traditional models intrinsic to our larger institutions, there lies a great opportunity for those institutions to embrace a school of thought concerned with letting learners focus on their own definition of self, thereby providing the catalyst for greater individual growth.

Under the leadership of the school, community members meet to acquire the mental outlooks, knowledge, and skills for establishing new industry central to the development and self-sufficiency of the community (McNeil, 2009). As a solution to the imperative of developing curriculum to meet the needs of tomorrow, there is the opportunity to begin engaging the community. This engagement is much less about solving the problems faced today and more about contributing to social reconstruction via a democratized model of curriculum development based on future need. In initiating a dialogue among relevant stakeholders, there lies the opportunity to forge connections between the impressions of need felt by teacher, administrator, community member, and student alike.

It is important to emphasize that transfer and long-term retention are enhanced by learning conditions that introduce difficulties in learning initially and even impair performance (McNeil, 2009). A student can be better served by a curriculum that does not allow for immediate, small successes, but instead introduces such challenge that refuge must be sought in disparate pieces of information. This gives rise to the synthesis of otherwise meaningless data in a way that forges connections throughout the curriculum and instills a sense of student independence with regard to the learning. This, in turn, creates a heightened sense of self-efficacy and prepares the student for the future challenges that await in an increasingly global society, where an employee is as much a global citizen as a corporate one.

To conclude, democratizing the curriculum development effort, such that institutions, their constituents, community members, and students alike have a profound effect on the way essential skills are taught moving forward, presents an opportunity to go beyond lesson plans that harbor activities built to the textbooks of today. By learning how to learn through long-term retention activities, and by incorporating this community-based outcome of social reconstructionist curriculum, the educational institutions of today may be better prepared to face the challenges of tomorrow.

– Justin

McNeil, J.D. (2009). Contemporary curriculum: In thought and action. Hoboken, NJ: John Wiley & Sons, Inc.

Scott, W.R. (2003). Organizations: rational, natural, and open systems. Upper Saddle River, NJ: Prentice Hall.