Act Like an Analyst, Think Like a Strategist

Business analytics serves a company best when it is used to shape and support every facet of the company’s strategy, along with all of the resources and activities tied to that strategy.

Business analytics has not only gained traction as a function within business in recent years, it has also become a beacon in the current literature for those hungry for more rigor around identifying how to win. Many executives now place analytics among their top strategic priorities. So why, then, do I still encounter organizations that relegate their analytics teams to focus solely on operations or sales support? More importantly, why is this still the norm and not the exception?

Experience has told me this has everything to do with the fact that organizations see analytics as crucial, yet lack a shared awareness of how analytics can be leveraged within the business. They are not unlike Six Sigma advocates who use the process solely for manufacturing, or balanced scorecard proponents who use the process simply to assess, rather than to coordinate and integrate key activities. You, as an analytics leader, are the individual best positioned to help the business understand the totality of the value analytics can deliver. Yet in order to do this, you will have to act like an analyst, and think like a strategist…

To explore how best to identify an appropriate synthesis of analytics and strategy, sign up for a free copy of The Big Analytics: Data Leaders’ Collaborative Book Project made possible by AnalyticsWeek:

http://thebiganalytics.com/

And for the full press release covering this project:

http://bit.ly/1ZWd0CW

About The Big Analytics Book

A living book project that contains thought leadership contributions from industry leaders, influencers and practitioners. The book will be re-published annually with coverage from current leaders and influencers. The Big Analytics Book is meant for data science professionals, enthusiasts, leaders and influencers.

About AnalyticsWeek

AnalyticsWeek is a global community of 150+ businesses and 30,000+ data science professionals. In support of its mission of bringing “Analytics to the 99%,” AnalyticsWeek works with its community members and business partners to roll out initiatives to bridge the talent and knowledge gap.


Opinion Poll: Anthem Takeover Suffering from Deal Heat?

*Note: The following was originally posted as an article on LinkedIn dated June 20, 2015.

Earlier today the Wall Street Journal reported that Anthem has submitted yet another, even sweeter deal to acquire fellow industry behemoth Cigna. The deal is now valued at $184 a share. This is the fourth Anthem attempt in just weeks, moves viewed as prescient at a time when M&A chatter abounds and in a sector ripe for sea change, with many heavy hitters stepping up to the plate.

The first question which comes to mind is whether Anthem remains prudent in a tumultuous environment, or if this mega merger is beginning to suffer from deal heat. First, to explore intent, we must understand the game. Eccles, Lanes, and Wilson, writing for the Harvard Business Review, remind us that, “In today’s market, the purchase price of an acquisition will nearly always be higher than the intrinsic value of the target company. An acquirer needs to be sure that there are enough cost savings and revenue generators—synergy value—to justify the premium so that the target company’s shareholders don’t get all the value the deal creates.”

So then why consider deal heat? Well, according to Jack Welch’s description of deal heat in his seminal text Winning, “In such situations, once an acquisition candidate is identified, the top people at the acquirer and their salivating investment bankers join together in a frenzy of panic, overreaching, and paranoia, which intensifies with every additional would-be acquirer on the scene.” He goes on to list seven pitfalls associated with mergers, including a warning about the sixth pitfall, paying too much, described by Welch as “Not 5 or 10 percent too much, but so much that the premium can never be recouped in the integration.”

So… at $184 a share, when Cigna is listed at $156.40 at the time of this writing, I ask you: is this still prudent, or is this deal starting to run a temperature? Please share your opinions and comments below, and be sure to share this article with others to give them the chance to weigh in.
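For readers who want to put a number on that question, here is a quick back-of-the-envelope sketch. The $184 offer and $156.40 quote are the figures cited above; the synergy figure is purely hypothetical, included only to illustrate the logic of the Eccles, Lanes, and Wilson test.

```python
# Back-of-the-envelope look at the premium in Anthem's reported offer.
offer_per_share = 184.00    # reported offer
market_price = 156.40       # Cigna quote at the time of writing

premium_per_share = offer_per_share - market_price
premium_pct = (offer_per_share / market_price - 1) * 100
print(f"Premium: ${premium_per_share:.2f} per share ({premium_pct:.1f}%)")  # ~17.6%

# Per Eccles, Lanes, and Wilson, the premium is justified only if expected
# synergy value at least covers it. The figure below is hypothetical.
synergy_value_per_share = 20.00
print("Premium covered by synergies?", synergy_value_per_share >= premium_per_share)
```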

Dr. Justin Barclay is an operations research scientist focused on supporting strategy through applied research and analytics. He is a senior analyst specializing in research and data modeling for the well-being company Healthways, and serves as an assistant professor of strategy for the Jack Welch Management Institute.

Speak Less, Say More, To Keep Your Strategy On-Track


This did not begin well. For the past two months I have been teaching a group of senior executives and high-level technical professionals the finer points of communicating strategy to their organizations. Day 1 was a travesty: I was all but drawn and quartered by perfunctory comments making clear how confusing they found my assumption that having a strategy did not necessarily equal having the ability to communicate that strategy.

To some extent they were right, yet to a larger extent we all still had quite a lot to learn. While I’ve been teaching college students for the better part of the last 12 years, I still went home that first night of this class just weeks ago entirely deflated. While confident in my ability to both recall and relate the material, I had a larger lesson to learn about the inauthentic way in which I arrived that first night. This was not a class of first-year freshmen looking for strong leadership. These are seasoned leaders themselves, in their last class prior to graduating with a Master’s degree in, of all things, leadership.

So what did I do differently to turn things around? I stopped trying to talk my way out of it. Instead, I just recognized my role as they did: a bridge between theory and practice, and nothing more. I created the boundaries for their communication and learning and set them free within those boundaries. My communication with them has also since changed. I toned down the sage on the stage. I listened more intently. I adjusted both style and content on the fly. I adjusted my approach to leverage their experience and my own, all while tying in the content when I could through substantive yet bite-sized key takeaways to keep it memorable. In the process I realized that as I was teaching them to become more effective communicators, so that they might communicate their strategies more effectively, I too was learning to be more effective with my own communication. With this in mind, I want to take a minute to share a dozen of my key takeaways from this class on communicating strategy, which I think you may find helpful across communication applications of all types.

  1. Effective Leaders Lead Strategy and Tactics
  2. Lead with Logic & Emotion, Not Logic or Emotion
  3. Everyone Can Be a Change Agent
  4. Effective Alignment Requires A Common Message
  5. Keep the Strategy Message Bite-Sized & Repeatable
  6. Reach Them Through Intrinsic Motivations
  7. Identify Motivations Through Open Discussion
  8. Connect Strategy to a Destination
  9. Measure the Business to Measure the Communication
  10. Always Reward Those Supporting the Strategy
  11. Expect of Yourself What You Expect of Others
  12. Your People Are Your Primary Communication Vehicle

You may entirely disagree with some of what is listed, and that is certainly for you to decide. There may be some items (or many) which you find context-specific and of no use to you. Yet I still believe that knowing what content matters and what does not is one of our first steps toward great communication as a leader, so thank you for reading.

Design Options When Conducting Appreciative Inquiry

According to BetterEvaluation (n.d.) there are currently 17 possible approaches to program evaluation. These include such approaches as case study, contribution analysis, horizontal evaluation, positive deviance, and the approach on which this analysis will focus, appreciative inquiry. As remarked by BetterEvaluation (n.d.), “Appreciative Inquiry is about the coevolutionary search for the best in people, their organizations, and the relevant world around them. In its broadest focus, it involves systematic discovery of what gives ‘life’ to a living system when it is most alive, most effective, and most constructively capable in economic, ecological, and human terms” (para. 2). Yet the process for such an approach can be difficult to hold universal. Appreciative inquiry is by definition neither specifically quantitative nor qualitative by requirement; rather, it seeks to utilize the methods necessary to collect whatever data can be found meaningful among that which is available, and the design which proves most fruitful in the pursuit of positive deviants within an organization’s ecosystem. With the understanding that this approach is not suitable to every situation, Pascale, Sternin, and Sternin (2010) specify, “the process excels over most alternatives when addressing problems that, to repeat, (1) are enmeshed in a complex social system, (2) require social and behavioral change, and (3) entail solutions that are rife with unforeseeable or unintended consequences” (p. 10).

Example 1 – Organizational Culture in Higher Education

Knowing appreciative inquiry holds a special value among program evaluators, yet is an approach which can introduce a host of considerations for the processes, procedures, measures, rationale, and theoretical bases of studies, three instances of appreciative inquiry are introduced. First, Niemann (2010), who studied organizational culture in higher education, noted when discussing the research question explored, “it is necessary to know how to create that sense of belonging, what the vision and mission are, what the people value and expect from their leaders and colleagues, what they identify with, and what will make them move collectively towards taking united ownership of the future of their institution or at least part of their institution” (p. 1004). To explore this transformation, the design included a theoretical basis founded in Geertz’s semiotic approach to culture, as well as Thompson and Luthans’ psycho-social interpretive framework (Niemann, 2010, p. 1005). The population consisted of 40 full-time faculty members of the Faculty of Education at the University of the Free State, who were surveyed using a number of open-ended questions seeking responses which could indicate ‘life-giving’ moments within the university’s ecosystem, whereby best practice could flourish and the best version of the university could be furthered. This method was chosen to permit such exploration as well as the opportunity for further probing if necessary. In all, 27 narratives were collected from the 40 surveyed. Questions asked included “Tell the story of one of your best experiences when you felt most involved in your work environment” and “What do you value most in terms of the faculty’s values, norms, philosophy, mission and vision” (Niemann, 2010, p. 1010). The factors affecting this design included organizational structure, access to participants, the theoretical basis of the questions themselves, and the research question driven by sponsorship from the Ministerial Committee. An alternative could perhaps have been to exchange open-ended surveying and transcription for closed-ended, multiple-choice survey items. Yet in sum the strength of this study rested on a combination of the population’s shared organizational ecosystem, the totality of the questions asked, the open forum in which all were permitted to respond, and the direction taken by the inquiry leading to the pursuit of a best version of the university moving forward.

Example 2 – Online Courses and Knowledge Assimilation

A second study to consider involved an exploration of whether graphic enhancements and navigation could increase learning and reduce cognitive load, making it easier for at-risk, lower-socioeconomic, and ethnically self-identified students to have a positive experience in online courses and increasing the likelihood of their success in those courses (Cook, 2009, p. 303). Based on the study of semiotics, or the study of ‘patterned human communication behavior’, Cook sought to determine whether enhancements to courses could reduce barriers to interaction for learners. On the design of the study, Cook (2009) states, “This study used an exploratory survey research model and several qualitative methodologies, appreciative inquiry and development design, to examine whether embedded semiotics and carefully designed metaphors helped students in the online courses to feel more comfortable in assimilating new knowledge online, reinforce their learning, and increase the potential for their course completion” (p. 304). Appreciative inquiry was chosen as the basis of the design for its positive approach, and the theory underpinning the 23 open-ended interviews conducted with students to identify positive deviants was established by research borne of the work of Richey and Klein (2007) on a developmental research design founded on data collected via practice. The practice of open-ended interviews, seeking answers to positive questions for the purpose of identifying best practice, is thus founded in extant theory. Factors affecting this design notably include the desire to gather actionable data from those experiencing the phenomenon first-hand, while emphasizing what is working amid that phenomenon. And while closed-ended surveying of experience could have been performed, this would have needed to be predicated on existing theory regarding what factors most impact the navigation of online courses and facilitate the reduction of cognitive load. Until those themes emerge and positive deviants are discovered, open-ended questioning permitted the most actionable data regarding what can be built upon to optimize online courses. Conclusions in this instance thus included, as remarked by Cook (2009), “The findings suggest that the students’ need to be heard was an important factor, and more relevant to the students than had been discerned in the generic student assessments conducted by the university after each course” (p. 305).

Example 3 – Strategic Planning at the ISPI

Finally, we have Van Tiem and Rosenzweig (2008) who, with the cooperation of the Board of Directors for the International Society for Performance Improvement (ISPI), “undertook a study to uncover the ‘best of ISPI’ to enhance their strategic planning” (p. 5). The methodology employed was to ask a series of positive questions designed to draw out what was indeed working in the society, in order to then perpetuate these ‘wins’ across the remainder of ISPI practices. The questions were founded on established appreciative inquiry methodology, and the study spanned a very short window from a June 2007 board meeting to an established cutoff of August 15th of the same year. As remarked by Van Tiem and Rosenzweig (2008), “the response time was short, but the entire timeframe for the project was very limited” (p. 6). In order to draw a series of conclusions based upon reliable answers from the ISPI population, the researchers first divided society membership into four equal parts. Among those parts systematic random sampling was employed, and each group received one of the four positive questions. Regarding analysis, Rosenzweig and Van Tiem, with advice from Thomas (their research advisor), analyzed the data and the member responses, determined category descriptors, and collaborated on the individual question-findings pages, summary pages, conclusions, and recommendations (Van Tiem & Rosenzweig, 2008, p. 6). The process used was thus a single cluster systematic random sampling of ISPI members, in which each sampled member received one of four survey questions regarding best practice within the ISPI. Responses were then coded and themed by the researchers with the guidance of their research advisor. Finally, the theoretical basis of the study was founded upon prior appreciative inquiry research spanning the work of Cooperrider, Whitney, and Stavros (2003) as well as Watkins and Mohr (2001). Factors influencing this design included the data available combined with a lack of available time, as well as the means by which the entirety of society membership could be surveyed regarding the same topic. An alternative approach in this instance would have been to ask all members the same four questions, thus removing the clustering from the sampling strategy. This would permit the ability to detect trends across groups, rather than simply categorize data among groups.
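To make the sampling scheme concrete, here is a minimal sketch of how a four-part division with systematic random sampling and one question per part might be implemented. The member list, sampling interval, and question wording are all hypothetical; this illustrates the design described above, not the authors’ actual procedure.

```python
import random

def design_sample(members, questions, interval=10, seed=42):
    """Divide members into four equal parts, systematically sample within
    each part, and assign each part one of the four positive questions."""
    random.seed(seed)
    quarter = len(members) // 4
    parts = [members[i * quarter:(i + 1) * quarter] for i in range(4)]
    assignments = {}
    for part, question in zip(parts, questions):
        start = random.randrange(interval)   # random start within the first interval
        sampled = part[start::interval]      # then every k-th member thereafter
        for member in sampled:
            assignments[member] = question
    return assignments

# Hypothetical usage
members = [f"member_{i:04d}" for i in range(4000)]
questions = ["Q1: best experience...", "Q2: ...", "Q3: ...", "Q4: ..."]
sample = design_sample(members, questions)
print(len(sample), "members sampled across four question groups")
```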

These studies, while only a small sample of those available regarding appreciative inquiry, reflect an opportunity to further consider the possibilities of appreciative inquiry within program evaluation. Said of sound designs and analyses, Yarbrough et al. (2011) note, “Every evaluation requires an overall design that is responsive to features of the program and program components, context factors, and the purposes of the evaluation” (p. 201). I believe appreciative inquiry can provide this responsiveness if designed well, and I am excited by the prospect of exploring this very possibility further.

BetterEvaluation. (n.d.). Appreciative inquiry. Retrieved September 29, 2013 from http://betterevaluation.org/plan/approach/appreciative_inquiry.

Cook, R. (2009). Lessons learned from parietal art: Transformative pansemiotics for elearning. Proceedings of the IADIS International Conference on Cognition & Exploratory Learning In Digital Age, 303-306.

Niemann, R. R. (2010). Transforming an institutional culture: An appreciative inquiry. Catalyst, 39(3), 1003-1022.

Pascale, R., Sternin, J., & Sternin, M. (2010). The power of positive deviance: How unlikely innovators solve the world’s toughest problems. Boston, MA: Harvard Business Press.

Van Tiem, D., & Rosenzweig, J. (2008). How are we doing? “Best of ISPI” Appreciative inquiry member survey. Performance Improvement, 47(7), 5-11. doi:10.1002/pfi.20011

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.

Research in Business is Everyone’s Business


I am a firm believer that having a greater number of college degrees does not necessarily mean you’re smarter than those with fewer. I am unapologetic in my stance, as I believe the role of the university is not to increase your IQ (arguably a number with little flux). The role of the university is instead to train you, largely in a particular discipline or process or both. Yes, some programs require a greater degree of raw intelligence, and the purpose of this post is not to draw those lines. The purpose instead is to understand how we can walk away from the misconception that only those with a research background can perform business research. What connects these two dots? In short, the conclusion that just because someone has a PhD, it does not mean they know more about your business than you do. In fact, the opposite is usually true. If they are trained in the process, and you are intimate with your business, I would like to make a suggestion. When seeking a greater understanding of your business’s process, program, or product performance, team up instead to form a symbiotic relationship between the business and a researcher, so you both can accomplish more and do the research together.

The Why – The Interpretive Approach

Among the approaches to organization studies which exist is the interpretive approach. As described by Aldrich and Ruef (2009):

The interpretive approach focuses on the meaning social actions have for participants at the micro level of analysis. It emphasizes the socially constructed nature of organizational reality and the processes by which participants negotiate the meanings of their actions, rather than taking them as a given. Unlike institutional theorists, interpretive theorists posit a world in which actors build meaning with locally assembled materials through their interaction with socially autonomous others. (p. 43)

If this is true, then a lone researcher cannot simply be transplanted from one organization to the next, all the while delivering revenue-trajectory-altering research in a vacuum. The research must be built on great questions, those questions may well come from the business, and the very meaning of the business and the data it generates is embedded within the interactions and the actors in the business itself.

The What – A Symbiotic Relationship

A relationship where you – representing the business – provide the context, maybe even help gather some of the data, and are there to take part in the interpretation once the researcher has completed a substantive portion of his or her analysis. You’re a team; the researcher is not a gun for hire. Which also means, if you’re a team, you’re a researcher too. This approach is important for many reasons, among them your store of tacit knowledge. As we are reminded by Colquitt, Lepine, and Wesson (2013), “Tacit knowledge [is] what employees can typically learn only through experience. It’s not easily communicated but could very well be the most important aspect of what we learn in organizations. In fact, it’s been argued that up to 90 percent of the knowledge contained in organizations occurs in tacit form” (p. 239). That is a vast amount of available information the researcher simply will not have if you do not team up and start working together.

The How – A Cue from Empowerment Evaluation

We can draw a number of conclusions on how best to form this reciprocal relationship between business and researcher as one team, and many come from the literature on empowerment evaluation. As put by Fetterman and Wandersman (2005):

If the group does not adopt an inclusive and capacity-building orientation with some form of democratic participation, then it is not an empowerment evaluation. However, if the community takes charge of the goals of the evaluation, is emotionally and intellectually linked to the effort, but is not actively engaged in the various data collection and analysis steps, then it probably is either at the early developmental stages of empowerment evaluation or it represents a minimal level of commitment. (p. 9)

There is a final, critical subtext to all of the above. In essence, there must be a consistent flow of ideas between the researcher and the business. Research in business is everyone’s business, yet it thrives only in environments where the researcher can share his or her craft, and the business, better informed, can grant the researcher access to the knowledge only it possesses. For a final thought on the merits of this proposed team I defer to the literature on constructing grounded theory. Therein Charmaz (2014) reminds us that, “We need to think about the direction we aim to travel and the kinds of data our tools enable us to gather… Attending to how you gather data will ease your journey and bring you to your destination with a stronger product” (p. 22).

About the Author:

Senior decision support analyst for Healthways, and current adjunct faculty member for Allied American University, Grand Canyon University, South University, and Walden University, Dr. Barclay is a multi-method researcher, institutional assessor, and program evaluator. His work seeks to identify those insights from among enterprise data which are critical to sustaining an organization’s ability to compete. That work spans the higher education, government, nonprofit, and corporate sectors. His current research is in the areas of employee engagement, faculty engagement, factors affecting self-efficacy, and teaching in higher education with a focus on online instruction.

What Great Research and Life Have in Common


In short, it comes down to questions. I will likely draw critics for this one, but I take a very reductionist view when it comes to research as an idea. Designing research well is hard, and performing the appropriate analysis to support your research question is usually even harder. Achieving publication of your completed research is harder still. Yet where great research is not hard is in recognizing that it, just as with life, is about asking the right questions. In life we are fraught with such questions as whether we are in the right job, whether we are raising our kids well, and whether we are saving enough for retirement. All legitimate of course, and what continues to drive the market for self-improvement and personal success books (an avid fan myself, I must admit) is the continued lesson that both framing and lens selection are among the keys to answering them. These texts, therefore and for a nominal price, offer methods for framing differently, and offer a lens which differs from the one currently employed whenever we seek to do better.

Success in research, just as with success in life, begins with asking the right questions. Since there is not a What Should I Do With My Life volume for research, here are a few questions to consider as you embark on your next research project:

What Keeps Me Up at Night? Palmer and Zajonc (2010) in their text The Heart of Higher Education: A Call to Renewal quote Whitehead, who states, “We must be aware of what I will call ‘inert ideas’ – that is to say, ideas that are merely received into the mind without being utilized, or tested, or thrown into fresh combinations… Education with inert ideas is not only useless; it is, above all things, harmful” (p. 58). Research is a process undertaken by few, yet needed by many. Research pushes our society further, answers important questions, and gives rise to collectively educating the curious. Yet that process is wasted when spent on questions of low utility, or on those meant solely to serve an end such as publication in itself. A dissertation which simply sits on a shelf, an article written only to be quoted by its author, research performed in an absence of passion: these indeed generate inert ideas.

What Can I Talk About, Endlessly? Great research takes time, massive amounts of forethought, a healthy dose of metacognition, and elbow grease. We must dig into the existing literature to such an extent that not only do we understand the ongoing theoretical conversation to date, but we also feel comfortable contributing to its furtherance. Where this becomes a problem is when we recognize there is not one correct and finite way to go about it. As Dane (2011) describes in Evaluating Research, “For any particular theory, the number of ways in which a concept may be operationalized is limited only by the imagination of the researcher” (p. 22). This means not only can the very same concept be represented in myriad ways utilizing myriad methods, it also means that our curiosity in a topic cannot be short-lived, or our exploration of it will be poorly served. Where inert ideas ask that we identify a source of passion, Dane reminds us we must also be willing to show great amounts of stamina in order to produce equally great research.

What Do Other People Need? As students of research, at both the Master’s and Doctoral level, we are told when first striving to identify a research topic that we must identify something meaningful to us and explore it. This, I feel, is only one third of a very critical equation. As mentioned above, another facet is to identify the existing conversation in the literature around a topic, and pinpoint where furtherance can be achieved. The final coefficient to this equation, however, has to do with the audience. As Booth, Colomb, and Williams (2008) note in The Craft of Research, “Down the road, you’ll be expected to find (or create) a community of readers who not only share an interest in your topic (or can be convinced to), but also have questions about it you can answer” (p. 19). This gives rise to the consideration of what problems can be solved for others, what questions can be answered for others, and what good your research can do for others. Great research needn’t be solely a transparent journey into the center of you; it should also serve a purpose outside of the self, and serve a vested audience.

How Can I Help Them? We are not all researchers comfortable with every design available. Some of us prefer quantitative, some qualitative, some mixed methods. When considering the above on operationalization, as well as the furtherance of an existing scholarly exchange, we also have the opportunity to decide from among the possible designs which will ensure the largest captive, receptive audience as appropriate. As noted by the famed methodologist John Creswell (2009) in Research Design, “researchers write for audiences that will accept their research. These audiences may be journal editors, journal readers, graduate committees, conference attendees, or colleagues in the field… The experiences of these audiences with quantitative, qualitative, or mixed methods studies can shape the decision made about this choice” (p. 19). This becomes a task not only of knowing your audience, but also of understanding which design will be most helpful, as it will equally be the design which brings the learning curve down to near nonexistence among your readership.

Again, I recognize I am taking a highly reductionist view. I hope those of you who see this as such also recognize this is meant to be a primer alone. These words certainly do not reflect all one should consider when beginning a research project. What this does represent, though, is a list of things to consider to get you on a productive path. A path toward enlightenment, toward understanding, and one well-trodden by those who were just as curious about the world around them. Hopefully, and if you’re lucky, it will also be a path which only leads to more questions.

Booth, W. C., Colomb, G. G., & Williams, J. M. (2008). The craft of research. Chicago, IL: The University of Chicago Press.

Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches. Thousand Oaks, CA: Sage Publications, Inc.

Dane, F. C. (2011). Evaluating research: Methodology for people who need to read research. Thousand Oaks, CA: Sage Publications, Inc.

Palmer, P. J., & Zajonc, A. (2010). The heart of higher education: A call to renewal. San Francisco, CA: Jossey-Bass.

About the Author:

Senior decision support analyst for Healthways, and current adjunct faculty member for Grand Canyon University, South University, and Walden University, Dr. Barclay is a multi-method researcher, institutional assessor, and program evaluator. His work seeks to identify those insights from among enterprise data which are critical to sustaining an organization’s ability to compete. That work spans the higher education, government, nonprofit, and corporate sectors. His current research is in the areas of employee engagement, faculty engagement, factors affecting self-efficacy, and teaching in higher education with a focus on online instruction.

Mitigating Hazards in Justified Conclusions & Sound Design

A1 and A6 of The Program Evaluation Standards regard Justified Conclusions and Decisions, and Sound Designs and Analyses, respectively. Where A1 asks that evaluation conclusions and decisions be explicitly justified in the cultures and contexts where they have consequences, A6 asks that evaluations employ technically adequate designs and analyses that are appropriate for the evaluation purposes (Yarbrough et al., 2011, pp. 165-167). In both instances, we regard standards which impact the potential accuracy of an evaluation. When discussing strategies for mitigating the hazards associated with these standards, previous coverage elucidated suitable actions ranging from integrating stakeholder knowledge frameworks, to clarifying roles within the evaluation team, to properly defining what is meant by accuracy in the context of a given evaluation. Extant strategies discussed also include selecting designs based on the evaluation’s purpose, while still including enough flexibility in the design that compromise and uncertainty can be permitted during this iterative process. Here we discuss strategies in addition to those previously mentioned, focusing on mitigating the hazards associated with the accuracy standards by exploring both the concept of triangulation and that of establishing validation in practice.

Triangulation is employed across quantitative, qualitative, and mixed methods research alike as a means of preventing such common errors as drawing conclusions from samples which are not representative of their stated population, and it permits the reduction of confirmation bias among findings. As per Patton (2002), “Triangulation is ideal. It can also be expensive. A study’s limited budget and time frame will affect the amount of triangulation that is practical, as will political constraints in an evaluation. Certainly, one important strategy for inquiry is to employ multiple methods, measures, researchers, and perspectives – but to do so reasonably and practically” (p. 247). This operates as a strategy for mitigating hazards among the accuracy standards, as the primary intent of those standards is to provide context-specific conclusions which are defensible and inclusive of the stakeholders involved in the process. Risks to accuracy are addressed by this strategy because, while a single data point or collection method may provide a pertinent picture of the evaluation’s efficacy, the ability to further substantiate those results via additional methods and additional data only serves to strengthen the conclusions reached. In addition to ensuring data integrity via a multi-method approach to data collection, establishing validation in practice remains an additional strategy for mitigating the hazards of our accuracy standards.
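Before turning to validation, here is a minimal sketch of triangulation operating as a simple consistency check: the same construct is estimated from two independent sources, and divergence beyond a tolerance signals that the conclusion needs further substantiation. All data, sources, and the tolerance are illustrative, not drawn from any actual evaluation.

```python
# Illustrative triangulation check: estimate participant satisfaction from
# two independent sources and flag conclusions that the sources do not support.
survey_scores = [4.2, 3.8, 4.5, 4.0, 3.9]            # 1-5 Likert responses (hypothetical)
interview_codes = ["positive", "positive", "mixed",  # themed interview codings (hypothetical)
                   "positive", "negative", "positive"]

survey_estimate = sum(survey_scores) / len(survey_scores) / 5        # normalize to 0-1
interview_estimate = interview_codes.count("positive") / len(interview_codes)

tolerance = 0.15  # illustrative threshold for "the sources agree"
if abs(survey_estimate - interview_estimate) <= tolerance:
    print("Sources converge; the conclusion is better substantiated.")
else:
    print("Sources diverge; revisit sampling, measures, or interpretation.")
```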

On establishing validation in practice, Goldstein and Behuniak (2011) comment, “In the Standards, validity is defined as the ‘degree to which evidence and theory support the interpretations of test scores entailed by proposed uses of tests… The interrelationships among the interpretations and proposed uses of test scores and the sources of validity evidence define the validity argument for an assessment” (p. 180). What then becomes pertinent among the evaluation team’s efforts is to ensure any and all relevant validity evidence is collected, alongside the proposed uses of the methods selected and employed for a particular evaluation. There remains an onus upon the evaluation team not only to substantiate the conclusions borne of the team’s data collection and analysis of the immediate program and environment, but equally to first establish the validity of the methods and instruments chosen for that data collection and analysis. Simply requiring stakeholders to trust in the expert judgment of an evaluation team’s selection of methods and instruments is cause for concern, as this does not permit the kind of inclusion of stakeholder knowledge frameworks mentioned above. Rather, to ensure that accuracy standards are upheld, an inclusive process of iterative reviews of the proposed design, and of its execution, with stakeholder groups provides the level of holism required to conclude a design and its instruments accurate. The evaluation team absolutely brings with it the knowledge, experience, and technical prowess necessary to perform a successful evaluation, yet doing so without consulting the knowledge and experience of stakeholders leaves open the possibility of research that is not in alignment with what is intended by those employing such a team. Stakeholders, while not technical or content experts of AEA per se, have as much to contribute on the selection of data points, methods employed, and analysis performed as the team itself, based solely on their direct involvement with the program and the experience that interaction brings with it. They would not serve as the primary source of suggestions for design, yet would serve to discern which design might serve the current situation best. Said of this paradox, Booth, Colomb, and Williams (2008) note, “A responsible researcher supports a claim with reasons based on evidence. But unless your readers think exactly as you do, they may draw a different conclusion or even think of evidence you haven’t” (p. 112).

Booth, W. C., Colomb, G. G., & Williams, J. M. (2008). The craft of research (3rd Ed.). Chicago, IL: The University of Chicago Press.

Goldstein, J., & Behuniak, P. (2011). Assumptions in alternate assessment: An argument-based approach to validation. Assessment for Effective Intervention, 36(3), 179–191.

Patton, M. Q. (2002). Qualitative research & evaluation methods. Thousand Oaks, CA: Sage Publications, Inc.

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.

Faculty Accountability through Individual Assessment Data

In what way(s) can we as faculty hold ourselves increasingly accountable for the learning outcomes of our students, evidenced through increasing means of individual assessment rooted in both quantitative and qualitative measures alike?

This broader phraseology is used not because I could not think of a more specific question, but instead is written in a way that takes into account whatever context and existing levels of assessment each faculty member already employs. I have worked with organizations where the faculty member’s performance is judged using a triangulation of supervisor feedback on progress toward established goals, in conjunction with a series of measures against student scores in class, combined with feedback from an ongoing student survey process. Yet does this triangulation process provide enough data to truly carry out individual assessment at a level which demonstrates sufficient accountability? By setting clear and ambitious goals, each institution can determine and communicate how it can best contribute to the realization of the potential of all its students (Association of American Colleges & Universities, 2008, p. 2). With this in mind, our first consideration must be less about the process by which individual assessment is carried out, and more about whether the goals as currently established are sufficient for the purpose of holding individuals accountable to a sufficient standard. Were the goal simply to ensure a high proportion of students pass each class, this completion goal would be met with low levels of accountability for how that goal is met. Alternatively, a goal which includes reference to areas of assessment, professional development, and curricular review, all while targeting student success, begets a goal which holds faculty to a higher level of accountability both for the content and the method(s) of individual assessment and performance.

Another strategy for holding faculty to a higher level of individual accountability in assessment concerns the data points collected. Outcomes, pedagogy, and measurement methods must all correspond, both for summative assessment, such as demonstrating students have achieved certain levels, and for formative assessment, such as improving student learning, teaching, and programs (Banta, Griffin, Flateby, & Kahn, 2009, p. 6). In considering how such a dynamic process is then implemented, we can move beyond such concepts as a community of practice or a community of learning, and instead consider the implementation of a community of assessment. Holding each other mutually accountable for formative and summative assessment alike is one way a faculty member can gain more in-depth data during his or her individual assessment process, by eliciting the feedback of supervisors, peers, other colleagues, and students collectively in order to form a community of assessment. One extant method of this today is the 360-degree feedback process. This process asks for performance feedback regarding one individual, sought from positions proximal to the individual in all directions, ranging from those the person works for, to those he or she works with, to those who are served by him or her. Such a process can help instigate a community of assessment by sharing the individual assessment process among many, permitting both richer data for individual assessment and a subsequent means for theming data across individuals as well. Such a process can combine feedback from students and fellow faculty to learn how a particular program is serving the community, while equally assessing, say, teaching style and whether and how this impacts a faculty member’s ability to teach. The implications of such a process are promising, not only because there are already a great number of tools available to implement such an evaluation process, but equally because the individual assessment process is then served by a multifaceted data collection procedure. One of the best ways of asserting the merits of the academy is to implement an assessment-of-learning system that simultaneously helps improve student learning and provides clear evidence of institutional efficacy that satisfies appropriate calls for accountability (Hersh, 2004, p. 3).
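As a rough illustration of what a community-of-assessment data procedure might look like, the sketch below summarizes 360-degree feedback on a single faculty member source by source, rather than collapsing it into one supervisor rating. All sources, scales, and scores are hypothetical.

```python
from statistics import mean

# Hypothetical 360-degree feedback for one faculty member, kept separate by source
# so the individual assessment reflects the whole community of assessment.
feedback = {
    "supervisor": [4.0],
    "peers": [3.5, 4.2, 3.8],
    "students": [4.4, 4.1, 3.9, 4.6],
}

summary = {source: round(mean(scores), 2) for source, scores in feedback.items()}
print(summary)  # e.g. {'supervisor': 4.0, 'peers': 3.83, 'students': 4.25}

# Divergence across sources is itself formative data: a large gap between, say,
# peer and student ratings signals where to probe with open-ended follow-up.
gap = max(summary.values()) - min(summary.values())
print("Largest cross-source gap:", round(gap, 2))
```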

– Justin

Association of American Colleges & Universities (AAC&U) Council for Higher Education Accreditation (CHEA). (2008). New leadership student learning accountability: A statement of principles, commitments to action. Washington, DC. Retrieved from http://www.newleadershipalliance.org/images/uploads/new%20leadership%20principles.pdf.

Banta, T. W., Griffin, M., Flateby, T. L., & Kahn, S. (2009). Three promising alternatives for assessing college students’ knowledge and skills. (NILOA Occasional Paper No.2). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). Retrieved from http://learningoutcomesassessment.org/documents/AlternativesforAssessment.pdf.

Hersh, R. H. (2004). Assessment and accountability: Unveiling value added assessment in higher education. A paper presented at the AAHE National Assessment Conference, Denver, CO. Retrieved from http://www.aacu.org/resources/assessment/Hershpaper.pdf.

Justified Conclusions & Sound Evaluation Design

Among the program evaluation standards are the two accuracy standards A1: Justified Conclusions and Decisions, and A6: Sound Designs and Analyses. A1, regarding justified conclusions and decisions, is defined by Yarbrough, Shulha, Hopson, and Caruthers (2011) as, “Evaluation conclusions and decisions should be explicitly justified in the cultures and contexts where they have consequences” (p. 165). The hazards associated with this standard are many; they include assumptions of accuracy among evaluation teams, the ignoring of cultural cues and perspectives, assumptions of transferability, and a number of hazards concerning the emphasis on technical accuracy at the expense of cultural inclusivity and immediate environmental context (Yarbrough et al., 2011, p. 167). Where this particular standard is most consequential concerns the sociological factors inherent in any assessment. Said of such factors, Ennis (2010) notes, “It is the liking part – the emotional, aesthetic, or subjective decision to actively cooperate with an institution’s assessment regime – that suggests the difficulties inherent in coupling the success of an assessment program to the establishment of an assessment culture” (p. 2). Thus, an assessment culture cannot be established through rigor or displays of technical prowess alone.

An institution’s absorptive capacity is not directly correlated with its rate of acceptance of new knowledge. Rather, hazards such as disregarding the extant culture and subcultures, disregarding the needs of the immediate environment, and disregarding transferability all pose a direct threat to both the acceptance and the adoption of whatever findings an assessment produces. The recommendations for correcting for this therefore include (1) clarifying which stakeholders will form conclusions and permitting the integration of those stakeholders’ knowledge frameworks; (2) clarifying the roles and responsibilities of evaluation team members; (3) ensuring findings reflect the theoretical terminology as defined by those who will draw conclusions; (4) identifying the many definitions of accuracy held by assessment users; and (5) making effective choices regarding depth, breadth, and representation of the program (Yarbrough et al., 2011, p. 166).

A6, the standard of sound designs and analyses, is defined by Yarbrough et al. (2011) as, “Evaluations should employ technically adequate designs and analyses that are appropriate for the evaluation purposes” (p. 201). The associated hazards for this standard include a number of considerations for responsiveness to the features, factors, and purpose(s) of a given program. Such hazards include choosing a design based on its status or reputation rather than its ability to provide high quality conclusions, a lack of preparation for potentially disappointing evaluation findings, a lack of consideration for the many feasibility, propriety, and utility standards, a lack of customization of the design to the current environment, and a lack of broad-based consultation with stakeholders at multiple levels (Yarbrough et al., 2011, p. 204). The effects of a lacking, misguided, or inappropriate design can be devastating to the overall efficacy of a given assessment. Said of the need for sound design, Booth, Colomb, and Williams (2008) comment, “In a research report, you must switch the roles of student and teacher. When you do research, you learn something that others don’t know. So when you report it, you must think of your reader as someone who doesn’t know it but needs to and yourself as someone who will give her reason to want to know it” (p. 19).

Performing an assessment based solely on the popularity of the design employed misses the point of assessing the program at hand, which is to formulate a strategy for better understanding the unique program under study and to relate the gathered data in a way both digestible and actionable by those who hold a stake. One can employ a procedure which reliably gathers data, yet if the data are unrelated or unnecessary, the design lacks both utility and, in this instance, accuracy. So how can accuracy be increased and applicability restored? It is suggested to instead select designs based on the evaluation’s purpose, secure adequate expertise, closely evaluate any designs which are in contention, choose frameworks which provide justifiable conclusions, allow for compromise and uncertainty, and consider the possibility of ongoing, iterative modifications to the design over protracted periods to ensure currency (Yarbrough et al., 2011, p. 204). Doing so will not only ensure that your audience receives actionable results, but that same audience can also hold the design, and the collective opinion of the efficacy of the assessment, in greater confidence, as their understanding of it is equally elevated.

Booth, W. C., Colomb, G. G., & Williams, J. M. (2008). The craft of research (3rd Ed.). Chicago, IL: The University of Chicago Press.

Ennis, D. (2010). Contra assessment culture. Assessment Update, 22(2), 1–16.

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.

Post-Doc Blogpost: What Sets Empowerment Evaluation Apart

According to Middaugh (2010), “There is no shortage of frustration with the inability of the American higher education system to adequately explain how it operates… The Spellings Commission (2006) chastised higher education officials for lack of transparency and accountability in discussing the relationship between the cost of a college education and demonstrable student learning outcomes” (p. 109). As we continue to look into instigating and preserving positive change, I find Middaugh’s words to go beyond a burning platform and instead resonate as a call to action. Empowerment evaluation is not simply about urging faculty and administrators to perform better evaluations, improving analytical or reporting skills, or increasing skills in evaluative inquiry. Empowerment evaluation in this environment is about being able to better understand one’s self, in a highly regulated and highly monitored environment where the stakes remain quite high.

As three other views into this relevance, we begin with Fetterman and his article Empowerment evaluation: Building communities of practice and a culture of learning. Therein Fetterman (2002) describes, “Empowerment evaluation has an unambiguous value orientation – it is designed to help people help themselves and improve their programs using a form of self-evaluation and reflection” (p. 89). Empowerment evaluation is therefore neither strictly formative nor summative, particularly as it is not evaluation performed by evaluation personnel. Rather, it creates enhanced opportunities for sustainability, as empowerment evaluation permits stakeholders to conduct their own evaluations. What is greatly advantageous about this approach is its direct relationship to change processes. Just as with a guiding vision in most established change processes, empowerment evaluation begins with organizational mission. Fetterman (2002) notes, “An empowerment evaluator facilitates an open session with as many staff member and participants as possible. They are asked to generate key phrases that capture the mission of the program or project” (p. 91).

Party to this line of thinking is also the step of first determining present state before defining future state. As described by Worthington (1999), “First, it is highly collaborative, with input from program stakeholders at every stage of the evaluation process. The four steps or stages of empowerment evaluation are: (1) “taking stock,” during which stage program participants rate themselves on performance; …” (p. 2). This article is equally instructive for helping to make sense of how one takes an organization from present state to future state. As Worthington (1999) later describes, “Empowerment evaluation contains elements of all three forms of participatory research. It is a reciprocal, developmental process that aims to produce ‘illumination’ and ‘liberation’ from role constraints among participants; it shares with action research a commitment to providing tools for analysis to program participants; and the evaluator takes a less directive, collaborative role” (p. 7).
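To make Worthington’s “taking stock” stage concrete, here is a minimal sketch in which program participants rate their own key activities and the group’s baseline surfaces the candidates for future-state goals. The activities, the 1-10 scale, and the ratings are all hypothetical.

```python
from statistics import mean

# Hypothetical "taking stock" self-ratings: each list holds participants'
# 1-10 ratings of how well the program performs a key activity today.
self_ratings = {
    "communication of mission": [6, 7, 5, 8],
    "use of evaluation data":   [4, 5, 3, 4],
    "stakeholder outreach":     [7, 6, 8, 7],
}

baseline = {activity: mean(scores) for activity, scores in self_ratings.items()}
for activity, score in sorted(baseline.items(), key=lambda kv: kv[1]):
    print(f"{activity}: {score:.1f} / 10")
# The lowest-rated activities become candidates for the group's future-state goals.
```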

Finally, among the list of supporting articles for this post we find A framework for characterizing the practice of evaluation, with application to empowerment evaluation. This article is meaningful because it looks to provide a line of demarcation separating empowerment evaluation from other forms of evaluation. As per Smith (1999), “A useful first step in clarifying the diversity of evaluation practice might be the development of a comprehensive framework with which to compare and contrast fundamental attributes of any evaluation approach” (p. 43). The characteristics of this framework include consideration for context, role, interest, rules, justification, and sanctions. Smith (1999) therefore continues, “This analysis of Empowerment Evaluation illustrates how the aspects of the framework (context, purpose and social role, phenomena of interest, procedural rules, methods of justification, and sanctions), are highly interrelated… The primary phenomena of interest in Empowerment Evaluation are participant self-determination, illumination, and liberation, and not the worth of programs” (p. 63). This becomes important as the line of demarcation appears not to be a single line, but one containing many interrelated facets. Yet if the core of empowerment evaluation is to focus on the increased capability of evaluative inquiry among organizational stakeholders, this becomes a differentiated form of evaluation reporting, and determining whether one was successful in inciting such a transformation becomes paramount. I will thus continue my search not only for continued understanding of how empowerment evaluation differs from other forms of evaluation, but will equally focus on how communicating results and instigating change affect the very outputs of this process in specific.

Fetterman, D. M. (2002). Empowerment evaluation: Building communities of practice and a culture of learning. American Journal of Community Psychology, 30(1), 89-102.

Fetterman, D. M. & Wandersman, A. (2005). Empowerment evaluation: Principles in practice. New York, NY: The Guilford Press.

Middaugh, M. F. (2010). Planning and assessment in higher education: Demonstrating institutional effectiveness. San Francisco, CA: Jossey-Bass.

Smith, N. L. (1999). A framework for characterizing the practice of evaluation, with application to empowerment evaluation. The Canadian Journal of Program Evaluation, 14, 39-68.

Worthington, C. (1999). Empowerment evaluation: Understanding the theory behind the framework. The Canadian Journal of Program Evaluation, 14(1), 1-28.