Research in Business is Everyone’s Business

Collective-Consciousness

I am a firm believer that holding a greater number of college degrees does not necessarily mean you are smarter than those with fewer. I am unapologetic in my stance, as I believe the role of the university is not to increase your IQ (arguably a number with little flux). The role of the university is instead to train you, largely in a particular discipline, a process, or both. Yes, some programs require a greater degree of raw intelligence, and the purpose of this post is not to draw those lines. The purpose instead is to understand how we can walk away from the misconception that only those with a research background can perform business research. What connects these two dots? In short, the conclusion that just because someone has a PhD, it does not mean they know more about your business than you do. In fact, the opposite is usually true. If they are trained in the process and you are intimate with your business, I would like to make a suggestion: when seeking a greater understanding of your business's process, program, or product performance, team up to form a symbiotic relationship between the business and a researcher, so you can both accomplish more by doing the research together.

The Why – The Interpretive Approach

Among the approaches to organization studies is the interpretive approach. As described by Aldrich and Ruef (2009):

The interpretive approach focuses on the meaning social actions have for participants at the micro level of analysis. It emphasizes the socially constructed nature of organizational reality and the processes by which participants negotiate the meanings of their actions, rather than taking them as a given. Unlike institutional theorists, interpretive theorists posit a world in which actors build meaning with locally assembled materials through their interaction with socially autonomous others. (p. 43)

If this is true, then a lone researcher cannot simply be transplanted from one organization to the next while delivering revenue-trajectory-altering research in a vacuum. The research must be built on great questions, and those questions may well come from the business, because the very meaning of the business and the data it generates is embedded within the interactions among the actors in the business itself.

The What – A Symbiotic Relationship

A relationship where you – representing the business – provide the context, perhaps even help gather some of the data, and take part in the interpretation once the researcher has completed a substantive portion of his/her analysis. You're a team; the researcher is not a gun for hire. Which also means that if you're a team, you're a researcher too. This approach is important for many reasons, not least your store of tacit knowledge. As we are reminded by Colquitt, LePine, and Wesson (2013), "Tacit knowledge [is] what employees can typically learn only through experience. It's not easily communicated but could very well be the most important aspect of what we learn in organizations. In fact, it's been argued that up to 90 percent of the knowledge contained in organizations occurs in tacit form" (p. 239). That is a vast amount of available information the researcher simply will not have if you do not team up and start working together.

The How – A Cue from Empowerment Evaluation

We can draw a number of conclusions on how best to form this reciprocal relationship between business and researcher as one team, and many come from the literature on empowerment evaluation. As put by Fetterman and Wandersman (2005):

If the group does not adopt an inclusive and capacity-building orientation with some form of democratic participation, then it is not an empowerment evaluation. However, if the community takes charge of the goals of the evaluation, is emotionally and intellectually linked to the effort, but is not actively engaged in the various data collection and analysis steps, then it probably is either at the early developmental stages of empowerment evaluation or it represents a minimal level of commitment. (p. 9)

There is a final, critical subtext to all of the above. In essence, there must be a consistent flow of ideas between the researcher and the business. Research in business is everyone's business, yet it thrives only in environments where the researcher can share his/her craft and the business, better informed, can grant the researcher access to the knowledge only it possesses. For a final thought on the merits of this proposed team, I defer to the literature on constructing grounded theory. Therein Charmaz (2014) reminds us that, "We need to think about the direction we aim to travel and the kinds of data our tools enable us to gather… Attending to how you gather data will ease your journey and bring you to your destination with a stronger product" (p. 22).

About the Author:

Senior decision support analyst for Healthways and current adjunct faculty member for Allied American University, Grand Canyon University, South University, and Walden University, Dr. Barclay is a multi-method researcher, institutional assessor, and program evaluator. His work seeks to identify those insights within enterprise data which are critical to sustaining an organization's ability to compete. That work spans the higher education, government, nonprofit, and corporate sectors. His current research is in the areas of employee engagement, faculty engagement, factors affecting self-efficacy, and teaching in higher education with a focus on online instruction.

Faculty Accountability through Individual Assessment Data

In what way(s) can we as faculty hold ourselves increasingly accountable for the learning outcomes of our students, evidenced through increasing means of individual assessment rooted in quantitative and qualitative measures alike?

This broader phraseology is used not because I could not think of a more specific question, but because it is written in a way that takes into account whatever context and existing levels of assessment each individual faculty member already employs. I have worked with organizations where a faculty member's performance is judged using a triangulation of supervisor feedback on progress toward established goals, a series of measures against students' in-class scores, and feedback from an ongoing student survey process. Yet does this triangulation provide enough data to truly carry out individual assessment at a level which demonstrates sufficient accountability? By setting clear and ambitious goals, each institution can determine and communicate how it can best contribute to the realization of the potential of all its students (Association of American Colleges & Universities, 2008, p. 2). With this in mind, our first consideration must be less about the process by which individual assessment is carried out and more about whether the goals as currently established are sufficient for holding individuals accountable to a sufficient standard. Were the goal simply to ensure a high proportion of students pass each class, that completion goal could be met with low levels of accountability for how it is met. Alternatively, a goal which includes reference to areas of assessment, professional development, and curricular review, all while targeting student success, holds faculty to a higher level of accountability for both the content and the method(s) of individual assessment and performance.

Another strategy for holding faculty to a higher level of individual accountability in assessment concerns the data points collected. Outcomes, pedagogy, and measurement methods must all correspond, both for summative assessment, such as demonstrating students have achieved certain levels, and for formative assessment, such as improving student learning, teaching, and programs (Banta, Griffin, Flateby, & Kahn, 2009, p. 6). In considering how such a dynamic process is implemented, we can move beyond concepts such as a community of practice and a community of learning and instead consider the implementation of a community of assessment. Holding each other mutually accountable for formative and summative assessment alike is one way a faculty member can gain more in-depth data during his/her individual assessment process, by eliciting feedback from supervisors, peers, other colleagues, and students collectively. One extant method of this today is the 360-degree feedback process, which seeks performance feedback regarding one individual from positions proximal to that individual in all directions, ranging from those the person works for, to those he/she works with, to those who serve him/her. Such a process can help instigate a community of assessment by sharing the individual assessment process among many, permitting both richer data for individual assessment and a subsequent means of theming data across individuals as well; see the sketch below. It can combine feedback from students and fellow faculty to learn how a particular program is serving the community, while equally assessing, say, teaching style and whether or how it affects a faculty member's ability to teach. The implications of such a process are promising, not only because there are already a great number of tools available to implement such an evaluation process, but equally because the individual assessment process is then served by a multifaceted data collection procedure. One of the best ways of asserting the merits of the academy is to implement an assessment-of-learning system that simultaneously helps improve student learning and provides clear evidence of institutional efficacy that satisfies appropriate calls for accountability (Hersh, 2004, p. 3).
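To make the idea of multi-source data a little more concrete, here is a minimal sketch, with entirely hypothetical names, ratings, and scale, of how 360-degree feedback might be grouped by source so that supervisor, peer, and student input each remain visible in an individual assessment record rather than being collapsed into a single number.

# Illustrative sketch only; the feedback entries and 1-5 scale are hypothetical.
from collections import defaultdict
from statistics import mean

# Each entry: (source, rating, open-ended comment)
feedback = [
    ("supervisor", 4, "Met course-redesign goal ahead of schedule"),
    ("peer",       5, "Shares assessment rubrics openly with colleagues"),
    ("peer",       4, "Could involve others earlier in curricular review"),
    ("student",    4, "Feedback on drafts was timely and specific"),
    ("student",    3, "Wants more worked examples before major assignments"),
]

# Group ratings by source rather than averaging everything together,
# preserving the triangulation described above.
by_source = defaultdict(list)
for source, rating, _comment in feedback:
    by_source[source].append(rating)

for source, ratings in by_source.items():
    print(f"{source}: mean {mean(ratings):.1f} across {len(ratings)} responses")

Keeping the per-source breakdown is what allows the same data to serve individual assessment and, later, theming across individuals.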

– Justin

Association of American Colleges & Universities (AAC&U) & Council for Higher Education Accreditation (CHEA). (2008). New leadership for student learning and accountability: A statement of principles, commitments to action. Washington, DC. Retrieved from http://www.newleadershipalliance.org/images/uploads/new%20leadership%20principles.pdf

Banta, T. W., Griffin, M., Flateby, T. L., & Kahn, S. (2009). Three promising alternatives for assessing college students' knowledge and skills (NILOA Occasional Paper No. 2). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment (NILOA). Retrieved from http://learningoutcomesassessment.org/documents/AlternativesforAssessment.pdf

Hersh, R. H. (2004). Assessment and accountability: Unveiling value added assessment in higher education. Paper presented at the AAHE National Assessment Conference, Denver, CO. Retrieved from http://www.aacu.org/resources/assessment/Hershpaper.pdf

Post-Doc Blogpost: Machine Scoring & Student Performance

I believe one concept at the heart of considering the validity of a writing assessment via automated essay scoring (AES) as a measure of student performance in a graduate program is fidelity. Fidelity has to do with the degree to which the task, the response mode, and what is actually scored match the requirements found in the real world (Zane, 2009, p. 87). This includes consideration of such elements as context, structure, and the item's parameters. What lends validity to such an assessment is the fact that much of what is expected of a writing assessment mirrors the college experience. From grammar and sentence structure to critical thinking and conceptual integration, an automated writing assessment permits greater fidelity in an exercise which emulates the student experience in myriad ways. Where attention must be paid, however, is the validity of the objective scoring provided by an AES system.

Machine scores are based on a limited set of quantifiable features in an essay, while human holistic scores are based on a broader set of features, including many, such as the logical consistency of an argument, which cannot yet be evaluated by a machine (Bridgeman, Trapani, & Attali, 2012, p. 28). The assessment itself, then, is what provides much of the fidelity to the student experience. What has yet to be developed is a holistic way of interpreting and scoring essay responses with the same depth, and the same consideration for elements not contained within a designed algorithm, as human raters provide. Human raters can probe creativity, innovation, and integrative thinking. Human raters can identify off-topic content just as an AES system would, yet rather than reducing the score by default, as the AES design dictates, the human rater can form an individual opinion about the off-topic content and its relevance to the response. Where a test-taker is asked about the concept of truth in an environment such as Accuplacer testing, for example, the scoring mechanism might deduct for the use of vocabulary proximal to art, even though the author invokes art precisely because art and truth alike are subjective interpretations. What AES does well, and human raters do not, however, is score essay responses with an equal level of consistency across multiple raters and multiple ratings.

As AES models are often formed using more than two raters, studies that have evaluated interrater agreement have usually shown that the agreement coefficients between the computer and human raters are at least as high as, or higher than, those among human raters themselves (Shermis, Burstein, Higgins, & Zechner, 2010, p. 22). This gives pause to any doubt cast on an AES system's ability to score work accurately and reliably. Yet this reliability does not necessarily denote validity, as we have previously discussed. Thus, an environment where at least one AES score and one human rater score are considered in conjunction presents the most promising synthesis of both approaches for a single assessment item and score. Further understanding of rater cognition is necessary to have a more thorough understanding of what is implied by the direct calibration and evaluation of AES against human scores and what, precisely, is represented in a human score for an essay (Ramineni & Williamson, 2013, p. 37). With this in mind, it is critical that those designing such assessments remain attentive to differences in scores among human raters, differences between machine and human rater scores, and any differences in the AES model's ability to measure student performance against a particular rubric. Should the rubric cover concepts such as creativity, an area where AES is at a disadvantage, that emphasis on content should be the driving force behind the decision to use AES. Ultimately, the context, the grading criteria as explained by the rubric, and the opportunity for joint assessment via human rater will drive the decision of whether AES is a valid source of objective writing assessment.
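Because interrater agreement figures so prominently in these comparisons, a quick illustration may help. Below is a minimal sketch, using entirely hypothetical scores, of computing a quadratic weighted kappa between one human rater and one machine score, a common agreement statistic in the AES literature; it is not drawn from any of the studies cited above.

# Minimal illustration with hypothetical data.
from sklearn.metrics import cohen_kappa_score

# Hypothetical essay scores on a 1-6 rubric, one pair per essay.
human_scores   = [4, 3, 5, 2, 4, 3, 5, 1, 4, 3]
machine_scores = [4, 3, 4, 2, 5, 3, 5, 2, 4, 3]

# Quadratic weighting penalizes large disagreements more heavily than adjacent ones.
qwk = cohen_kappa_score(human_scores, machine_scores, weights="quadratic")
print(f"Quadratic weighted kappa (human vs. machine): {qwk:.2f}")

Comparing this figure with the kappa between two human raters on the same essays is what the literature means when it reports machine-human agreement that is "at least as high or higher" than human-human agreement.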

Bridgeman, B., Trapani, C., & Attali, Y. (2012). Comparison of human and machine scoring of essays: Differences by gender, ethnicity, and country. Applied Measurement in Education, 25(1), 27–40.

Ramineni, C., & Williamson, D. M. (2013). Automated essay scoring: Psychometric guidelines and practices. Assessing Writing, 18(1), 25–39.

Shermis, M. D., Burstein, J., Higgins, D., & Zechner, K. (2010). Automated essay scoring: Writing assessment and instruction. In P. Peterson, E. Baker, & B. McGaw (Eds.), International encyclopedia of education (3rd ed., pp. 20–26). Oxford, UK: Elsevier.

Zane, T. W. (2009). Performance assessment design principles gleaned from constructivist learning theory (Part 2). TechTrends, 53(3), 86–94.

The Juxtaposition of Social/Ethical Responsibility across Disciplines

As I approach full speed in the post-doctoral program, I equally approach my first opportunity to share publicly the insights derived from this study of assessment, evaluation, and accountability. The American Evaluation Association (AEA) has established social and ethical responsibilities for evaluators. In juxtaposition, the social and ethical responsibilities of institutional research, as an education-based area of interest, are expressed by the Association for Institutional Research (AIR). Yet first, a personal introduction as requested by this assignment. I have chosen institutional research as my professional education-based area of interest, as research and analysis have been at the heart of much of what I have done for the past decade or more.

For well over ten years I have straddled industry and academe, not only to remain a lifelong learner but to keep leveraging what I take from each course and applying it as readily as possible to my working world, in industry and in the classroom, to the benefit of my employers and my students. As mentioned elsewhere on my 'about' page, my work includes a multitude of projects focused on distilling a clear view of institutional effectiveness and program performance. Roles have included senior outcomes analyst, management analyst, operations analyst, assessor, and faculty member for organizations in industries ranging from higher education to hardware manufacturing and business intelligence. With each position came a new opportunity to assimilate new methods for assessing data. With each new industry came a new opportunity to learn a new language, adhere to new practices, and synthesize the combined experience that is the sum of their parts. Yet in no instance do I walk away with a steadfast sense that I have learned more and therefore have less left to learn. In each instance I instead feel as though I know and have experienced even less of what the world has to offer. Focusing this indefinite thirst on assessment and evaluation specifically, the task becomes pursuing ever-greater growth and ever-greater success across a wide range of applications, industries, and instances, while remaining true to guiding principles which serve those who benefit from any lesson I learn or analysis I perform.

The Program Evaluation Standards articulate propriety standards which include responsive and inclusive orientation, formal agreements, human rights and respect, clarity and fairness, transparency and disclosure, conflicts of interest, and fiscal responsibility. At the heart of these, Yarbrough, Shulha, Hopson, and Caruthers (2011) remark, "Ethics encompasses concerns about the rights, responsibilities, and behaviors of evaluators and evaluation stakeholders… All people have innate rights that should be respected and recognized" (p. 106). This is then compared with a like-minded statement from the AEA itself: "Evaluators have the responsibility to understand and respect differences among participants, such as differences in their culture, religion, gender, disability, age, sexual orientation and ethnicity, and to account for potential implications of these differences when planning, conducting, analyzing, and reporting evaluations" (Guiding Principles for Evaluators, n.d., para. 40). Finally, in juxtaposition, Howard, McLaughlin, and Knight (2012) write in The Handbook of Institutional Research, "All employees should be treated fairly, the institutional research office and its function should be regularly evaluated, and all information and reports should be secure, accurate, and properly reported… The craft of institutional research should be upheld by a responsibility to the integrity of the profession" (p. 42). Thus, in the end, while this post set out to explore a juxtaposition, a word which implies at least a slight degree of paradox, it in actuality finds nothing of the sort. Rather, we find congruence, and we find agreement.

It is important to uphold standards for the ethical behavior of evaluators, as the very profession is one steeped in a hard focus on data and the answers data provide. We as human beings, however, tend to this profession while flawed. We make mistakes, we miscalculate, we deviate from design, and we inadvertently insert bias into our findings. None of this may be done on purpose, and certainly not all transgressions are present in every study. The implication is there, though, that we can make mistakes and are indeed fallible. At the same time we are of a profession tasked with identifying what is data and what is noise, which programs work and which curricula do not, which survey shows desired outcomes and which employees are underperforming. These are questions which beget our best efforts, our most scientific endeavors, and our most resolute of trajectories: to identify only truths, however scarce, amid the many opportunities to be tempted toward manufacturing alternate, albeit perhaps more beneficial, realities for us as evaluators and our stakeholders. All participants have rights, all evaluators have rights, and all sponsors have rights. It is our task to serve the best collective interest, using the best methods available to ensure a properly informed future.

American Evaluation Association. (n.d.). Guiding principles for evaluators. Retrieved September 11, 2013 from http://www.eval.org/p/cm/ld/fid=51

Howard, R.D., McLaughlin, G.W., & Knight, W.E. (2012). The handbook of institutional research. San Francisco, CA: Jossey-Bass.

Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.

Amid the Tumult, the Purposive Manager

Management as a practice can be seen as a combination of art, craft, and science, which take place on an information plane, a people plane, and an action plane (Mintzberg, 2009). A good manager, then, is someone who moves beyond the traditional confines of seeing one’s function as planning/organizing/leading/controlling each in isolation at a specific point in time, and instead sees managing as using all aspects of one’s intuition, training, and talents at once and in perpetuity.

Where managers are now described in the literature as operating in an environment in which interruptions can be encountered as often as every 48 seconds, and a manager's attention is thus piecemeal and scattered across multiple tasks and decisions in a single hour, it remains important that a "good manager" use this as a strength rather than as what defines his/her work. Rather than using the bustle of today's business environment as an excuse for surface-level consideration of every decision encountered, it is an opportunity to convey consistency in message and purpose with every new decision. A day can be filled with hundreds of isolated decisions made at a glance, or those decisions can all be made while guided by a single thread of focused purpose and attention to the direction the manager wishes to push his/her ecosystem within a given sphere of influence. If the manager wishes to develop a team guided by thoughtful analysis, each decision made can be an interruption prior to returning to this task, or it can be a way to substantiate that wish by emphasizing thoughtful analysis in each decision. A good manager thus uses technical skills to facilitate the professional and technical aspects of daily work, while also using those same technical skills to describe how best to support and influence the technical aspects of others' work as well. Soft skills are equally important, ensuring not only that influence is exerted, but that the purpose and direction the manager sees fit are communicated within his/her network in a way which leaves a lasting impression.

An Operations Manager uses his technical knowledge of the business to drive operations, while also using this knowledge to guide others' work within the scope of process variation, selection, and retention. His soft skills are important because the potentially global, and very likely diverse, workforce with which he works must be influenced and led, not merely directed and controlled. The Finance Manager must use her technical skills to drive the fiduciary sustainability of an organization and her team, but must also use those skills to seek an efficacious value chain that sustains the organization's competitive advantage. Her soft skills thus provide the conduit for this process, and her technical skills the information necessary for her network to later develop the tacit knowledge required for it to occur. A "good manager", then, is aware of the organization's ecosystem and his/her influence on it, and will put to use all intuition, training, and talents present to help others see how an organization's value chain drives its purpose.

– Justin

Is the Model of the ‘Working Manager’ Truly Working?

Upon examination of the current worldview employed by the managers of today's organizations, the term 'working manager' immediately comes to mind. We live in a complicated time in which globalization is a given, knowledge networks are the foundation for action, and companies are only as successful as their most succinctly defined system of proprietary activity. This is met with economic times which have left many out of work, those who remain performing the work of several, many of them overqualified for the positions they hold, and organizations forced to perpetuate only those aspects of themselves which clearly add to the value proposition that is their economic engine.

Every organization needs performance in three major areas: the building and reaffirmation of values, the building and developing of its people, and direct results (Drucker & Maciariello, 2006). What today's manager is held most accountable for, I believe primarily because of the economic and competitive environment we are living in, is the direct results of his/her team. This runs counter to what is necessary for a perpetual organization, however, as it focuses the manager's time and attention on only one of the three aspects of getting things done through others. It does not take into account the activities necessary for developing people, and it does not take into account the activities necessary for building and reaffirming an organization's values.

Taking concepts such as cognitive dissonance into account, this leads the manager to believe that if results are what receive emphasis from senior leadership, then results must be what receives his/her time and attention as well. Managers who are not expected to focus on development do not develop people or perpetuate their organization's values. Instead, these managers rewrite the organization's values to emphasize action and results, just as senior leadership has done for them. The working manager therefore prevails: if results are to be center stage, the manager will take on just as much of the technical and activity-based responsibility of the team as any specialist under his/her charge. This will not lead to sustainability, however, only to immediate outcomes. Technology, advancing organizational forms, and the diversity of our interconnected global workforce should serve as a primer for developmental action, not something to return to 'once the work is finished'.

– Justin

Drucker, P. F. & Maciariello, J. A. (2006). The effective executive in action. New York, NY: HarperCollins Publishers.

On Perpetual Organizational Progress

Emergence as a recognized entity secures a tentative place for an organization in a population, but its persistence depends upon the continual replication of its routines and competencies (Aldrich & Ruef, 2006, p. 94). We think of where we work as somewhere fixed: an institution in its truest sense, a building with cubicles, desks with computers, and employees with bosses. Yet what the research has shown is that this is only the case because we collectively make it so each day, and the day we cease to do so is the day our organization equally ceases to persist. From this outlook, though, comes an equally ambitious upside… we have a choice about the organizational routines and competencies we elect to replicate and utilize. Said differently, we can begin to rethink, regroup, redirect, and retool at any time, as the organization is not in a fixed state. So what's stopping us? Transformational change involves a radical shift from one state of being to another, which is an extremely painful process… proactive transformation requires an awareness of the consequences the "new" context will have on the existing culture, behaviors, and mindset, if it is to be engaged in willingly (Biscaccianti, Esposito, & Williams, 2011, p. 30).

We as individual members of an organization function as both user and supporter of the organization, continually and paradoxically. We are project managers, financial analysts, account executives, and customer service representatives. We are defined by our role, by our processes, by the systems we use, the skills we have, and the declarative and procedural knowledge we employ. We do not change because we choose not to change, and we choose not to change because we took far too long learning and working and struggling to get where we are with what we know. Is this an accurate look at reality, though? To seek perpetual organizational progress is to seek a framework and mindset of near-daily renewal of our routines and competencies for the sake of our company's progress, not for change's sake alone, nor at the expense of individual accomplishment. The organization at its essence is an aggregation of human effort, not of best practices, industry standards, and heralded products and services. Put another way, individuals can be wildly successful and equally accomplished while the organizations they work for are in a constant state of flux and renewal. One can use and support an organization differently each day while being regarded as the expert of his/her craft. Thus, in order to pursue perpetual organizational progress, a new lens through which to view change is necessary.

The essence of the problem-finding and problem-solving approach revolves around the identification of problem characteristics and the extent to which they entail corresponding impediments to the activities of problem finding, framing, and formulating; problem solving; and solution implementation… methodologically, this approach responds to design science's call to comparatively evaluate alternative governing mechanisms that mitigate impediments, leading to more comprehensive problem formulations, more efficient searching for and creating of valuable solutions, and more successful implementation of solutions (2012, p. 58). This approach to organizational design allows us to ask far broader questions of management, and of every member's contribution to ongoing organizational success. Success has not yet been defined here, and that definition remains deliberately absent, as it must remain iterative. We should not seek success in traditional terms, as traditional terms warrant traditional practices, and those practices warrant the knowledge we already have and the processes we already use. Perpetual progress then means a perpetual identification of new problems, new obstacles, new impediments, new solutions, and a new definition of success with each march forward.

Is there a magic recipe all companies should follow for identifying the problems we must then address in perpetuity? With persistence as the goal, the answer remains: not likely. What we can do, however, is codify the process for identifying problems at the level of the individual organization, as the same routines and competencies which brought us to today can then serve as filters for identifying further opportunities for progress. Cognitive heuristics – problem-solving techniques that reduce complex situations to simpler judgmental operations – can become specific to an organizational form, or even an individual organization (Aldrich & Ruef, 2006, p. 120). The very fabric which defines how our organizations are successful now then becomes not what we choose to change, but what we use to evaluate what else should change. Success today does not set tomorrow's bar; it identifies today's neighboring problem. All else may change to exist on par with that new success.

Today's performance management systems seek to evaluate how well individual members are faring at performing pre-determined routines. Individual performance measurement is accepted as a retrospective task seeking convergent methods of routine persistence and level of competence. We set new goals, yet of the same routines. We establish new targets, yet of only marginally enlarged job descriptions. Skeptical? Ask yourself when you were last given a revised job description based on what you have learned during your year(s) of service and growth. Better still, ask yourself when you directly contributed toward the authorship of such a document. Rational system theorists stress goal specificity and formalization; natural system theorists generally acknowledge the existence of these attributes but argue that other characteristics – characteristics shared with all social groups – are of greater significance; and open systems are [instead] capable of self-maintenance on the basis of throughput of resources from the environment, [and] this throughput is essential to the system's viability (Scott, 2003). These are social systems that indeed warrant identification of the work performed, based on the resources provided by the surrounding environment. This undercuts the idea that performance management should be based on a fixed target, much as organizational progress is only perpetual when fixed routines and competencies have been abandoned.

To seek resolution, then, of the competing challenges between management's historical predisposition toward a rational system and its desire to emulate open systems thinking, we seek not a replacement for today's routines or tomorrow's stretch goals. We seek instead an entirely different unit of analysis, and a different object of our futurist affection. What we should be promoting instead of leadership alone are communities of actors who get on with things naturally, leadership together with management being an intrinsic part of that (Mintzberg, 2009, p. 9). We seek the ability to move fluidly between the routines which bring us present success, the pursuit of impediments to success elsewhere, and the ability to base our progress on an iterative view of success itself and our progress toward it, thereby managing performance on numerous planes simultaneously. Those planes then include perhaps a normative look at performance via the evaluations we all know and review periodically, the plane of success impediments identified, the working definition of success holistically, and the actions/strategies necessary to balance them all. And is there a process for identifying these actions/strategies? Indeed there is. Positive deviance (PD) is founded on the premise that at least one person in a community, working with the same resources as everyone else, has already licked the problem that confounds others… from the PD perspective, individual difference is regarded as a community resource… community engagement is essential to discovering noteworthy variants in their midst and adapting their practices and strategies (Pascale, Sternin, & Sternin, 2010, p. 3). We can embrace the bestseller lists without reservation and engage in frequency imitation, trait imitation, outcome imitation, or a combination of the three. Conversely, we can seek out these deviants, not for their solutions, but for their methodology for removing impediments in the name of a new successful day, every day.

– Justin

My Philosophy of Teaching

A Call to Action

Exercising the courage to become more purpose-centered, other-focused, internally directed, and externally open results in increased hope and unleashes a variety of other positive emotions (Quinn, 2004). I am a teacher not solely because of anything tangible, nor solely for those inspired moments in each student's day. Rather, I feel that teaching is both a privilege and a responsibility. It is a privilege because I have the opportunity to touch lives, bring new hope to possibly otherwise under-informed futures, and, hopefully and occasionally, inspire someone to be great. Yet I additionally feel teaching is a responsibility each generation has to its successors. As society can be regarded as a construct of social networks, a collection of living systems whose role is long-term sustainability, teachers hold the responsibility of ushering in an informed era for those who follow, such that they have the opportunity to continue the successes of the past and create their own in the process.

Learning as SKILLS

The Self-Knowledge Inventory of Lifelong Learning Strategies (SKILLS) is based upon five aspects of learning essential to the learning process: the constructs of metacognition, metamotivation, memory, resource management, and critical thinking (Conti & Fellenz, 1991). This construct, developed at the Center for Adult Learning Research at Montana State University, creates a paradigm with which to gauge not only the structure and success of a given lesson plan, but the success of each student in terms of his/her own personal level of learning as well. As metacognition regards the ability of learners to reflect upon what has been learned and to make their own learning process more efficient over time, it is my responsibility to ensure each learner has the tools to do so. As metamotivation regards learners' control over their own motivational strategies, it is both my responsibility and privilege to ensure those options exist in the learning environment. As both memory and resource management are stand-alone concepts, I operate with an obligation to ensure the methodologies I employ allow for greater capture and use of memory while allowing for greater resource utilization and management as well. Finally, as critical thinking is a concept not uncommon in the academic environment, I will put defining the term to the side and instead note that critical thinking is what the majority of my teaching strategy relies upon. As critical thinking is what I feel separates the successful from those not experiencing similar success, I feel critical thinking and success are mutually beneficial and directly correlated. Yet, to ensure the greatest level of critical thinking in those I guide, I return to Quinn's words about being purpose-centered and externally open, and use these emphases to ensure each learner operates at his/her highest critical thinking potential.

Sculpting Futures

The workplace, the professions, the leaders and foot soldiers of civic society must all do their part – and that obligation cannot be spurned or postponed or fobbed off on institutions that are incapable of picking up the responsibility (Gardner, 2006). Institutions of higher learning existed long before any referenced work on concepts such as adult learning strategies. Yet very little separates the adult from the adult learner, or either from those of us who instruct. As that responsibility exists to ensure the sustainable future of our society, I feel that taking an analytical approach to learning, as with the SKILLS construct, helps ensure both that privilege and that responsibility are well served. Finally, as a litmus test for whether I have succeeded as a teacher, I look to Wind and Crook's definition of advancement: science sometimes advances not through evolutionary progress in a given framework but through sudden leaps to a new model for viewing the world (Wind & Crook, 2005).

– Justin

Conti, G.J. & Fellenz, R.A. (1991). Assessing adult learning strategies. Bozeman, MT: Montana State University.

Gardner, H. (2006). Five minds for the future. Boston, MA: Harvard Business School Press.

Quinn, R.E. (2004). Building the bridge as you walk on it: A guide for leading change. San Francisco, CA: John Wiley & Sons, Inc.

Wind, Y.J. & Crook, C. (2005). The power of impossible thinking: Transform the business of your life and the life of your business. Upper Saddle River, NJ: Wharton School Publishing.

Do You Lead by Listening?

As Flight 1549 plummeted, the flight attendants chanted in unison to passengers, "Brace, brace, heads down, stay down," preventing many injuries during the rough water landing; a testament to the leadership aboard the 'Miracle on the Hudson', and to the power of repetitive and concrete instruction for provoking action (Sutton, 2010). Examples such as this one are those rare gems which can take an entire tome on leadership and give it a single, palpable schema with which to walk away from the reading and immediately apply its lessons. Yet is leadership entirely about leading? Is being a follower the only, and entirely antithetical, alternative?

Leadership is leading, yes, as the term suggests, but leading is also listening. In a summer article for the Harvard Business Review, Martin (2007) wrote, "Brilliant leaders excel at integrative thinking. They can hold two opposing ideas in their minds at once. Then, rather than settling for choice A or B, they forge an innovative "third way" that contains elements of both but improves on each… Embrace the complexity of conflicting options. And emulate great leaders' decision-making approach – looking beyond obvious considerations" (p. 73). The funny thing is, though, some leaders may hear this and think they must come up with all of those great options and ideas personally. 'They put me in charge because they expect me to have all the answers,' you may say. 'My people can't possibly think I am weak and in need of their help to decide what to do.' I challenge this thinking: your abilities as an individual contributor may be what brought you praise, and possibly even initial consideration for the position you now hold, but they are not why you were selected.

You were selected because you see the opportunities a better-run team can collectively contribute to organizational success, more colloquially known as 'seeing both the forest and the trees'. But where can we turn for a repetitive, concrete example of how to get the best from our people by learning to listen better, rather than relying on our own ideas and personal experiences alone? How about a solution dating back to 1956: Bloom's taxonomy?

Educators have been using it for decades, and anyone who has frequented grad school has been exposed to it, at least minimally, in conversation with fellow scholars. The crux of Bloom's work gives us categorical direction with which to mine meaningful data from our direct reports, by simply channeling the intentions of our listening via the following levels of thinking as described by Anderson and Krathwohl (2001):

Remembering: Retrieving, recognizing, and recalling relevant knowledge from long-term memory.

Understanding: Constructing meaning from oral, written, and graphic messages through interpreting, exemplifying, classifying, summarizing, inferring, comparing, and explaining.

Applying: Carrying out or using a procedure through executing, or implementing.

Analyzing: Breaking material into constituent parts, determining how the parts relate to one another and to an overall structure or purpose through differentiating, organizing, and attributing.

Evaluating: Making judgments based on criteria and standards through checking and critiquing.

Creating: Putting elements together to form a coherent or functional whole; reorganizing elements into a new pattern or structure through generating, planning, or producing.

So try this yourself with your team: apply at least two of these levels in your next meeting and see if the conversation changes shape from what you're used to hearing (or saying).

– Justin

Forehand, M. (2005). Bloom’s taxonomy: Original and revised. In M. Orey (Ed.), Emerging perspectives on learning, teaching, and technology. Retrieved from http://projects.coe.uga.edu/epltt/

Martin, R. (2007). How successful leaders think. Harvard Business Review, 85(6), 60-67.

Sutton, R. I. (2010). Good boss, bad boss: How to be the best – and learn from the worst. New York, NY: Business Plus.