Monday, October 22, 2007

Empowerment Evaluation at the AEA Annual November Conference

Empowerment Evaluation at the 21st American Evaluation Association Annual Conference

Mohammad Hasan Mohaqeq Moein

2007-10-23

Tehran, Iran

The 21st American Evaluation Association annual conference will be held Wednesday, November 7, through Saturday, November 10, 2007, in Baltimore, Maryland, USA. AEA's annual meeting is expected to bring together over 2,500 evaluation practitioners, academics, and students from around the world. The conference is broken down into 38 Topical Strands that examine the field from the vantage point of a particular methodology, context, or issue of interest to the field, as well as the Presidential Strand highlighting this year's Presidential Theme of "Evaluation and Learning."

In the letter of invitation to submit proposals for Evaluation 2007, AEA's president, Hallie Preskill, describes the conference theme: For as long as contemporary forms of evaluation have been around, questions concerning effective practice, relevant theories, impacts and consequences, and use of findings have been posed and debated by practitioners and scholars in the field. Embedded in all of these questions is an implicit assumption that evaluation, in some way, is about learning…learning about a program and its outcomes, learning from an evaluation process, learning how to do evaluation, or learning about evaluation's effect on others. If learning is the act, process, or experience of gaining knowledge or skills, then it is hard to imagine evaluation as anything other than a means for learning.

Increasingly over the years, evaluators have come to acknowledge the learning aspect of evaluation by framing it as evaluation capacity building, evaluation use and influence, evaluation for organizational learning, empowerment evaluation, knowledge management, and within the broad construct of evaluation, social betterment. Inherent in each of these areas of study and practice is the notion that learning is a fundamental process that enhances our individual and collective capacity to create the results we truly desire now and in the future. Consequently, learning from and about evaluation has the potential to generate knowledge for decision-making and action and to move us in the direction of our visions.

The 2007 Presidential Strand theme, "Evaluation and Learning," will provide a focus for us to explore this topic in various ways. For example, the following questions, though not exhaustive, illustrate some of the issues that could be addressed within the context of the conference theme:

  • What does it mean to learn from and about evaluation processes and outcomes?
  • How does evaluation facilitate learning? Conversely, how and when does evaluation hinder learning?
  • How might learning from evaluation be enhanced and sustained?
  • In what ways can evaluation create learning communities of practice?
  • How can differences—individual learning styles, community cultures, ethnic and geographic dimensions—enhance and challenge our learning from and about evaluation?
  • What does learning from evaluation look like in different organizational and community contexts?
  • Who else is involved in the process of learning from evaluation in different contexts and how?
  • How, and in what ways, does learning lead to evaluation?
  • What kinds of evaluation designs and approaches maximize which kinds of learning from and about evaluation?
  • What is the relationship between workplace and adult learning theory and evaluation theory? What other theories help us understand the intersection of learning and evaluation?
  • The AEA Annual Conference is a vibrant and exciting learning community: how do we maximize our own learning in the evaluation profession?

Evaluation 2007 offers nearly 50 Professional Development Workshops: hands-on, interactive sessions that provide an opportunity to learn new skills or hone existing ones. Professional development workshops precede and follow the conference (November 5, 6 & 11). These workshops differ from sessions offered during the conference itself in at least three ways:

1. Each is longer (either 3, 6, or 12 hours in length) and thus provides a more in-depth exploration of a skill or area of knowledge,

2. Presenters are paid for their time and are expected to have significant experience both presenting and in the subject area, and,

3. Attendees pay separately for these workshops and are given the opportunity to evaluate the experience.

The empowerment evaluation approach has a prominent place at this year's conference, as it has at every annual meeting since 1993, when empowerment evaluation itself was the conference theme. In the program I found the following empowerment evaluation sessions: Business Meeting Session 126, Think Tank Session 576, Expert Lecture Session 705, Multipaper Session 799, Multipaper Session 812, Panel Session 867, and one workshop. Some details from the program follow:


Session Title: Collaborative, Participatory and Empowerment Evaluation TIG Business Meeting

Business Meeting Session 126 to be held in Hanover Suite B on Wednesday, November 7, 4:30 PM to 6:00 PM

Sponsored by: the Collaborative, Participatory & Empowerment Evaluation TIG

TIG Leader(s): David Fetterman, Stanford University, profdavidf@yahoo.com, & Liliana Rodriguez-Campos, University of South Florida, lrodriguez@coedu.usf.edu

Session Title: Arkansas Evaluation Center and Empowerment Evaluation: We Invite Your Participation as We Think About How to Build Evaluation Capacity and Facilitate Organizational Learning in Arkansas

Think Tank Session 576 to be held in Carroll Room on Friday, November 9, 11:15 AM to 12:00 PM

Sponsored by: the Collaborative, Participatory & Empowerment Evaluation TIG

Presenter: David Fetterman, Stanford University, profdavidf@yahoo.com

Discussant: Linda Delaney, University of Arkansas, linda2inspire@earthlink.net

Abstract: A new Arkansas Evaluation Center will be housed at the University of Arkansas Pine Bluff. The Center emerged from empowerment evaluation training efforts in a tobacco prevention program (funded by the Minority Initiated Sub-Recipient Grant's Office). The aim of the Center is to help others help themselves through evaluation. The Center is designed to build local evaluation capacity in the state to help improve program development and accountability. The Center will consist of two parts. The first is an academic program, beginning with a certificate program and later offering a master's degree; it will combine face-to-face and distance learning. The second part will focus on professional development, including guest speakers, workshops, conferences, and publications. The Center will be grounded in an empowerment evaluation philosophical orientation and guided by pragmatic mixed-methods training. In addition, it will help evaluators learn how to use new technological and web-based tools.

Session Title: Identifying Critical Processes and Outcomes Across Evaluation Approaches: Empowerment, Practical Participatory, Transformative, and Utilization-focused

Expert Lecture Session 705 to be held in Liberty Ballroom Section B on Saturday, November 10, 9:35 AM to 10:20 AM

Sponsored by: the Theories of Evaluation TIG

Chair: Tanner LeBaron Wallace, University of California, Los Angeles, twallace@ucla.edu

Presenter: Marvin Alkin, University of California, Los Angeles, alkin@gseis.ucla.edu

Discussants:

J Bradley Cousins, University of Ottawa, bcousins@uottawa.ca

David Fetterman, Stanford University, profdavidf@yahoo.com

Donna Mertens, Gallaudet University, donna.mertens@gallaudet.edu

Michael Quinn Patton, Utilization-Focused Evaluation, mqpatton@prodigy.net

Abstract: Inspired by the recent American Journal of Evaluation article by Robin Miller and Rebecca Campbell (2006), this session proposes a set of identifiable processes and outcomes for four particular evaluation approaches: Empowerment, Practical Participatory, Transformative, and Utilization-Focused. The four evaluation theorists responsible for each approach will serve as discussants to critique our proposed set of evaluation principles. This session seeks to answer the following two questions for each approach: What process criteria would identify each specific evaluation approach in practice? And what observed outcomes are necessary in order to make a judgment that the evaluation was "successful" with regard to the particular evaluation approach? Providing answers to these questions through both the presentation and the discussion among the theorists will provide comparative insights into common and distinct elements among the approaches. Our ultimate aim is to advance the discipline of evaluation by increasing conceptual clarity.

Session Title: Using Empowerment Evaluation to Facilitate Organizational Transformation: A Stanford University Medical Center Case Example

Multipaper Session 799 to be held in Hanover Suite B on Saturday, November 10, 12:10 PM to 1:40 PM

Sponsored by: the Collaborative, Participatory & Empowerment Evaluation TIG

Chair: David Fetterman, Stanford University, profdavidf@yahoo.com

Discussant:

Abraham Wandersman, University of South Carolina, wandersman@sc.edu

Abstract: Empowerment evaluation is guiding evaluation efforts throughout the Stanford University Medical Center. Empowerment evaluation is a collaborative approach designed to build evaluative capacity, engaging people in their own self-assessment and learning. The process typically consists of three steps: 1) mission; 2) taking stock; and 3) planning for the future. Strategies are monitored and information is fed back to make mid-course corrections and/or build on successes. The process depends on cycles of reflection and action in an attempt to reduce the gap between theories of action (espoused) and theories of use (observed behavior). The approach relies on critical friends to help facilitate the process. This is an important case example organizationally because the effort represents a rare opportunity to align and build on medical student education, resident training, and the education of fellows. The data generated are used to inform decision making, improve curricular practices, and enhance critical judgment.

Presenters:

  1. Jennifer Berry, Stanford University, jenberry@stanford.edu & David Fetterman, Stanford University, profdavidf@yahoo.com : Using Empowerment Evaluation to Engage Stakeholders and Facilitate Curriculum Reform: When Stanford University School of Medicine undertook a major reform of its curriculum, the School adopted an empowerment evaluation approach to help monitor and facilitate implementation of the new curriculum. Empowerment evaluation is collaborative, engaging faculty, students, and administration in the cyclical process of reflection and action. Empowerment evaluation relies on the theory of process use. Empowerment evaluation theories and tools were used to facilitate organizational transformation at the course level. Our process included: using the School's mission as a guide; taking stock by holding focus groups and developing new survey instruments, including learning climate assessments; and planning for the future by facilitating discussions about evaluation findings with key stakeholders and having the faculty and teaching assistants revise specific courses. We also established a feedback loop to measure the success of reforms and revisions from one year to the next. Case examples highlight measurable evidence of curricular improvement.

  2. Kambria Hooper, Stanford University, khooper@stanford.edu : Organizational Learning Through Empowerment Evaluation: Improving Reflection Skills With a 360 Degree Evaluation : This study explores the impact of a 360-degree empowerment evaluation system in one of Stanford School of Medicine's required classes for Stanford medical students. This evaluation system has three levels of reflection and improvement. The first is the individual member's performance. The second level of reflection and improvement is small group performance. The final level is organizational learning; the course directors and staff reflect on data, looking for group variability or patterns, to create new goals for the course structure or curriculum. Organizational learning is dependent on each member's ability to give and receive constructive, formative feedback. In response to resistance and confusion around the new evaluation system, we developed several interventions to improve the ability of students, faculty and simulated patients to give and receive constructive feedback. This evaluation demonstrates how organizational learning is improved when the organization's members have opportunities to reflect on individual and team performance.

  3. Andrew Nevins, Stanford University, anevins@stanford.edu : Overestimation of Skills in Medical School: The Need to Train Students How to Self-assess : Stanford's School of Medicine used standardized patients (SPs) to help assess medical students' skills. This study focuses on students at the preclinical or course level. Clinical skills were assessed by checklists compiled from a consensus of faculty experts. Students also rated their perception of patient satisfaction on a 1 (low) to 9 (high) scale. SPs completed a matching questionnaire, rating their satisfaction with the student. Student and SP satisfaction ratings were paired and correlated, consistent with empowerment evaluation practices. Overall, students over-rated their performance by 0.75 points. The lowest quintile overestimated performance by 1.57 points, while the highest quintile underestimated performance by 0.003 points (p < 0.01). (A small illustrative sketch of this quintile arithmetic appears after this list.)

  4. David Fetterman, Stanford University, profdavidf@yahoo.com & Jennifer Berry, Stanford University, jenberry@stanford.edu : Empowerment Evaluation: the Power of Dialogue : Empowerment evaluation has three steps: mission, taking stock, and planning for the future. However, the middle step is not always explored in depth. One of the central features of the taking stock step is dialogue. Program participants rate how well they are doing at this step in the process, using a 1 (low) to 10 (high) rating system. They are also required to provide evidence to support their ratings. However, it is the generative dialogue that is most characteristic of this part of the process and critical to authentic learning, at both the community-of-learners and organizational-learning levels. Each participant explains why they gave their rating, using documentation to build a culture of evidence. Three examples of dialogue (and norming) are provided: 1) engaged scholarly concentration directors; 2) faculty, administrators, and students grappling with curricular problems; and 3) committed clerkship directors guiding student learning in hospitals.

  5. Heather A Davidson, Stanford University, hads@stanford.edu : Using Principles of Empowerment Evaluation to Build Capacity for Institutional Learning: A Pilot Project at Stanford Hospital : Residency education is rapidly changing from an apprentice-based to a competency-based model, where performance outcomes must guide individual feedback and continuous program improvement to meet new accreditation standards. This change represents a cultural shift for teaching hospitals and a management shift that must support systems of assessment. Many faculty members do not have the tools needed to design and implement these goals. Since institutional accreditation requires that all residency programs undergo a peer-led internal review process, Stanford Hospital has created a new protocol to build evaluation capacity. Utilizing principles of empowerment evaluation, the pilot project formalizes the feedback loops needed at both the program and institutional levels. By combining performance benchmark and portfolio techniques with a mock accreditation site visit, the new protocol provides a more comprehensive assessment of overall program needs, evidence of program quality across the institution, and support for a learning culture where faculty share educational initiatives.

  6. Alice Edler, Stanford University, edlera@aol.com : Empowerment Evaluation: A Catalyst for Culture Change in Post Graduate Medical Education : Pediatric anesthesia requires special skills for interacting with small patients that are not required in general anesthesia training. Empowerment evaluation was used to assess these behaviors in a Stanford pediatric anesthesia fellowship. Trainee, faculty, and aggregate data revealed the need for more clinical decision-making opportunities in the fellowship; clinical judgment ranked the lowest. The role of administrative chief fellow emerged from the self-assessment. It allowed for more opportunities for decision making in day-to-day schedules, curriculum, and disciplinary decisions. This position was rotated among all the fellows. Individual and group improvements were evidenced. Fellows assumed responsibility for creating new rotations and revising their schedules based on perceived curriculum needs. Faculty evaluations significantly increased on the clinical judgment item (see table 1). Information from the empowerment evaluation has allowed fellows to model self-determination, form a more cohesive group, and gain opportunities for high-stakes clinical decision-making.
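
To make the quintile comparison in the Nevins abstract (item 3 above) concrete, here is a small Python sketch. Every number and name in it is synthetic and hypothetical, not the study's data; it only illustrates the arithmetic of pairing self-ratings with external ratings and averaging the gap within quintiles:

```python
# Illustrative only: all ratings here are synthetic, not the study's data.
# Students and standardized patients (SPs) each rate satisfaction on a
# 1 (low) to 9 (high) scale; we pair the ratings and summarize mean
# over-/under-estimation by quintile of the SP (external) rating.

import random
import statistics

random.seed(7)

def clamp(x):
    """Clamp a rating to the 1-9 scale."""
    return min(9, max(1, round(x)))

# Hypothetical paired ratings: (student self-rating, SP rating).
pairs = [(clamp(random.gauss(6.5, 1.2)), clamp(random.gauss(5.8, 1.5)))
         for _ in range(200)]

# Sort by the SP rating and split into five performance quintiles.
pairs.sort(key=lambda p: p[1])
size = len(pairs) // 5

for q in range(5):
    chunk = pairs[q * size:(q + 1) * size]
    # Positive bias = student rated themselves above the SP's rating.
    bias = statistics.mean(s - sp for s, sp in chunk)
    print(f"Quintile {q + 1} (lowest-rated first): mean bias {bias:+.2f}")

print(f"Overall mean bias: {statistics.mean(s - sp for s, sp in pairs):+.2f}")
```

On synthetic data like this, the bias tends to shrink from the lowest to the highest quintile, which is the same qualitative pattern the abstract reports.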

Session Title: Empowerment Evaluation Communities of Learners: From Rural Spain to the Arkansas Delta

Multipaper Session 812 to be held in Carroll Room on Saturday, November 10, 1:50 PM to 3:20 PM

Sponsored by: the Collaborative, Participatory & Empowerment Evaluation TIG

Chair: David Fetterman, Stanford University, profdavidf@yahoo.com

Discussant:

Stewart I Donaldson, Claremont Graduate University, stewart.donaldson@cgu.edu

Abstract: Empowerment evaluation is the use of evaluation concepts, techniques, and findings to foster improvement and self-determination. It employs both qualitative and quantitative methodologies. It also knows no national boundaries: it is being applied in countries ranging from Brazil to Japan, as well as Mexico, the United Kingdom, Finland, New Zealand, Spain, and the United States. The panel members highlight how empowerment evaluation is being used in rural Spain and the Arkansas Delta. In both cases, the efforts depend on communities of learners to facilitate the process. The third member of the panel highlights a web-based tool to support empowerment evaluation that crosses all geographic boundaries.

Presenters:

  1. Jose Maria Diaz Puente, Polytechnic University, Madrid, jmdiazpuente@gmail.com : Learning From Empowerment Evaluation in Rural Spain: Implications for the European Union : At present, thousands of evaluation studies are carried out each year in the European Union to analyze the efficacy of European policies and to seek the best way to improve the programs being implemented. Many of these studies concern programs implemented in rural areas, which occupy up to 80% of the EU's territory and include many of its most disadvantaged regions. The results of applying empowerment evaluation in rural areas of Spain show that this approach is an appropriate way to foster learning in the rural context. The learning experience involved capacity building among stakeholders and the evaluation team, the evaluator's role and advocacy, the impact of the empowerment evaluation approach, and its potential limitations, difficulties, and applicability to rural development in the EU.

  2. Linda Delaney, Fetterman and Assoc, linda2inspire@earthlink.net & David Fetterman, Stanford University, profdavidf@yahoo.com : Empowerment Evaluation: Transforming Data Into Dollars and the Politics of Community Support in Arkansas Tobacco Prevention Projects : Empowerment evaluation is being used to facilitate tobacco prevention work in the State of Arkansas. The University of Arkansas's Department of Education is guiding this effort under the Minority Initiated Sub-Recipient Grant's Office. Teams of community agencies are working together with individual evaluators throughout the state to collect tobacco prevention data and turn it into meaningful results in their communities. They are also using the data collectively to demonstrate how a collective can be effective. The grantees and evaluators are collecting data about the number of people who quit smoking and translating that into dollars saved in terms of excess medical expenses. This has caught the attention of the Black Caucus and the legislature. Lessons learned about transforming data and the politics of community support are shared.

  3. Abraham Wandersman, University of South Carolina, wandersman@sc.edu : Empowerment Evaluation and the Web: Interactive Getting to Outcomes (iGTO) : iGTO is an Internet-based approach to Getting to Outcomes called Interactive Getting to Outcomes. It is a capacity-building system, funded by NIAAA, that is designed to help practitioners reach results using science and best practices. Getting to Outcomes (GTO) is a ten-step approach to results-based accountability. The ten steps are as follows: Needs/Resources, Goals, Best Practices, Fit, Capacity, Planning, Implementation, Outcomes, CQI, and Sustainability. iGTO plays the role of quality improvement/quality assurance in a system that has tools, training, technical assistance, and quality improvement/quality assurance. With iGTO, organizations use empowerment evaluation approaches to assess process and outcomes and promote continuous quality improvement. Wandersman et al. highlight the use of iGTO in two large state grants to demonstrate the utility of this new tool. (A toy sketch of the ten-step sequence follows this list.)
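
As a concrete aside, the ten-step GTO sequence lends itself to a tiny illustration. The Python sketch below is entirely hypothetical and assumes nothing about the real iGTO software; it simply encodes the ten steps listed in the abstract and prints a basic progress report for an invented grantee:

```python
# Minimal sketch with hypothetical names; it assumes nothing about the real
# iGTO web application. It encodes the ten published GTO steps and prints
# a simple accountability progress report.

GTO_STEPS = [
    "Needs/Resources", "Goals", "Best Practices", "Fit", "Capacity",
    "Planning", "Implementation", "Outcomes", "CQI", "Sustainability",
]

def progress_report(completed):
    """Return one line per GTO step, in order, marked done or pending."""
    lines = []
    for i, step in enumerate(GTO_STEPS, start=1):
        status = "done" if step in completed else "pending"
        lines.append(f"{i:2d}. {step:<16} [{status}]")
    return "\n".join(lines)

# Hypothetical grantee partway through the accountability cycle.
print(progress_report({"Needs/Resources", "Goals", "Best Practices", "Fit"}))
```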

Session Title: Building and Assessing Capacity for Evaluation: Creating Communities of Learners Among Service Providers

Panel Session 867 to be held in Hanover Suite B on Saturday, November 10, 3:30 PM to 5:00 PM

Sponsored by: the Collaborative, Participatory & Empowerment Evaluation TIG

Chair: Tina Taylor-Ritzler, University of Illinois, Chicago, tritzler@uic.edu

Discussant:

David Fetterman, Stanford University, profdavidf@yahoo.com

Abstract: Community-based organizations are currently experiencing pressure to learn about evaluation and conduct their own evaluations. Some are able to meet these demands through partnerships with academic institutions designed to build capacity for evaluation utilizing empowerment and participatory approaches. Although there is literature available on evaluation capacity building, much is needed in terms of understanding its conceptualization, the process of building it, and how to measure it. In this session, several researchers will present their work with a variety of community-based organizations in creating capacity for evaluation. First, we will present an ecological, contextual, and interactive framework of evaluation capacity building that integrates models from the evaluation literature. Second, we will describe methods and strategies for measuring capacity building, and we will discuss how to create learning communities with agency staff. Third, we will provide exemplars and discuss challenges encountered when doing this work and implications for the field of evaluation. Fourth, we will discuss the evaluation capacity building (ECB) strategies used and how they have been evaluated, based on a review of research on ECB. Finally, we will hear commentaries from a prominent researcher in the field: David Fetterman.

Presenters:

  1. Yolanda Suarez-Balcazar, University of Illinois, Chicago, ysuarez@uic.edu : Building Capacity for Evaluation Among Service Providers: Conceptual Framework and Exemplar : Based on the work we have been doing at the Center for Capacity Building for Minorities with Disabilities Research, we propose a contextual framework of capacity for evaluation. A contextual and interactive model suggests a dynamic interplay between person factors and organizational factors. The person or group factors are exemplified by agency staff and/or program implementers, while organizational factors refer to the organizational policies, organizational culture, and support systems that create an environment that facilitates capacity for evaluation. The framework assumes interplay between personal and organizational factors. As such, a CBO staff member may be willing and ready to learn how to evaluate a program he or she implements but lack organizational support, in the form of allocated time and resources, to do so. Capacity for evaluation can be created and facilitated at the individual level. Here, we are referring to the staff members who implement agency programs, are in direct contact with participants, and are experiencing tremendous pressure to document what they do and to produce tangible outcomes. We will discuss individual factors such as personal readiness, level of competence and experience, and individual leadership. The environment, policies, procedures, and culture of the organization may be more or less facilitative of building capacity for evaluation in individual staff and the organization as a whole. The presenters will also discuss several factors at the organizational level that can facilitate the process of building capacity, including organizational readiness, organizational resources and support allocated to evaluation, organizational leadership, organizational culture, organizational capacity to mainstream evaluation practices, organizational capacity to utilize findings and develop practices that sustain evaluation capacity, and organizational capacity to seek funding for their programs. We will also discuss implications for the art and the science of evaluation.

  2. Tina Taylor-Ritzler, University of Illinois, Chicago, tritzler@uic.edu : Measuring Evaluation Capacity: Methodologies and Instruments : Although there is a large literature on evaluation capacity building, it lacks specificity on issues of measurement and assessment of evaluation capacity. Most studies have looked only at the evaluation products agencies generate (reports to funders) and satisfaction with training. We will present our multiple-method system for assessing and measuring evaluation capacity. In this session, we will present the work being conducted nationally by the Center for Capacity Building on Minorities with Disabilities Research. We will describe in detail the instruments and procedures we use and the challenges we encounter when measuring evaluation capacity building with organizations serving ethnic minorities with disabilities. We will share data drawn from multiple case study examples. Finally, our discussants will share their perspective on the contribution of our work to scholarship on evaluation capacity building.

  3. Rita O'Sullivan, University of North Carolina, Chapel Hill, ritao@email.unc.edu : Using Collaborative Evaluation as a Strategy for Evaluation Capacity Building: First 5 Los Angeles' Quality Care Initiative : Collaborative evaluation (O'Sullivan, 2004) is an approach to evaluation that results in enhanced evaluation capacity among key stakeholders. Evaluation Assessment and Policy Connections (EvAP) at the University of North Carolina at Chapel Hill worked collaboratively with First 5 Los Angeles staff and its 53 childcare grantees to design a 30-month evaluation that would provide process, outcome, and policy information about the initiative. The evaluation activities also addressed enhancing the evaluation capacity of First 5 grantees, staff, partners, and Commissioners. Collaborative evaluation engages key program stakeholders actively in the evaluation process. Unlike distanced evaluation, where evaluators have little or no contact with program staff, collaborative evaluation deliberately seeks involvement from all program stakeholders during all stages of the evaluation. A collaborative stance can strengthen evaluation results and increase utilization of evaluation findings. Additionally, programs participating in collaborative evaluations develop an enhanced capacity to consume and conduct evaluations, while evaluators gain a better understanding of the program. The collaborative evaluation approach assumes that evaluation expertise within programs is developmental; thus, the degree of collaboration must vary with the nature and readiness of the program. Evaluations completed with this collaborative approach have yielded improved evaluation capacity, as measured by data quality, report writing, and evaluation use, with programs in the areas of education, social services, and health; the presenter has also found that collaborative evaluation may increase the resources available to the evaluation. This presentation will report how the evaluation contributed to the capacity building of the 53 grantees, the majority of which were community-based organizations.

  4. Jennifer Duffy, University of South Carolina, jenduffy@sc.edu : A Review of Research on Evaluation Capacity Building Strategies : The growing literature on evaluation capacity building is one resource for learning more about what evaluation capacity building looks like in the field and what evidence there is for the success of these strategies. We will present findings from a review of empirical research on evaluation capacity building. The strategies for building evaluation capacity that are identified in this research will be described, and the methods used to evaluate these strategies will be discussed. We will highlight the evidence for successful strategies and the limitations of the existing research. Questions for future research will be identified, with a focus on identifying successful strategies for building evaluation capacity.

Workshop number 42: Empowerment Evaluation

Empowerment Evaluation builds program capacity and fosters program improvement. It teaches people to help themselves by learning how to evaluate their own programs. The basic steps of empowerment evaluation include: 1) establishing a mission or unifying purpose for a group or program; 2) taking stock - creating a baseline to measure future growth and improvement; and 3) planning for the future - establishing goals and strategies to achieve goals, as well as credible evidence to monitor change. The role of the evaluator is that of coach or facilitator in an empowerment evaluation, since the group is in charge of the evaluation itself.
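
As a concrete aside (not part of the workshop description itself), here is a minimal Python sketch of the "taking stock" step; all activity names and ratings are invented for illustration. Each participant rates program activities from 1 (low) to 10 (high), and the averages form the baseline that the "planning for the future" step builds on:

```python
# A minimal sketch of "taking stock", with invented activity names and
# ratings: participants rate program activities from 1 (low) to 10 (high),
# and the averaged matrix becomes the baseline against which future cycles
# of reflection and action are compared.

activities = ["Communication", "Teaching", "Funding"]

# participant -> {activity: rating}; purely illustrative numbers.
ratings = {
    "Participant A": {"Communication": 3, "Teaching": 6, "Funding": 2},
    "Participant B": {"Communication": 5, "Teaching": 7, "Funding": 4},
    "Participant C": {"Communication": 4, "Teaching": 5, "Funding": 3},
}

baseline = {
    activity: sum(person[activity] for person in ratings.values()) / len(ratings)
    for activity in activities
}

# Low-scoring activities become priorities for the planning step.
for activity, score in sorted(baseline.items(), key=lambda kv: kv[1]):
    print(f"{activity}: baseline {score:.1f}/10")
```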

Employing lecture, activities, demonstration and case examples ranging from townships in South Africa to a $15 million Hewlett-Packard Digital Village project, the workshop will introduce you to the steps of empowerment evaluation and tools to facilitate the approach.

You will learn:

o How to plan and conduct an empowerment evaluation,

o Ways to employ new technologies as part of empowerment evaluation, including use of digital photography, QuickTime video, online surveys, and web-based telephone/videoconferencing,

o The dynamics of process use, theories of action, and theories of use.

David Fetterman hails from Stanford University and is the editor of (and a contributor to) the recently published Empowerment Evaluation Principles in Practice (Guilford). He chairs the Collaborative, Participatory and Empowerment Evaluation AEA Topical Interest Group and is a highly experienced and sought-after facilitator.

Session 42: Empowerment Evaluation


Scheduled: Wednesday, November 7, 12:00 PM to 3:00 PM


Level: Beginner, no prerequisites

1 comment:

Dr. David Fetterman said...

Many thanks for sharing the news of our work. We are looking forward to productive exchange at the conference and invite all contributions as we learn together. Best wishes.

- David