Friday, July 28, 2017

Dan Stufflebeam has passed away

Daniel Leroy Stufflebeam, PhD, age 80, of Kalamazoo, Michigan, passed away on Sunday, July 23, 2017. What sad news!
We cannot speak about evaluation without speaking about Stufflebeam!
Condolences to his family and to our field.
I remember this work:
Neither too narrow nor too broad
Mohaqeqmoein,M.H.& Fetterman, D.M.
July 2008
To be useful, a definition should be neither too narrow nor too broad, and like many disciplines, evaluation suffers from both (Coryn, 2007, 31). Stufflebeam and Shinkfield's taxonomy suffers from being at the "too narrow" and restrictive end of the evaluation-definition spectrum, excluding high-quality and useful evaluation work. In their book "Evaluation Theory, Models, and Applications," Stufflebeam and Shinkfield present a new taxonomy of evaluation approaches.
more info:

Sunday, August 08, 2010

Summative Evaluation in Last Tenure Day

Summative Evaluation in Last Tenure Day

Mohamad Hasan Mohaqeq Moein

August 9, 2010

On the last day of his tenure at OMB, Jeffrey Zients, OMB's Acting Director, posted a note on the OMB blog on August 02, 2010 at 10:53 AM EDT under the title "Discovering What Works." In this final post, he refers to a document that you can read in full at:

For better understanding, I have split this final document into nine important sections, numbered below:

1. OMB's latest memorandum, "Evaluating Programs for Efficacy and Cost-Efficiency," was signed by OMB Director Peter Orszag on the last day of his tenure.

2. OMB issued new guidance to Federal agencies about conducting high-priority evaluations.

3. Jeffrey Zients served as the Chief Performance Officer of the US Government, and what he knows is that determining which programs work and which do not is critical to discovering whether government operations are doing what they are supposed to do in a cost-efficient manner.

4. Yet too many important programs have never been formally evaluated.

5. The results of evaluated programs have not been fully taken into account in the decision-making process, at the level of either budgetary decisions or management practices.

6. For an organization as large as the Federal Government, with as many priorities and obligations as it has, the fact that we have rarely evaluated multiple approaches to the same problem makes it difficult to be confident that taxpayers’ dollars are being spent effectively and efficiently.

7. Running rigorous evaluations takes money, but investments in rigorous evaluations are a drop in the bucket relative to the dollars at risk of being poorly spent when we fail to learn what works and what doesn’t.

8. OMB is allocating a small amount of funding for agencies that voluntarily demonstrate how their FY 2012 funding priorities are subjected to rigorous evaluation, with an emphasis on evaluations aimed at determining the causal effects of programs or particular strategies, interventions, and activities within programs.

9. Finding out if a program works is common sense, and the basis upon which we can decide which programs should continue and which need to be fixed or terminated. This guidance will help us do just that.

Thinking deeply about point 9 above, and combining points 3, 4, 5, and 9, I concluded that American organizations, American management, and American evaluation are all at risk! From this point I want to jump to the history of evaluation, and so I want to dig further into this problematic issue.

I think managers and experts at OMB, like Jeffrey Zients, know the problematic position of these issues, but what are their solutions? And can these solutions actually improve American management, organizations, and evaluation?

OMB's newest response to these issues is presented in numbers 1 and 2 above; for better communication I have labeled it the USA National Evaluation Policy. This policy for Fiscal Year 2012, issued on July 29, 2010, was titled "Evaluating Programs for Efficacy and Cost-Efficiency." Its twin, the OMB guidance for Fiscal Year 2011 issued on October 7, 2009, was titled "Increased Emphasis on Program Evaluations," and you can read it in full at:

The main OMB solutions to these issues, fully supported and endorsed in the two USA National Evaluation Policies, are introduced in Jeffrey Zients's sentences in numbers 6, 7, 8, and 9 above.

What is fully ignored in both USA National (Federal) Evaluation Policies is the organized social capital found in American expert civil associations such as the American Evaluation Association. I want to say it clearly: why does OMB ignore the positions of professional evaluation associations?

The main difference between the two national documents, and a strongly observable one, is the more centralized and closed manner of the later National Evaluation Policy compared with the prior document. For example, in the USA National Evaluation Policy for Fiscal Year 2011 we have a new inter-agency working group that, together with the Domestic Policy Council, the National Economic Council, and the Council of Economic Advisers, is to promote stronger evaluation across the Federal government. But in the USA National Evaluation Policy for Fiscal Year 2012 we have OMB's Resource Management Offices (RMOs) coordinating and improving the design, implementation, and utilization of evaluations, with many important determining functions.

The USA National Evaluation Policy for Fiscal Year 2011 has more open-government functions than the later document! In it we read more about the public availability of information on Federal evaluations, including online information about existing evaluations: OMB will work with agencies to make information readily available online about all Federal evaluations focused on program impacts that are planned or already underway. OMB will work with agencies to expand the information about program evaluations that they make public. The goal is to make researchers, policymakers, and the general public aware of studies planned or underway that (1) examine whether a program is achieving its intended outcomes; or (2) study alternative approaches for achieving outcomes to determine which strategies are most effective.

OMB will issue a budget data request regarding the public availability of program evaluation information. As necessary, we will work with agencies to determine how best to make more information available online. Public awareness will promote two objectives.

First, it will allow experts inside and outside the government to engage early in the development of program evaluations. In particular, OMB welcomes input on the best strategies for achieving wide consultation in the development of evaluation designs.
Second, public awareness will promote transparency, since agency program evaluations will be made public regardless of the results. This function is analogous to that of the HHS clinical trial registry and results data bank (

But in the USA National Evaluation Policy for Fiscal Year 2012 we have only one paragraph on the open-government subject, quoted below:

On-line information about existing evaluations: OMB is working with agencies to make information readily available online about all Federal evaluations focused on program impacts that are planned, already underway, or recently completed. A Budget Data Request (BDR) is being issued concurrently to this memo to assist in the completion of this request.

Another difference between the two documents concerns the evaluation plan, which is stressed only in the USA National Evaluation Policy for Fiscal Year 2012. This emphasis appears in the passages below:

Agencies should provide OMB with a plan demonstrating how evaluation resources, funded through base programs and new proposals, will be allocated to address budget priorities. The content and format for this information should be developed in consultation with the RMO.

For each evaluation study for which they are seeking additional funding and every proposed evaluation study costing $1 million or more (RMOs have the discretion to adopt lower thresholds) that is part of their base funding request.

But why and how was this organized social capital ignored?

On February 3, 2009, Debra Rog (President), William Trochim (Immediate Past President), and Leslie Cooksy (President Elect), on behalf of the American Evaluation Association, wrote a letter to Peter Orszag, Director of the Office of Management and Budget, and probably delivered it to him in person at a meeting.

In this letter, which includes a 13-page attachment prepared by the AEA Evaluation Policy Task Force in February 2009 and published under the title
"An Evaluation Roadmap for a More Effective Government" at: , this organized professional capital of evaluation leaders wrote:

“We are writing to propose for your consideration a major initiative to improve oversight and accountability of Federal programs by systematically embracing program evaluation as an essential function of government. In the attachment we describe how evaluation can be used to improve the effectiveness and efficiency of Federal programs, assess which programs are working and which are not, and provide critical information needed for making difficult decisions about them. We provide a roadmap for improving government through evaluation, outlining steps to strengthen the practice of evaluation throughout the life cycle of programs.
We understand how complex and demanding is the work before you. We hope our suggestions will be useful to you and we stand ready to assist you on matters of program evaluation.”

I labeled the two OMB documents as a National Evaluation Policy, but the Evaluation Policy Task Force has its own definition of evaluation policy. According to the task force, "The term 'evaluation policy' encompasses a wide range of potential topics that include (but are not limited to): when systematic evaluation gets employed, and on what programs, policies and practices; how evaluators are identified and selected; the relationship of evaluators to what is being evaluated; the timing, planning, budgeting and funding, contracting, implementation, methods and approaches, reporting, use and dissemination of evaluations; and, the relationship of evaluation policies to existing or prospective professional standards." We could compare the contents of these documents against this definition and see where the matters stand.

I labeled the document “An Evaluation Roadmap for a More Effective Government” as a "potential USA National Evaluation Policy" from the AEA's perspective. In this potential National Evaluation Policy we can read:

In the past eight years, the Office of Management and Budget attempted to institute consistent evaluation requirements across all Federal programs through its Program Assessment Rating Tool (PART) program. However, that effort, while a step in the right direction,
1. was not adequately resourced,
2. was inconsistently applied,
3. was too narrow relative to the options that are suitable across the life cycle and circumstances of different programs, and
4. did not provide the high-quality training and support for agencies to accomplish evaluation effectively.
While significant advances in the use of evaluation have occurred in the Federal Government since the 1960s, the commitment needed to consistently ensure that decisions are informed by evaluation has not yet been made.

The Obama administration has a unique opportunity to advance its broad policy agenda by integrating program evaluation as a central component of Federal program management.

The time is especially right for such a bold move. The breadth and seriousness of the challenges we face provide a political climate that could support a commitment to a major advance in Federal program management. The lessons that have been learned in those agencies that have experience in applying evaluation constitute a solid knowledge base upon which we can build.
And, the field of evaluation has evolved to a point where it is more capable than ever before to support a significant expansion in the scope of Federal efforts.
The new administration would benefit significantly by using program evaluation to

• address questions about current and emerging problems
• reduce waste and enhance efficiency
• increase accountability and transparency
• monitor program performance
• improve programs and policies in a systematic manner
• support major decisions about program reform, expansion, or termination
• assess whether existing programs are still needed or effective
• identify program implementation and outcome failures
• inform the development of new programs where needed
• share information about effective practices across government programs and agencies

The key is to make program evaluation integral to managing government programs at all stages, from planning and initial development through start up, ongoing implementation, appropriations, and reauthorization. In short, what is needed is a transformation of the Federal management culture to one that incorporates evaluation as an essential management function.

Nineteen months after the AEA delivered its proposal to the OMB director and structure, everyone can see the result of the lessons learned in Jeffrey Zients's nine statements above, and we can predict the future of American evaluation, organization, and management in the twin USA National Evaluation Policies for Fiscal Years 2011 and 2012.
I think the problematic issue at the heart of this note belongs not only to past and current experience; we will encounter it again in the years to come! Why? The answer is simple and virulent: we do not learn from failures.
The "potential USA National Evaluation Policy" from the AEA's perspective refers to the history of these failures. In it we can read:

Current Status of GPRA and PART

The most significant evaluation-related initiatives of the last 15 years have been the enactment of the Government Performance and Results Act of 1993 (GPRA) and, more recently during the George W. Bush administration, OMB’s Program Assessment Rating Tool (PART).
Generally, GPRA encourages a strategic, agency wide, mission view and also focuses on whether government programs achieve results in terms of the goals and objectives for which they are established. Evaluation was defined in GPRA as addressing the "manner and extent to which" agencies achieve their goals, thus addressing both implementation and results. In practice, it has been implemented in a way that emphasizes the use of performance indicators and measurement to see whether a goal has been reached or not, with less attention being paid to evaluation studies that might shed light on the role the program played in reaching the goal, on why programs do or do not meet their goals and objectives, and on how programs might be improved. As a result, there is less information through this process that can guide programmatic or policy action.
PART focuses on programs’ effectiveness and efficiency, especially on their impact. It draws on GPRA in terms of its analysis of whether programs meet their performance goals. However, it recognizes that some programs can meet their goals and still fail to have meaningful impact because of shortcomings in their designs or their goals. It attempts to assess whether programs are free from design flaws that prevent them from being effective. It introduces evaluation, and even calls for a body of independent evaluations for programs. For the most part, however, it emphasizes the use of evaluation as a way to determine program impact. While possibly not intended, it has had the effect of over-advocating for one particular type of impact evaluation, namely, randomized controlled trials, as a “gold standard” for the measurement of program success. This has tended to limit its ability to recognize success in programs for which randomized controlled trials are not a suitable method for assessing effectiveness or improving performance.
Some distrust PART results because they believe the goals and objectives upon which its analyses are based may be driven by political ideologies. In particular, Congress has distanced itself from PART. Some have noted that PART excludes policies like tax expenditures and focuses on discrete programs when multiple activities that cut across agency boundaries may contribute to achievement of goals.
OMB has moved to address some of these perceived shortcomings by initiating a pilot test of alternative impact assessment methodologies. In addition, in November 2007 President Bush signed an Executive Order on Improving Government Program Performance. It creates the position of Performance Improvement Officer in each Federal agency and establishes a government wide Performance Improvement Council under the direction of OMB to more systematically promote performance assessments of programs.
GPRA and PART have made the use of performance measurement and management a staple of government program management. But they fall considerably short of what is needed to address the problems our country faces.

Going Beyond GPRA and PART

What we are proposing is a vision for transforming the view of what agency heads and the Congress can do to benefit from program evaluation and systematic analysis both to improve the design, implementation, and effectiveness of programs and to assess what works and what doesn’t, and why. This vision is a comprehensive one that recognizes that evaluation is more than simply “looking in the rearview mirror” and that it needs to be utilized earlier in the life of a program, as an integral part of managing government programs at all stages, from initial development through start up, ongoing implementation, appropriations, and reauthorization.


OK! I must draw my conclusion from this long note. When I think deeply about these matters, I admire Professor Michael Scriven all the more for his debate on rethinking all of evaluation! He has proposed a workshop for the summer institute at Claremont Graduate University, where he will suggest it is time to reconceptualize evaluation from the ground up. I wrote an article titled "THE FIFTH GENERATION EVALUATION: CAPACITY BUILDING: Revising Evaluation Roles in Organizational Learning and Knowledge Management"; you can read my viewpoint on this conclusion in more detail at:

Jane Davidson, on 8/2/10 in the Genuine Evaluation blog at: , wrote:

I think one of the key reasons why we see little success in building evaluative and reflective thinking in organizations is because too often it is seen as a ‘people-change’ challenge (targeting individuals, and blaming them when it doesn’t happen) rather than a whole-organization culture change challenge (i.e. a leadership responsibility to create and build energy behind a genuine transformation).
From the literature on organizational culture change, here are ten tips drawn from a keynote I did earlier this year on how to build a learning organization:

1. Get top management commitment to learning from failure – and make it highly visible (walk, not just talk!)
2. Identify and work with ‘evaluation evangelists’ – influential people who will help lead the change
3. Communicate the ‘evaluation imperative’ – explain clearly why this is a powerful and exciting new way of doing things
4. Train/coach people in evaluation skills, knowledge & know-how; develop tools together
5. Provide diverse exemplars of great evaluative inquiry, reflection & use – people need to know what good evaluation and learning looks like in practice
6. Model the importance of external criticism (especially for senior management)
7. Develop and empower (and/or, hire in) a critical mass of people with the “evaluative attitude”
8. Listen to skeptics & cynics; allow powerful change blockers the chance to move aside
9. Recognize and reward changed behaviors & mindset
10. Highlight & celebrate good examples of ‘learning from disappointing results’ – and using those learnings to make timely improvements

Tuesday, January 05, 2010


Revising Evaluation Roles in Organizational Learning and Knowledge Management

Mohammed Hasan Mohaqeq Moein

Faculty member of Imam Khomeini Higher Education Center


How does practice improve, and how can evaluation contribute to this improvement? Evaluation can have a significant share in organizational development and knowledge management. In both management and science we grapple with the evaluation process. The art and science of evaluation can take many roles in the growth of knowledge management in organizations. Evaluation approaches and techniques can function in the production, empowerment, assessment, and documentation of individual and organizational experience in society. In this article I develop some questions and answers in the field, mostly intended to advance knowledge management through systematic evaluation influences in programs and organizations. The dynamism of individuals and organizations is a phenomenon that depends on their social engagements. Dynamism is a learning outcome. Individuals and organizations can learn from evaluation, and as agreement and certainty increase, they can change their behaviors. Evaluation capacity building is an outcome of the interrelated and interdependent elements of individual evaluation capacity and sustainable evaluation practice, set in the context and surroundings of culture, for example an organization's infrastructure: culture, leadership, communication, systems, and structures. In the historical typology of evaluation, great evaluation researchers and developers have discussed four generations of evaluation: measurement, description, judgment, and interpretive. I think the evaluation field and profession are near another generation of evaluation: THE FIFTH GENERATION EVALUATION: CAPACITY BUILDING.

Key words: Evaluation Capacity Building, Fifth Generation evaluation, Learning, individuals Evaluation Capacity, Sustainable Evaluation Practice, Organization's Infrastructure.

This article has been ACCEPTED by the scientific committee of the 2nd Iranian Knowledge Management Conference for poster presentation. The 2nd Iranian Knowledge Management Conference gathers top researchers and practitioners from all over the world to exchange their findings and achievements on knowledge management. The conference will feature invited keynote presentations, panels on topical issues, refereed paper presentations, and workshops on new areas of knowledge management.

This event will take place on Jan. 30-31, 2010 at the Razi Intl. Conference Center, Tehran, Iran. For more information go to:

Tuesday, July 08, 2008

Neither too narrow nor too broad

Offered in pleasant partnership to the 1ST ARKANSAS SUMMER EVALUATION WORKSHOP

Neither too narrow nor too broad

Mohaqeqmoein,M.H.& Fetterman, D.M.

July 2008

To be useful, a definition should be neither too narrow nor too broad, and like many disciplines, evaluation suffers from both (Coryn, 2007, 31). Stufflebeam and Shinkfield's taxonomy suffers from being at the "too narrow" and restrictive end of the evaluation-definition spectrum, excluding high-quality and useful evaluation work. In their book "Evaluation Theory, Models, and Applications," Stufflebeam and Shinkfield present a new taxonomy of evaluation approaches.

1. Pseudo-evaluations (approaches 1 to 5), which include the empowerment evaluation approach
2. Questions- and Methods-Oriented Evaluation Approaches (Quasi-Evaluation Studies) (approaches 6 to 19)
3. Improvement- and Accountability-Oriented Evaluation Approaches (approaches 20 to 22), which include the CIPP model
4. Social Agenda and Advocacy Approaches (approaches 23 to 25)
5. Eclectic Evaluation Approaches (approach 26), which include Utilization-Focused Evaluation

Putting the intellectual mischief and the attempt to incite or provoke a reaction aside, the categories as constructed are telling - but more about the authors than the approaches. An overriding question associated with each of these evaluation approaches is: Who is in control? Who is the evaluation for? Why is the evaluation being conducted? Without answering these questions it is impossible to categorize them accurately or meaningfully. A quick review of a few paragraphs in the book reveals the "soft underbelly" of their logic and the flaws in their thinking.

On page 154 we can read:

When an external evaluator's efforts to empower a group to conduct its own evaluations are advanced as external or independent evaluations, they fit our label of empowerment under the guise of evaluation.

Already we see two flaws in their thinking. First, there is no claim about empowering anyone in empowerment evaluation. Empowerment evaluations cannot and do not attempt to empower anyone. People empower themselves. Empowerment evaluators create an environment for people to empower themselves (Fetterman and Wandersman, 2005).

Second, no one claims that empowerment evaluation is an external or independent evaluation. It is explicitly an internal form of evaluation designed to foster self-determination and improvement (and cultivate internal forms of accountability - the kind that lasts long after the formal external evaluation disappears and the authorities shift their attention to other matters).

Such applications give the evaluees the power to write or edit the interim or final reports while claiming or giving the illusion that an independent evaluator prepared and delivered the reports or at least endorsed internal evaluation reports….

Once again we see a straw person argument. No one is making claims of this nature except the authors. The power of empowerment evaluation is process use. The more that people take an active role in conducting their own evaluations the more likely they are to: 1) find the findings credible; and 2) accept and implement the recommendations (because they are theirs). Writing and/or editing is part of cultivating ownership and should be encouraged for accuracy. The internal evaluation may be endorsed or not by an external body but that is not the point of conducting an internal empowerment evaluation. The point is to build internal evaluation capacity.

Objectives of training and empowering a disadvantaged group to conduct evaluations are laudable in their own right. However empowering groups to do their own evaluation is not evaluation (Stufflebeam and Shinkfield, 2007, 154).

Here again we see a failure to understand what empowerment evaluation is or is not before launching into a critique. There is agreement that training "disadvantaged" groups to conduct their own evaluations is laudable. However, that is where the agreement ends. This is where we see an unnecessarily restrictive and intolerant tone. A group conducting its own evaluation is, by definition, evaluation. The authors may not find it a sufficiently credible form of evaluation, and it may not conform with their understanding of what evaluation is supposed to do, but it is a form of evaluation. They can only see external accountability as a goal of evaluation and in essence define it accordingly. However, there is a whole world out there in which evaluation is used to help develop programs and contribute to knowledge. Accountability is only one of many purposes of evaluation, as Chelimsky pointed out many years ago. Empowerment evaluation focuses on helping programs develop, with contributions to accountability and knowledge construction. Their blindness to the multiple purposes of evaluation blinds them to the potential of evaluation itself as a productive force in the world.

On page 330 we continue to read:

Since information empowers those who hold the information, the CIPP model emphasizes the importance of even-handedness in involving and informing program's stakeholders.

A cardinal rule in traditional forms of evaluation is not to promote your own approach as superior in an argument about the validity of various approaches. Following their own "rules" and logic, it would not be "objective" for them to evaluate their own approach in such a laudatory fashion.
In addition, it stretches credulity to suggest that the CIPP model in particular, which does not advocate program or participant control, is empowering. Cousins's widely recognized graph compares CIPP and other objectivist approaches with collaborative, participatory, and empowerment evaluation approaches, and the CIPP model is the furthest removed from anything vaguely resembling an empowering approach (with the lowest level of participant involvement or control).

Moreover, evaluators should strive to reach and involve those most in need and with little access to and influence over services. While evaluators should control the evaluation process to ensure its integrity, CIPP evaluators accord beneficiaries and other stakeholders more than a passive recipient's role. Evaluators are charged to keep stakeholders informed and provide them appropriate opportunities to contribute.

There is a paternalistic tone here: "evaluators should strive to...involve those," "evaluators should control the evaluation process," "evaluators accord...stakeholders more than a passive role," and "Evaluators are charged to keep stakeholders informed." In essence the evaluator "holds all the cards" and decides when and how participants can participate in the process. This is often perceived as condescending, demeaning, and authoritarian, even while it is stated as beneficent and generous. It ignores the right and responsibility of people to be independent, self-sufficient masters of their own destinies.

Involving all levels of stakeholders is considered ethically responsible because it equitably empowers the disadvantaged as well as the advantaged to help define the appropriate evaluation questions and criteria, provide evaluative input, critique draft reports, and receive, review, and use evaluation findings. Involving all stakeholder groups is also wise because sustained, consequential involvement positions stakeholders to contribute information and valuable insights and inclines them to study, accept, value and act on evaluation reports (ibid., 330).

The spirit behind this statement is simultaneously commendable and abhorrent to anyone committed to self-determination. These statements suggest more than a faint hint of recognition of the importance of involving local stakeholders in an evaluation, to help "empower" them and to improve the accuracy of the evaluation. It is important to applaud and recognize how these evaluators are moving in the right direction - but not far enough, and not through the eyes of the participants and staff members who operate the program every day. It is still presumptuous to assume that it is sufficient to "involve" or "inform." The time is long overdue to share control and enter an evaluation as a partnership, not as a master throwing a dog a bone.

One cannot help but ask at the end of these excerpts: why, in the CIPP model, is involving and empowering stakeholders wise and ethical, while the same in empowerment evaluation is categorized as "under the guise of evaluation"? The answer is a product of the authors' narrow definition of evaluation. It lies in who is allowed to control the shape and direction of the evaluation, and whose interests are being served in the process.


Coryn, C. L. S. (2007). Evaluation of researchers and their research: Toward making the implicit explicit. Unpublished doctoral dissertation, Interdisciplinary Doctoral Program in Evaluation, Western Michigan University, Kalamazoo.

Fetterman, D. M., & Wandersman, A. (2005). Empowerment evaluation principles in practice. New York: Guilford Publications.

Stufflebeam, D. L., & Shinkfield, A. J. (2007). Evaluation theory, models, and applications. Jossey-Bass.

Monday, October 22, 2007

Empowerment Evaluation in AEA Annual November Conference

Empowerment Evaluation in 21st American Evaluation Association annual conference

Mohammad Hasan Mohaqeq Moein



The 21st American Evaluation Association annual conference will be held Wednesday, November 7, through Saturday, November 10, 2007 in Baltimore, Maryland, USA. AEA's annual meeting is expected to bring together over 2,500 evaluation practitioners, academics, students, and representatives from around the world. The conference is organized into 38 Topical Strands that examine the field from the vantage point of a particular methodology, context, or issue of interest to the field, as well as the Presidential Strand highlighting this year's Presidential Theme of "Evaluation and Learning."

In the Letter of Invitation to Submit for Evaluation 2007 from AEA's President, Hallie Preskill, we can read the conference theme: For as long as contemporary forms of evaluation have been around, questions concerning effective practice, relevant theories, impacts and consequences, and use of findings, have been posed and debated by practitioners and scholars in the field. Embedded in all of these questions is an implicit assumption that evaluation, in some way, is about learning…learning about a program and its outcomes, learning from an evaluation process, learning how to do evaluation, or learning about evaluation’s effect on others. If learning is the act, process, or experience of gaining knowledge or skills, then it is hard to imagine evaluation as anything other than a means for learning.

Increasingly over the years, evaluators have come to acknowledge the learning aspect of evaluation by framing it as evaluation capacity building, evaluation use and influence, evaluation for organizational learning, empowerment evaluation, knowledge management, and within the broad construct of evaluation, social betterment. Inherent in each of these areas of study and practice is the notion that learning is a fundamental process that enhances our individual and collective capacity to create the results we truly desire now and in the future. Consequently, learning from and about evaluation has the potential to generate knowledge for decision-making and action and to move us in the direction of our visions.

The 2007 Presidential Strand theme, "Evaluation and Learning," will provide a focus for us to explore this topic in various ways. For example, the following questions, though not exhaustive, illustrate some of the issues that could be addressed within the context of the conference theme:

  • What does it mean to learn from and about evaluation processes and outcomes?
  • How does evaluation facilitate learning? Conversely, how and when does evaluation hinder learning?
  • How might learning from evaluation be enhanced and sustained?
  • In what ways can evaluation create learning communities of practice?
  • How can differences—individual learning styles, community cultures, ethnic and geographic dimensions—enhance and challenge our learning from and about evaluation?
  • What does learning from evaluation look like in different organizational and community contexts?
  • Who else is involved in the process of learning from evaluation in different contexts and how?
  • How, and in what ways, does learning lead to evaluation?
  • What kinds of evaluation designs and approaches maximize which kinds of learning from and about evaluation?
  • What is the relationship between workplace and adult learning theory and evaluation theory? What other theories help us understand the intersection of learning and evaluation?
  • The AEA Annual Conference is a vibrant and exciting learning community: how do we maximize our own learning in the evaluation profession?

Nearly 50 Professional Development Workshops at Evaluation 2007 are hands-on, interactive sessions that provide an opportunity to learn new skills or hone existing ones. Professional development workshops precede and follow the conference (November 5, 6 & 11). These workshops differ from sessions offered during the conference itself in at least three ways:

1. Each is longer (either 3, 6, or 12 hours in length) and thus provides a more in-depth exploration of a skill or area of knowledge,

2. Presenters are paid for their time and are expected to have significant experience both presenting and in the subject area, and,

3. Attendees pay separately for these workshops and are given the opportunity to evaluate the experience.

The empowerment evaluation approach has a prominent place at this annual conference, as it has at every annual meeting since 1993, when empowerment evaluation was itself the conference theme. The EE sessions I see are: Business Meeting Session 126, Think Tank Session 576, Expert Lecture Session 705, Multipaper Session 799, Multipaper Session 812, and one workshop. Some facts from these proceedings follow:

Session Title: Collaborative, Participatory and Empowerment Evaluation TIG Business Meeting

Business Meeting Session 126 to be held in Hanover Suite B on Wednesday, November 7, 4:30 PM to 6:00 PM

Sponsored by: the Collaborative, Participatory & Empowerment Evaluation TIG

TIG Leader(s): David Fetterman, Stanford University , & Liliana Rodriguez-Campos, University of South Florida,

Session Title: Arkansas Evaluation Center and Empowerment Evaluation: We Invite Your Participation as We Think About How to Build Evaluation Capacity and Facilitate Organizational Learning in Arkansas

Think Tank Session 576 to be held in Carroll Room on Friday, November 9, 11:15 AM to 12:00 PM

Sponsored by: the Collaborative, Participatory & Empowerment Evaluation TIG

Presenter: David Fetterman, Stanford University,

Discussant: Linda Delaney, University of Arkansas,

Abstract: A new Arkansas Evaluation Center will be housed at the University of Arkansas Pine Bluff. The Center emerged from empowerment evaluation training efforts in a tobacco prevention program (funded by the Minority Initiated Sub-Recipient Grant's Office). The aim of the Center is to help others help themselves through evaluation. The Center is designed to build local evaluation capacity in the State to help improve program development and accountability. The Center will consist of two parts: the first is an academic program, beginning with a certificate program and later offering a master's degree, which will combine face-to-face and distance learning. The second part will focus on professional development, including guest speakers, workshops, conferences, and publications. The Center will be grounded in an empowerment evaluation philosophical orientation and guided by pragmatic mixed-methods training. In addition, it will help evaluators learn how to use new technological and web-based tools.

Session Title: Identifying Critical Processes and Outcomes Across Evaluation Approaches: Empowerment, Practical Participatory, Transformative, and Utilization-focused

Expert Lecture Session 705 to be held in Liberty Ballroom Section B on Saturday, November 10, 9:35 AM to 10:20 AM

Sponsored by: the Theories of Evaluation TIG

Chair: Tanner LeBaron Wallace, University of California, Los Angeles,

Presenter: Marvin Alkin, University of California, Los Angeles,


J Bradley Cousins, University of Ottawa,

David Fetterman, Stanford University,

Donna Mertens, Gallaudet University,

Michael Quinn Patton, Utilization-Focused Evaluation,

Abstract: Inspired by the recent American Journal of Evaluation article by Robin Miller and Rebecca Campbell (2006), this session proposes a set of identifiable processes and outcomes for four particular evaluation approaches: Empowerment, Practical Participatory, Transformative, and Utilization-Focused. The four evaluation theorists responsible for these approaches will serve as discussants to critique our proposed set of evaluation principles. This session seeks to answer the following two questions for each approach: What process criteria would identify each specific evaluation approach in practice? And what observed outcomes are necessary in order to judge that the evaluation was "successful" with regard to the particular approach? Providing answers to these questions through both the presentation and the discussion among the theorists will provide comparative insights into common and distinct elements among the approaches. Our ultimate aim is to advance the discipline of evaluation by increasing conceptual clarity.

Session Title: Using Empowerment Evaluation to Facilitate Organizational Transformation: A Stanford University Medical Center Case Example

Multipaper Session 799 to be held in Hanover Suite B on Saturday, November 10, 12:10 PM to 1:40 PM

Sponsored by: the Collaborative, Participatory & Empowerment Evaluation TIG

Chair: David Fetterman, Stanford University,


  1. Abraham Wandersman, University of South Carolina, : Empowerment evaluation is guiding evaluation efforts throughout the Stanford University Medical Center. Empowerment evaluation is a collaborative approach designed to build evaluative capacity, engaging people in their own self-assessment and learning. The process typically consists of three steps: 1) mission; 2) taking stock; and 3) planning for the future. Strategies are monitored and information is fed back to make mid-course corrections and/or build on successes. The process depends on cycles of reflection and action in an attempt to reduce the gap between theories of action (espoused) and theories of use (observed behavior). The approach relies on critical friends to help facilitate the process. This is an important case example organizationally because the effort represents a rare opportunity to align and build on medical student education, resident training, and the education of fellows. The data generated are used to inform decision making, improve curricular practices, and enhance critical judgment.

  1. Jennifer Berry, Stanford University, & David Fetterman, Stanford University, : Using Empowerment Evaluation to Engage Stakeholders and Facilitate Curriculum Reform: When Stanford University School of Medicine undertook a major reform of its curriculum, the School adopted an empowerment evaluation approach to help monitor and facilitate implementation of the new curriculum. Empowerment evaluation is collaborative, engaging faculty, students, and administration in the cyclical process of reflection and action. Empowerment evaluation relies on the theory of process use. Empowerment evaluation theories and tools were used to facilitate organizational transformation at the course level. Our process included: using the School's mission as a guide; taking stock by holding focus groups and developing new survey instruments, including learning climate assessments; and planning for the future by facilitating discussions about evaluation findings with key stakeholders and having the faculty and teaching assistants revise specific courses. We also established a feedback loop to measure the success of reforms and revisions from one year to the next. Case examples highlight measurable evidence of curricular improvement.

  1. Kambria Hooper, Stanford University, : Organizational Learning Through Empowerment Evaluation: Improving Reflection Skills With a 360 Degree Evaluation: This study explores the impact of a 360 degree empowerment evaluation system in one of Stanford School of Medicine's required classes for Stanford medical students. This evaluation system has three levels of reflection and improvement. The first is the individual member's performance. The second level of reflection and improvement is small group performance. The final level is organizational learning; the course directors and staff reflect on data, looking for group variability or patterns, to create new goals for the course structure or curriculum. Organizational learning is dependent on each member's ability to give and receive constructive, formative feedback. In response to resistance and confusion around the new evaluation system, we developed several interventions to improve the ability of students, faculty and simulated patients to give and receive constructive feedback. This evaluation demonstrates how organizational learning is improved when the organization's members have opportunities to reflect on individual and team performance.

  1. Andrew Nevins, Stanford University, : Overestimation of Skills in Medical School: The Need to Train Students How to Self-assess : Stanford's School of Medicine used standardized patients (SPs) to help assess medical students' skills. This study focuses on students at the preclinical or course level. Clinical skills were assessed by checklists compiled from a consensus of faculty experts. Students also rated their perception of patient satisfaction on a 1 (low) to 9 (high) scale. SPs completed a matching questionnaire, rating their satisfaction with the student. Student and SP satisfaction ratings were paired and correlated, consistent with empowerment evaluation practices. Overall, students over-rated their performance by 0.75 points. The lowest quintile overestimated performance by 1.57 points, while the highest quintile underestimated performance by 0.003 points (p < 0.01).

  1. David Fetterman, Stanford University, & Jennifer Berry, Stanford University, : Empowerment Evaluation: The Power of Dialogue : Empowerment evaluation has three steps: mission, taking stock, and planning for the future. However, the middle step is not always explored in depth. One of the central features of the taking-stock step is dialogue. Program participants rate how well they are doing at this step in the process, using a 1 (low) to 10 (high) rating system. They are also required to provide evidence to support their ratings. However, it is the generative dialogue that is most characteristic of this part of the process and critical to authentic learning, at both the community-of-learners and organizational-learning levels. Each participant explains why they gave their rating, using documentation to build a culture of evidence. Three examples of dialogue (and norming) are provided: 1) engaged scholarly concentration directors; 2) faculty, administrators, and students grappling with curricular problems; and 3) committed clerkship directors guiding student learning in hospitals.

  1. Heather A Davidson, Stanford University, : Using Principles of Empowerment Evaluation to Build Capacity for Institutional Learning: A Pilot Project at Stanford Hospital : Residency education is rapidly changing from an apprentice-based to a competency-based model where performance outcomes must guide individual feedback and continuous program improvement to meet new accreditation standards. This change represents a cultural shift for teaching hospitals and a management shift that must support systems of assessment. Many faculty members do not have the tools needed to design and implement these goals. Since institutional accreditation requires that all residency programs undergo a peer-led internal review process, Stanford Hospital has created a new protocol to build evaluation capacity. Utilizing principles of empowerment evaluation, the pilot project formalizes feedback loops needed at both program and institutional levels. By combining performance benchmark and portfolio techniques with a mock accreditation site visit, the new protocol provides a more comprehensive assessment of overall program needs; evidence of program quality across the institution; and supports a learning culture where faculty share educational initiatives.

  1. Alice Edler, Stanford University, : Empowerment Evaluation: A Catalyst for Culture Change in Post Graduate Medical Education : Pediatric anesthesia requires special skills for interacting with small patients that are not required in general anesthesia training. Empowerment evaluation was used to assess these behaviors in a Stanford pediatric anesthesia fellowship. Trainee, faculty, and aggregate data revealed the need for more clinical decision-making opportunities in the fellowship; clinical judgment ranked the lowest. The role of administrative chief fellow emerged from the self-assessment. It allowed for more opportunities for decision making in day-to-day schedules, curriculum, and disciplinary decisions, and the position was rotated among all the fellows. Individual and group improvements were evidenced. Fellows assumed responsibility for creating new rotations and revising their schedules based on perceived curriculum needs. Faculty evaluations increased significantly on the clinical judgment item (see table 1). Information from the EE has allowed fellows to model self-determination, form a more cohesive group, and provide opportunities for high-stakes clinical decision-making.
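The paired-rating arithmetic described in the Nevins abstract above (students' self-ratings paired with standardized patients' ratings, with over- or underestimation broken out by performance quintile) can be sketched in a few lines. This is an illustrative sketch only: the data, the quintile-splitting rule, and the function name are invented for demonstration and are not the study's actual methods or results.

```python
def overestimation_by_quintile(student, sp):
    """Pair each student self-rating with the standardized patient (SP)
    rating of the same encounter, sort by the SP (observed) rating, split
    into five equal groups, and report the mean (self - observed)
    difference per quintile. Positive values indicate overestimation."""
    pairs = sorted(zip(sp, student))  # sort by observed performance
    n = len(pairs)
    result = []
    for q in range(5):
        chunk = pairs[q * n // 5:(q + 1) * n // 5]
        mean_diff = sum(s - o for o, s in chunk) / len(chunk)
        result.append(round(mean_diff, 2))
    return result

# Hypothetical ratings on the 1-9 scale (not the study's data)
student = [7, 8, 6, 9, 7, 8, 5, 9, 6, 8]   # self-ratings
sp      = [5, 6, 6, 9, 5, 7, 4, 9, 5, 8]   # SP ratings
print(overestimation_by_quintile(student, sp))
```

With these made-up numbers, the lower quintiles show the largest positive gaps, mirroring the direction of the finding the abstract reports (the weakest students overestimate most).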

Session Title: Empowerment Evaluation Communities of Learners: From Rural Spain to the Arkansas Delta

Multipaper Session 812 to be held in Carroll Room on Saturday, November 10, 1:50 PM to 3:20 PM

Sponsored by the Collaborative, Participatory & Empowerment Evaluation TIG

Chair: David Fetterman, Stanford University,


  1. Stewart I Donaldson, Claremont Graduate University, : Empowerment evaluation is the use of evaluation concepts, techniques, and findings to foster improvement and self-determination. It employs both qualitative and quantitative methodologies. It also knows no national boundaries: it is being applied in countries ranging from Brazil to Japan, as well as Mexico, the United Kingdom, Finland, New Zealand, Spain, and the United States. These panel members highlight how empowerment evaluation is being used in rural Spain and the Arkansas Delta. In both cases, they depend on communities of learners to facilitate the process. The third member of the panel highlights a web-based tool to support empowerment evaluation that crosses all geographic boundaries.

  1. Jose Maria Diaz Puente, Polytechnic University, Madrid, : Learning From Empowerment Evaluation in Rural Spain: Implications for the European Union : At the present time, thousands of evaluation studies are carried out each year in the European Union to analyze the efficacy of European policies and to seek the best ways to improve the programs being implemented. Many of these studies concern programs in the rural areas that occupy up to 80% of the territory of the EU and include many of its most disadvantaged regions. The results of applying empowerment evaluation in the rural areas of Spain show that this evaluation approach is an appropriate way to foster learning in the rural context. The learning experience concerned capacity building among stakeholders and the evaluation team, the evaluator's role and advocacy, the impact of the empowerment evaluation approach, and its potential limitations, difficulties, and applicability to rural development in the EU.

  1. Linda Delaney, Fetterman and Assoc, & David Fetterman, Stanford University, : Empowerment Evaluation: Transforming Data Into Dollars and the Politics of Community Support in Arkansas Tobacco Prevention Projects : Empowerment evaluation is being used to facilitate tobacco prevention work in the State of Arkansas. The University of Arkansas's Department of Education is guiding this effort, under the Minority Initiated Sub-Recipient Grant's Office. Teams of community agencies are working together with individual evaluators throughout the state to collect tobacco prevention data and turn it into meaningful results in their communities. They are also using the data collectively to demonstrate how a collective can be effective. The grantees and evaluators are collecting data about the number of people who quit smoking and translating that into dollars saved in terms of excess medical expenses. This has caught the attention of the Black Caucus and the legislature. Lessons learned about transforming data and the politics of community support are shared.

  1. Abraham Wandersman, University of South Carolina, : Empowerment Evaluation and the Web: iGTO (interactive Getting to Outcomes) : iGTO is an Internet-based approach called Interactive Getting to Outcomes. It is a capacity-building system, funded by NIAAA, that is designed to help practitioners reach results using science and best practices. Getting to Outcomes (GTO) is a ten-step approach to results-based accountability. The ten steps are: Needs/Resources, Goals, Best Practices, Fit, Capacity, Planning, Implementation, Outcomes, CQI, and Sustainability. iGTO plays the quality improvement/quality assurance role in a system that has tools, training, technical assistance, and quality improvement/quality assurance. With iGTO, organizations use empowerment evaluation approaches to assess process and outcomes and promote continuous quality improvement. Wandersman et al. highlight the use of iGTO in two large state grants to demonstrate the utility of this new tool.

Session Title: Building and Assessing Capacity for Evaluation: Creating Communities of Learners Among Service Providers

Panel Session 867 to be held in Hanover Suite B on Saturday, November 10, 3:30 PM to 5:00 PM

Sponsored by: the Collaborative, Participatory & Empowerment Evaluation TIG

Chair: Tina Taylor-Ritzler, University of Illinois, Chicago,


  1. David Fetterman, Stanford University, : Community-based organizations are currently experiencing pressure to learn about evaluation and to conduct their own evaluations. Some are able to meet these demands through partnerships with academic institutions designed to build capacity for evaluation using empowerment and participatory approaches. Although there is literature available on evaluation capacity building, much is still needed in terms of understanding its conceptualization, the process of building it, and how to measure it. In this session, several researchers will present their work with a variety of community-based organizations in creating capacity for evaluation. First, we will present an ecological, contextual, and interactive framework of evaluation capacity building that integrates models from the evaluation literature. Second, we will describe methods and strategies for measuring capacity building, and we will discuss how to create learning communities with agency staff. Third, we will provide exemplars and discuss challenges encountered when doing this work and implications for the field of evaluation. Fourth, we will discuss evaluation capacity building (ECB) strategies and how they have been evaluated, based on a review of research on ECB. Finally, we will hear commentary from a prominent researcher in the field: David Fetterman.

  1. Yolanda Suarez-Balcazar, University of Illinois, Chicago, : Building Capacity for Evaluation Among Service Providers: Conceptual Framework and Exemplar : Based on the work we have been doing at the Center for Capacity Building for Minorities with Disabilities Research, we propose a contextual framework of capacity for evaluation. A contextual and interactive model suggests a dynamic interplay between person factors and organizational factors. The person or group factors are exemplified by agency staff and/or program implementers, while organizational factors refer to the organizational policies, culture, and support systems that create an environment facilitating capacity for evaluation. The framework assumes interplay between personal and organizational factors. As such, a CBO staff member may be willing and ready to learn how to evaluate a program he/she implements but lack organizational support to do so, in the form of allocated time and resources. Capacity for evaluation can be created and facilitated at the individual level. Here, we are referring to the staff member or members who implement agency programs, are in direct contact with participants, and are experiencing tremendous pressure to document what they do and to produce tangible outcomes. We will discuss individual factors such as personal readiness, level of competence and experience, and individual leadership. The environment, policies, procedures, and culture of the organization may be more or less facilitative of building capacity for evaluation in individual staff and in the organization as a whole.
The presenters will also discuss several factors at the organization level that can facilitate the process of building capacity: including organizational readiness, organizational resources and support allocated to evaluation, organizational leadership, organizational culture, organizational capacity to mainstream evaluation practices, organizational capacity to utilize findings and develop practices that sustain evaluation capacity, and organizational capacity to seek funding for their programs. We will also discuss implications for the art and the science of evaluation.

  1. Tina Taylor-Ritzler, University of Illinois, Chicago, : Measuring Evaluation Capacity: Methodologies and Instruments : Although there is a large literature on evaluation capacity building, it lacks specificity on issues of measurement and assessment of evaluation capacity. Most studies have looked only at the evaluation products agencies generate (reports to funders) and at satisfaction with training. We will present our multiple-method system for assessing and measuring evaluation capacity. In this session, we will present the work being conducted nationally by the Center for Capacity Building on Minorities with Disabilities Research. We will describe in detail the instruments and procedures we use and the challenges we encounter when measuring evaluation capacity building with organizations serving ethnic minorities with disabilities. We will share data drawn from multiple case study examples. Finally, our discussants will share their perspective on the contribution of our work to scholarship on evaluation capacity building.

  1. Rita O'Sullivan, University of North Carolina, Chapel Hill, : Using Collaborative Evaluation as a Strategy for Evaluation Capacity Building: First 5 Los Angeles' Quality Care Initiative : Collaborative evaluation (O'Sullivan, 2004) is an approach to evaluation that results in enhanced evaluation capacity among key stakeholders. Evaluation Assessment and Policy Connections (EvAP) at the University of North Carolina at Chapel Hill worked collaboratively with First 5 Los Angeles staff and its 53 childcare grantees to design a 30-month evaluation that would provide process, outcome, and policy information about the initiative. The evaluation activities also addressed enhancing the evaluation capacity of First 5 grantees, staff, partners, and Commissioners. Collaborative evaluation engages key program stakeholders actively in the evaluation process. Unlike distanced evaluation, where evaluators have little or no contact with program staff, collaborative evaluation deliberately seeks involvement from all program stakeholders during all stages of the evaluation. A collaborative stance can strengthen evaluation results and increase utilization of evaluation findings. Additionally, programs participating in collaborative evaluations develop an enhanced capacity to consume and conduct evaluations, while evaluators gain a better understanding of the program. The collaborative evaluation approach assumes that evaluation expertise within programs is developmental; thus, the degree of collaboration must vary with the nature and readiness of the program. Evaluations completed with this collaborative approach have yielded improved evaluation capacity, as measured by data quality, report writing, and evaluation use, with programs in the areas of education, social services, and health; the presenter has also found that collaborative evaluation may increase the resources available to the evaluation. 
This presentation will report how the evaluation contributed to the capacity building of the 53 grantees the majority of which were community based organizations.

  1. Jennifer Duffy, University of South Carolina, : A Review of Research on Evaluation Capacity Building Strategies : The growing literature on evaluation capacity building is one resource for learning more about what evaluation capacity building looks like in the field and what evidence there is for the success of these strategies. We will present findings from a review of empirical research on evaluation capacity building. The strategies for building evaluation capacity that are identified in this research will be described, and the methods used to evaluate these strategies will be discussed. We will highlight the evidence for successful strategies and limitations of the existing research. Questions for future research will be identified, with a focus on identifying successful strategies for building evaluation capacity.

Workshop number 42: Empowerment Evaluation

Empowerment Evaluation builds program capacity and fosters program improvement. It teaches people to help themselves by learning how to evaluate their own programs. The basic steps of empowerment evaluation include: 1) establishing a mission or unifying purpose for a group or program; 2) taking stock - creating a baseline to measure future growth and improvement; and 3) planning for the future - establishing goals and strategies to achieve goals, as well as credible evidence to monitor change. The role of the evaluator is that of coach or facilitator in an empowerment evaluation, since the group is in charge of the evaluation itself.
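The "taking stock" step described above can be illustrated with a minimal sketch: each participant rates key activities from 1 (low) to 10 (high), and the group's averages form the baseline against which future growth is measured, with the lowest-rated activities becoming natural candidates for the planning-for-the-future step. The activities, ratings, and function name below are hypothetical, invented purely for illustration.

```python
def taking_stock(ratings):
    """ratings: {activity: [individual 1-10 ratings]} -> list of
    (activity, baseline average) pairs, lowest-rated first, so the
    group can see where to focus its planning-for-the-future step."""
    baseline = {a: round(sum(r) / len(r), 1) for a, r in ratings.items()}
    return sorted(baseline.items(), key=lambda kv: kv[1])

# Hypothetical taking-stock ratings from four participants
ratings = {
    "communication": [6, 7, 5, 8],
    "fundraising":   [3, 4, 5, 4],
    "teaching":      [8, 9, 7, 8],
}
for activity, avg in taking_stock(ratings):
    print(activity, avg)
```

In a real taking-stock session the numbers alone are only a starting point: as the abstracts above stress, the dialogue in which each participant justifies a rating with evidence is the heart of the exercise.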

Employing lecture, activities, demonstrations, and case examples ranging from townships in South Africa to a $15 million Hewlett-Packard Digital Village project, the workshop will introduce you to the steps of empowerment evaluation and tools to facilitate the approach.

You will learn:

o How to plan and conduct an empowerment evaluation,

o Ways to employ new technologies as part of empowerment evaluation, including use of digital photography, QuickTime video, online surveys, and web-based telephone/videoconferencing,

o The dynamics of process use, theories of action, and theories of use.

David Fetterman hails from Stanford University and is the editor of (and a contributor to) the recently published Empowerment Evaluation Principles in Practice (Guilford). He chairs the Collaborative, Participatory and Empowerment Evaluation AEA Topical Interest Group and is a highly experienced and sought-after facilitator.

Session 42: Empowerment Evaluation

Scheduled: Wednesday, November 7, 12:00 PM to 3:00 PM

Level: Beginner, no prerequisites