Sunday, August 08, 2010

Summative Evaluation in Last Tenure Day

Mohamad Hasan Mohaqeq Moein

August 09, 2010

In his first blog post as OMB's Acting Director, posted on August 02, 2010 at 10:53 AM EDT under the title "Discovering What Works," Jeffrey Zients looked back at the last day of the previous Director's tenure at OMB. In this latest post, he refers to material that you can read in full at:

http://www.whitehouse.gov/blog/2010/08/02/discovering-what-works



For better understanding, I have split this final document into its important parts (the 9 sections below):


1. OMB's latest memorandum, "Evaluating Programs for Efficacy and Cost-Efficiency," was signed by OMB Director Peter Orszag on the last day of his tenure.


http://www.whitehouse.gov/sites/default/files/omb/memoranda/2010/m10-32.pdf



2. OMB issued NEW guidance to Federal agencies about conducting high-priority evaluations.

3. Jeffrey Zients serves as the Chief Performance Officer of the US Government, and he knows that determining which programs work and which do not is critical to discovering whether government operations are doing what they are supposed to do in a cost-efficient manner.


4. Yet too many important programs have never been formally evaluated.

5. The results of evaluated programs have not been fully taken into account in the decision-making process, at the level of either budgetary decisions or management practices.


6. For an organization as large as the Federal Government, with as many priorities and obligations as it has, the fact that we have rarely evaluated multiple approaches to the same problem makes it difficult to be confident that taxpayers’ dollars are being spent effectively and efficiently.

7. Running rigorous evaluations takes money, but investments in rigorous evaluations are a drop in the bucket relative to the dollars at risk of being poorly spent when we fail to learn what works and what doesn’t.

8. OMB is allocating a small amount of funding for agencies that voluntarily demonstrate how their FY 2012 funding priorities are subjected to rigorous evaluation, with an emphasis on evaluations aimed at determining the causal effects of programs or of particular strategies, interventions, and activities within programs.


9. Finding out if a program works is common sense, and the basis upon which we can decide which programs should continue and which need to be fixed or terminated. This guidance will help us do just that.



When I think deeply about point 9 above and combine points 3, 4, 5, and 9, I conclude that American organizations, American management, and American evaluation are at risk! From this point I will jump to the history of evaluation, because I want to dig deeper into this problematic issue.

I think managers and experts in OMB like Jeffrey Zients know how problematic these issues are, but what are their solutions? And can those solutions actually produce better American management, organizations, and evaluation?

OMB's newest response addresses the issues presented in numbers 1 and 2 above; for better communication, I have labeled it the USA National Evaluation Policy. This policy for Fiscal Year 2012, issued on July 29, 2010, carries the title "Evaluating Programs for Efficacy and Cost-Efficiency." Its twin, OMB's earlier solution for Fiscal Year 2011, was issued on October 7, 2009 under the title "Increased Emphasis on Program Evaluations," and you can read it in full at:

http://www.whitehouse.gov/omb/assets/memoranda_2010/m10-01.pdf

The main OMB solutions to these issues, fully supported and endorsed in the two USA National Evaluation Policies, appear in Jeffrey Zients' statements in numbers 6, 7, 8, and 9 above.

What both USA National (Federal) Evaluation Policies fully ignore is the organized social capital of American expert civil associations such as the American Evaluation Association. Let me say it plainly: why does OMB ignore the positions of professional evaluation associations?

The main difference between the two national documents, and a strongly observable one, is the more centralized and closed manner of the later National Evaluation Policy compared with the earlier document. For example, in the USA National Evaluation Policy for Fiscal Year 2011 we have a new inter-agency working group that, together with the Domestic Policy Council, the National Economic Council, and the Council of Economic Advisers, was to promote stronger evaluation across the Federal government. But in the USA National Evaluation Policy for Fiscal Year 2012 we instead have OMB's Resource Management Offices (RMOs) coordinating and improving the design, implementation, and utilization of evaluations, with many important determining functions.

The USA National Evaluation Policy for Fiscal Year 2011 has more open-government functions than the later document! In it we read much more about the public availability of information on Federal evaluations, including on-line information about existing evaluations: OMB will work with agencies to make information readily available online about all Federal evaluations focused on program impacts that are planned or already underway. OMB will work with agencies to expand the information about program evaluations that they make public. The goal is to make researchers, policymakers, and the general public aware of studies planned or underway that (1) examine whether a program is achieving its intended outcomes; or (2) study alternative approaches for achieving outcomes to determine which strategies are most effective.

OMB will issue a budget data request regarding the public availability of program evaluation information. As necessary, we will work with agencies to determine how best to make more information available online. Public awareness will promote two objectives.

First, it will allow experts inside and outside the government to engage early in the development of program evaluations. In particular, OMB welcomes input on the best strategies for achieving wide consultation in the development of evaluation designs.
Second, public awareness will promote transparency, since agency program evaluations will be made public regardless of the results. This function is analogous to that of the HHS clinical trial registry and results data bank (ClinicalTrials.gov).

But in the USA National Evaluation Policy for Fiscal Year 2012 we have only one paragraph on the open-government subject, quoted below:

On-line information about existing evaluations: OMB is working with agencies to make information readily available online about all Federal evaluations focused on program impacts that are planned, already underway, or recently completed. A Budget Data Request (BDR) is being issued concurrently to this memo to assist in the completion of this request.

Another difference between the two documents concerns the Evaluation Plan, an emphasis observed only in the USA National Evaluation Policy for Fiscal Year 2012. That concern appears in the passages below:

Agencies should provide OMB with a plan demonstrating how evaluation resources, funded through base programs and new proposals, will be allocated to address budget priorities. The content and format for this information should be developed in consultation with the RMO.

This applies to each evaluation study for which they are seeking additional funding and to every proposed evaluation study costing $1 million or more (RMOs have the discretion to adopt lower thresholds) that is part of their base funding request.


But why and how was this organized social capital ignored?


On February 3, 2009, Debra Rog (President), William Trochim (Immediate Past President), and Leslie Cooksy (President Elect), on behalf of the American Evaluation Association, wrote a letter to Peter Orszag, Director of the Office of Management and Budget, and probably delivered it to him in person at a meeting.

In this letter, which has a 13-page attachment prepared by the AEA Evaluation Policy Task Force in February 2009 and published under the title "An Evaluation Roadmap for a More Effective Government" at:

http://www.eval.org/aea09.eptf.eval.roadmapF.pdf

this organized social-professional capital, the evaluation leaders, wrote:

“We are writing to propose for your consideration a major initiative to improve oversight and accountability of Federal programs by systematically embracing program evaluation as an essential function of government. In the attachment we describe how evaluation can be used to improve the effectiveness and efficiency of Federal programs, assess which programs are working and which are not, and provide critical information needed for making difficult decisions about them. We provide a roadmap for improving government through evaluation, outlining steps to strengthen the practice of evaluation throughout the life cycle of programs.
We understand how complex and demanding is the work before you. We hope our suggestions will be useful to you and we stand ready to assist you on matters of program evaluation.”

I labeled the two OMB documents a National Evaluation Policy, but the Evaluation Policy Task Force has its own definition of evaluation policy. According to this task force, “The term “evaluation policy” encompasses a wide range of potential topics that include (but are not limited to): when systematic evaluation gets employed, and on what programs, policies and practices; how evaluators are identified and selected; the relationship of evaluators to what is being evaluated; the timing, planning, budgeting and funding, contracting, implementation, methods and approaches, reporting, use and dissemination of evaluations; and, the relationship of evaluation policies to existing or prospective professional standards.” We can compare the contents of these documents against this definition and see where the matters stand.

I labeled the “An Evaluation Roadmap for a More Effective Government” document a "potential USA National Evaluation Policy" from the AEA perspective. In this potential National Evaluation Policy we can read:

In the past eight years, the Office of Management and Budget attempted to institute consistent evaluation requirements across all Federal programs through its Program Assessment Rating Tool (PART) program. However, that effort, while a step in the right direction,
1. was not adequately resourced,
2. was inconsistently applied,
3. was too narrow relative to the options that are suitable across the life cycle and circumstances of different programs, and
4. did not provide the high-quality training and support for agencies to accomplish evaluation effectively.
While significant advances in the use of evaluation have occurred in the Federal Government since the 1960s, the commitment needed to consistently ensure that decisions are informed by evaluation has not yet been made.

The Obama administration has a unique opportunity to advance its broad policy agenda by integrating program evaluation as a central component of Federal program management.

The time is especially right for such a bold move. The breadth and seriousness of the challenges we face provide a political climate that could support a commitment to a major advance in Federal program management. The lessons that have been learned in those agencies that have experience in applying evaluation constitute a solid knowledge base upon which we can build.
And, the field of evaluation has evolved to a point where it is more capable than ever before to support a significant expansion in the scope of Federal efforts.
The new administration would benefit significantly by using program evaluation to

• address questions about current and emerging problems
• reduce waste and enhance efficiency
• increase accountability and transparency
• monitor program performance
• improve programs and policies in a systematic manner
• support major decisions about program reform, expansion, or termination
• assess whether existing programs are still needed or effective
• identify program implementation and outcome failures
• inform the development of new programs where needed
• share information about effective practices across government programs and agencies

The key is to make program evaluation integral to managing government programs at all stages, from planning and initial development through start up, ongoing implementation, appropriations, and reauthorization. In short, what is needed is a transformation of the Federal management culture to one that incorporates evaluation as an essential management function.

Nineteen months after the AEA dedicated its proposal to the OMB director and structure, everyone can see the result of the lessons learned in Jeffrey Zients' nine statements above, and we can predict the future of American evaluation, organizations, and management in the twin USA National Evaluation Policies for Fiscal Years 2011 and 2012.
I think the problematic issue at the heart of this note belongs not only to past and current experience; we will also encounter it in the years ahead! Why? The answer is simple and virulent: we don't learn from failures.
The "potential USA National Evaluation Policy" from the AEA perspective refers to the history of these failures. In it we can read:

Current Status of GPRA and PART

The most significant evaluation-related initiatives of the last 15 years have been the enactment of the Government Performance and Results Act of 1993 (GPRA) and, more recently during the George W. Bush administration, OMB’s Program Assessment Rating Tool (PART).
Generally, GPRA encourages a strategic, agency wide, mission view and also focuses on whether government programs achieve results in terms of the goals and objectives for which they are established. Evaluation was defined in GPRA as addressing the "manner and extent to which" agencies achieve their goals, thus addressing both implementation and results. In practice, it has been implemented in a way that emphasizes the use of performance indicators and measurement to see whether a goal has been reached or not, with less attention being paid to evaluation studies that might shed light on the role the program played in reaching the goal, on why programs do or do not meet their goals and objectives, and on how programs might be improved. As a result, there is less information through this process that can guide programmatic or policy action.
PART focuses on programs’ effectiveness and efficiency, especially on their impact. It draws on GPRA in terms of its analysis of whether programs meet their performance goals. However, it recognizes that some programs can meet their goals and still fail to have meaningful impact because of shortcomings in their designs or their goals. It attempts to assess whether programs are free from design flaws that prevent them from being effective. It introduces evaluation, and even calls for a body of independent evaluations for programs. For the most part, however, it emphasizes the use of evaluation as a way to determine program impact. While possibly not intended, it has had the effect of over-advocating for one particular type of impact evaluation, namely, randomized controlled trials, as a “gold standard” for the measurement of program success. This has tended to limit its ability to recognize success in programs for which randomized controlled trials are not a suitable method for assessing effectiveness or improving performance.
Some distrust PART results because they believe the goals and objectives upon which its analyses are based may be driven by political ideologies. In particular, Congress has distanced itself from PART. Some have noted that PART excludes policies like tax expenditures and focuses on discrete programs when multiple activities that cut across agency boundaries may contribute to achievement of goals.
OMB has moved to address some of these perceived shortcomings by initiating a pilot test of alternative impact assessment methodologies. In addition, in November 2007 President Bush signed an Executive Order on Improving Government Program Performance. It creates the position of Performance Improvement Officer in each Federal agency and establishes a government wide Performance Improvement Council under the direction of OMB to more systematically promote performance assessments of programs.
GPRA and PART have made the use of performance measurement and management as a staple of government program management. But they fall considerably short of what is needed to address the problems our country faces.

Going Beyond GPRA and PART

What we are proposing is a vision for transforming the view of what agency heads and the Congress can do to benefit from program evaluation and systematic analysis both to improve the design, implementation, and effectiveness of programs and to assess what works and what doesn’t, and why. This vision is a comprehensive one that recognizes that evaluation is more than simply “looking in the rearview mirror” and that it needs to be utilized earlier in the life of a program, as an integral part of managing government programs at all stages, from initial development through start up, ongoing implementation, appropriations, and reauthorization.

Conclusion

OK! I must draw a conclusion from this long note. When I think deeply about these matters, I admire Professor Michael Scriven all the more for his DEBATE on rethinking all of evaluation! He proposed a workshop for the summer institute at Claremont Graduate University, where he will suggest it is time to reconceptualize evaluation from the ground up. I wrote an article titled "THE FIFTH GENERATION EVALUATION: CAPACITY BUILDING (Revising the Roles of Evaluation in Organizational Learning and Knowledge Management)"; you can read my viewpoint on this conclusion in more detail at:

http://empowermentevaluation.blogspot.com/2010/01/fifth-generation-evaluation-capacity.html


On 8/2/10, Jane Davidson wrote on the Genuine Evaluation blog at http://genuineevaluation.com:


I think one of the key reasons why we see little success in building evaluative and reflective thinking in organizations is because too often it is seen as a ‘people-change’ challenge (targeting individuals, and blaming them when it doesn’t happen) rather than a whole-organization culture change challenge (i.e. a leadership responsibility to create and build energy behind a genuine transformation).
From the literature on organizational culture change, here are ten tips drawn from a keynote I did earlier this year on how to build a learning organization:

1. Get top management commitment to learning from failure – and make it highly visible (walk, not just talk!)
2. Identify and work with ‘evaluation evangelists’ – influential people who will help lead the change
3. Communicate the ‘evaluation imperative’ – explain clearly why this is a powerful and exciting new way of doing things
4. Train/coach people in evaluation skills, knowledge & know-how; develop tools together
5. Provide diverse exemplars of great evaluative inquiry, reflection & use – people need to know what good evaluation and learning looks like in practice
6. Model the importance of external criticism (especially for senior management)
7. Develop and empower (and/or, hire in) a critical mass of people with the “evaluative attitude”
8. Listen to skeptics & cynics; allow powerful change blockers the chance to move aside
9. Recognize and reward changed behaviors & mindset
10. Highlight & celebrate good examples of ‘learning from disappointing results’ – and using those learnings to make timely improvements

Tuesday, January 05, 2010

FIFTH GENERATION EVALUATION

THE FIFTH GENERATION EVALUATION: CAPACITY BUILDING
Revising the Roles of Evaluation in Organizational Learning and Knowledge Management

Mohammed Hasan Mohaqeq Moein

Faculty member of Imam Khomeini Higher Education Center

mmoein@gmail.com

Abstract

How does practice improve, and how can evaluation contribute to this improvement? Evaluation can play a significant part in organizational development and knowledge management. In both management and science we grapple with the evaluation process. The art and science of evaluation has the authority to take on many roles in the growth of knowledge management in organizations. Evaluation approaches and techniques can function in the production, empowerment, assessment, and documentation of individual and organizational experience in society. In this article I develop some questions and answers in the field, mostly aimed at enlarging knowledge management through systematic evaluation influences in programs and organizations. The dynamism of individuals and organizations is a phenomenon that depends on their social engagements. Dynamism is a learning outcome. Individuals and organizations can learn from evaluation, and as agreement and certainty increase they can change their behaviors. Evaluation capacity building is an outcome of the interrelated and interdependent elements of individual Evaluation Capacity and Sustainable Evaluation Practice in the context, interaction, and surroundings of a culture, for example the organization's infrastructure: culture, leadership, communication, systems, and structures. In the historical typology of evaluation, great evaluation researchers and developers have discussed four generations of evaluation: Measurement, Description, Judgment, and Interpretive. I think the evaluation field and profession are near another generation of evaluation: THE FIFTH GENERATION EVALUATION: CAPACITY BUILDING.

Keywords: Evaluation Capacity Building, Fifth Generation Evaluation, Learning, Individual Evaluation Capacity, Sustainable Evaluation Practice, Organization's Infrastructure.



This article has been ACCEPTED by the scientific committee of the 2nd Iranian Knowledge Management Conference for poster presentation. The 2nd Iranian Knowledge Management Conference gathers top researchers and practitioners from all over the world to exchange their findings and achievements on knowledge management. The Conference will feature invited keynote presentations, panels on topical issues, refereed paper presentations, and workshops on new areas of knowledge management.

This event will take place on Jan. 30-31, 2010 at the Razi Intl. Conference Center, Tehran, Iran. For more information, go to:

http://www.kmiran.com/km2010/enindex.php