Evaluation and Experiment - 1st Edition - ISBN: 9780120888504, 9781483260846

Evaluation and Experiment

1st Edition

Some Critical Issues in Assessing Social Programs

Editors: Carl A. Bennett, Arthur A. Lumsdaine
eBook ISBN: 9781483260846
Imprint: Academic Press
Published Date: 28th January 1975
Page Count: 572


Evaluation and Experiment: Some Critical Issues in Assessing Social Programs is a collection of papers presented at a 1973 symposium held at the Battelle Seattle Research Center. The book contains eight chapters that consider selected problems in evaluating the outcomes of socially important programs, such as those dealing with education, health, and economic policy.

The first chapter provides an overview of the issues surrounding social program evaluation. The next chapters deal with the successes and failures of social innovations; the use of quasi-experimental evaluation in compensatory education to estimate the true effects of such programs; and the usefulness and validity of econometric and related nonexperimental approaches for assessing the effects of social programs. These topics are followed by surveys of a number of additional program-evaluation studies, particularly in the field of family planning or fertility control, mostly carried out as experiments or quasi-experiments in Asian and Latin American countries. Other chapters describe decision processes that involve explicit assessment of the worth or merit of outcomes through multi-attribute utility analysis, and outline the ways in which evaluative data provide feedback to program and institutional operations and decisions. The final chapter discusses resolutions for some of the disagreements expressed by other contributors concerning the role of field experiments, constraints on their utilization, and other factors that enter into a comprehensive conception of program evaluation.

Table of Contents


1. Social Program Evaluation: Definitions and Issues

I. Introduction

II. Purposes and Functions of Evaluation

A. Definitions and Distinctions

B. Evaluation as Part of the Feedback Process

C. Relation of Output Measures to the Feedback Process

D. Some Questions About the Purpose and Utilization of Evaluation

III. Methodology in Impact Assessment

A. Some General Considerations

B. Data Needs and Analysis

IV. Assessment and Value Judgments

A. Values and Evaluation

B. Criterion Formulation

V. Some Organizational and Ethical Issues

VI. Critical Issues

2. Assessing Social Innovations: An Empirical Base for Policy

I. The General Idea

II. Introduction

A. The Plan of the Paper

B. Evaluating Social Programs

C. Initial Ignorance

D. Methods of Investigation

E. Large and Small Effects

III. Three Instructive Examples

A. The Salk Vaccine Trials

B. The Gamma Globulin Study

C. Emergency School Assistance Program

D. Afterword

IV. Ratings of Innovations

A. Sources of the Studies and Their Biases

B. Medical and Social Innovations

C. Our Ratings of Social Innovations

D. Social Innovations

E. Summary for Social Innovations

F. Evaluations of Socio-Medical Innovations

G. Summary for Socio-Medical Innovations

H. Evaluations of Medical, Mainly Surgical, Innovations

I. Summary of Medical Ratings

J. Summary of Ratings

V. Findings from Nonrandomized Studies

A. Nonrandomized Studies

B. Nonrandomized Studies in Medicine

C. Summary for Section V

VI. Issues Related to Randomization

A. The Idea of a Sample as a Microcosm

B. Searching for Small Program Effects

C. Studying the Interaction Effects in Social Programs

D. Unmeasurable Effects

E. Validity of Inference from One-Site Studies

F. Does Randomization Imply Coercion?

G. The Ethics of Controlled Field Studies

H. Need to Develop Methodology

I. Need for an Ongoing Capability for Doing Randomized Controlled Field Studies

VII. Issues of Feasibility in Installing Program Evaluations

A. Specifying the Treatment

B. Incentives for Participation

C. A Multiplicity of Program Goals

VIII. Costs, Timeliness, and Randomized Field Studies

A. Costs and Benefits of Doing Randomized Controlled Field Studies

B. Value of a Field Trial

C. The Question of "Gradualism"

D. "Stalling" and Evaluating Innovations

E. Time and Doing Field Studies

IX. Issues That Arise in Implementing Innovations

A. Evolutionary Development of Programs

B. Field Trials and Policy Inaction

C. Political Obstacles

X. Findings and Recommendations

A. The Results of Innovations

B. Findings for Nonrandomized Trials

C. Beneficial Small Effects

D. Costs and Time

E. Feasibility of Randomized Trials

F. Evolutionary Evaluations

G. Long-Run Development

H. Controlled Trials vs. Fooling Around

3. Making the Case for Randomized Assignment to Treatments by Considering the Alternatives: Six Ways in Which Quasi-Experimental Evaluations in Compensatory Education Tend to Underestimate Effects

I. Introduction

II. Common Sense and Scientific Knowing

III. Experimentation in Education

IV. Six Sources of Underadjustment Bias

A. Systematic Underadjustment of Preexisting Differences

B. Differential Growth Rates

C. Increases in Reliability with Age

D. Lower Reliability in the More Disadvantaged Group

E. Test Floor and Ceiling Effects

F. Grouping Feedback Effects

V. Summary Comments

4. Regression and Selection Models to Improve Nonexperimental Comparisons

I. Introduction

II. An Alternate Approach to Bias in Treatment Effects

III. Models Which Allow Unbiased Estimation

5. Field Trial Designs in Gauging the Impact of Fertility Planning Programs

I. Introduction

A. Purpose and Rationale

B. Perspectives

II. Field Studies of Fertility Program Impacts

A. The Nature of This Survey

B. Randomization in Sample Selection and Experimental Assignment

III. Important Aspects of Various Classes of Study Design Exemplified

A. Major Types of Design Employed

B. Patterns of Comparison in Population Program Impact Studies

C. Three Illustrative "True Experiments"

D. Quasi-Experiments Varying in Strength as to Evidence of Impact

E. Weaker Quasi-Experimental Designs

F. Correlational Analysis of Impact on Fertility Indices

G. "Preexperimental" or Post Hoc Studies

IV. Special Features Observed in Field Experiments

A. Main Features of the Twelve "True" Experiments

B. Features Brought Out in More Complex Experiments

V. Measures of Impact Used in Field Studies

VI. Summary and Conclusions

A. Résumé

B. Recommendations

C. Concluding Remarks

6. Experiments and Evaluations: A Reexamination

I. Introduction

A. A Definition

B. The Confusing Diversity of Current Evaluation Practice

II. Decision Analysis as a Paradigm for Evaluation Research

A. Stakes, as Well as Odds, Control Decisions

B. Inconsistent Values Held by Disagreeing Groups Control Most Decisions

C. The Decision-Theoretic Evaluation Framework

D. Multi-Attribute Utility Analyses

E. Interpersonal and Intergroup Disagreements

F. The Integration of Planning and Evaluation

III. Some Comments and Complaints, Mostly About Experimental Evaluations

A. What Is a Variable in a Social Program?

B. How to Aggregate the Effects of Heterogeneous Programs

C. Effect Size, Variance, and Variable Definition

D. What Can Happen When Large Effects Are Not Found

E. Causal Inferences

F. Using All the Data

G. Who Decides What Will Be Studied?

H. The Temporal Integration of Planning, Evaluation, and Program Changes

IV. Conclusion

7. Feedback in Social Systems: Operational and Systemic Research on Production, Maintenance, Control and Adaptive Functions

I. Introduction

II. Types of Feedback

A. Operational and Systemic Levels of Feedback

B. Social System Functions

C. Energic and Informational Forms of Feedback

III. The Development of Feedback

A. Around Production Problems

B. Around Maintenance Problems

C. Relationship to the Managerial and Political Structure

D. The Political Structure and Maintenance Problems

E. Adaptation Problems and Secondary Effects

IV. The Improvement of System Functioning Through Feedback

A. Direct vs. Indirect Feedback

B. Task Requirements as a Determinant of the Nature of Feedback Loops

C. Tying Feedback to System Functioning

8. Assessing Alternative Conceptions of Evaluation

I. Introduction

II. Determination of Impact

III. Evaluation and Experiment

A. Pilot Programs

B. Experimentation and Innovation

C. Comparative Evaluation and Program Evolution

IV. Decision vs. Understanding

V. Other Considerations Concerning Implementation

A. Role of the Evaluator

B. Use and Misuse of Evaluative Research Findings

VI. Some Suggested Conclusions and Recommendations

A. Management and Organizational Aspects of Evaluation

B. Ascertaining Program Impacts

C. Use of Information for Decision Making

VII. Sources of Ideas


© Academic Press 1975
