Statistics for Experimentalists

1st Edition - January 1, 1969

  • Author: B. E. Cooper
  • eBook ISBN: 9781483280523

Description

Statistics for Experimentalists aims to provide experimental scientists with a working knowledge of statistical methods and sound approaches to the analysis of data. The book first elaborates on probability and continuous probability distributions. Discussions focus on properties of continuous random variables and normal variables, independence of two random variables, central moments of a continuous distribution, prediction from a normal distribution, binomial probabilities, and the multiplication of probabilities and independence. The text then examines estimation and tests of significance. Topics include estimators and estimates, expected values, minimum variance linear unbiased estimators, sufficient estimators, the methods of maximum likelihood and least squares, and the test of significance method. Later chapters take up distribution-free tests, the Poisson process and counting problems, correlation and function fitting, balanced incomplete randomized block designs, the analysis of covariance, and experimental design. The publication is a valuable reference for statisticians and researchers interested in the use of statistical methods.

Table of Contents


  • Preface

    Chapter 1. Introduction

    1.1. Experimental Results

    1.2. Experimental Design

    1.3. The Power of an Experiment

    1.4. Generality of the Conclusions

    Chapter 2. Probability

    2.1. The Classical Definition of Probability

    2.2. Population and Sample

    2.3. Probability Distributions

    2.4. Probability Notation

    2.5. Addition of Probabilities

    2.6. Multiplication of Probabilities and Independence

    2.7. Binomial Probabilities

    2.8. Discrete and Continuous Distributions

    2.9. Mean and Variance of a Discrete Probability Distribution

    2.10. Crude and Central Moments of Discrete Probability Distributions

    2.11. Modal Value of a Discrete Probability Distribution

    Chapter 3. Continuous Probability Distributions

    3.1. The Normal Probability Distribution

    3.2. Mean and Variance of a Continuous Probability Distribution

    3.3. Prediction from a Normal Distribution

    3.4. Percentage Points and Significance Levels

    3.5. Central Moments of a Continuous Distribution

    3.6. Notation

    3.7. Independence of Two Random Variables

    3.8. Properties of Normal Variables

    3.9. Properties of Continuous Random Variables

    3.10. The Central Limit Theorem

    3.11. Distributions Other than Normal

    3.12. The Chi-Squared Distribution

    Chapter 4. Estimation

    4.1. The Random Sample

    4.2. Estimators and Estimates

    4.3. Expected Values

    4.4. Unbiased Estimators

    4.5. Minimum Variance Linear Unbiased Estimators

    4.6. Efficiency Ratio

    4.7. Consistent Estimators

    4.8. Sufficient Estimators

    4.9. The Method of Maximum Likelihood

    4.10. The Joint Estimation of Two Parameters

    4.11. The Method of Least Squares

    4.12. The Methods of Maximum Likelihood and Least Squares

    Appendix

    Chapter 5. Tests of Significance—I

    5.1. The Test of Significance Method

    5.2. Is the Population Mean Equal to a Particular Value?

    5.3. Is the Population Variance Equal to a Particular Value?

    5.4. Is the Population Distribution of a Particular Form?

    Chapter 6. Tests of Significance—II

    6.1. Are the Means of Two Populations Equal?

    6.2. Are the Variances of Two Populations Equal?

    6.3. Are the Variances of More than Two Populations Equal?

    Chapter 7. Tests of Significance—III

    7.1. Are the Distributions of Several Populations Identical?

    7.2. The 2 × 2 Table

    7.3. A Test for Independence

    7.4. Testing Particular Group Proportions

    7.5. Are the Means of Several Populations Equal?

    7.6. Robustness of Tests Assuming Normal Populations

    7.7. Transformations and Distribution-Free Tests

    Chapter 8. Analysis of Variance—I. Hierarchical Designs

    8.1. Are the Means of Two or More Populations Equal?

    8.2. Fixed-Effect Model for a Hierarchical Design with Three Levels

    8.3. Example of a Four-Level Hierarchical Design (Mixed Model)

    8.4. The General Method of Computation

    8.5. Further Reading on Analysis of Variance

    Chapter 9. Analysis of Variance—II. Factorial Designs

    9.1. Two-Way Factorial Experiment Without Replication (Fixed-Effect Model)

    9.2. Two-Way Experiment with Replication (Fixed-Effect Model)

    9.3. Two-Way Factorial Experiment (Random-Effect Models)

    9.4. Two-Way Factorial Experiment with Replication (Mixed Model)

    9.5. Replicates or Repeats

    9.6. Three-Way Factorial Experiment (Fixed-Effect Model)

    9.7. An Unsymmetrical Factor

    9.8. Replicated Three-Way Factorial Experiment (Random-Effect Model)

    9.9. Main Effects and Interactions

    9.10. Unequal Replication

    9.11. Variability Estimate from Another Source

    9.12. Variations of a Factorial Experiment

    9.13. Polynomial Partitioning

    Chapter 10. Experimental Design

    10.1. Example Experiment

    10.2. Complete Randomization

    10.3. Randomized Blocks

    10.4. The Latin Square

    10.5. The Graeco-Latin Squares

    10.6. Further Alphabets

    Chapter 11. Balanced Incomplete Randomized Block Designs and the Analysis of Covariance

    11.1. Balanced Incomplete Randomized Block Designs

    11.2. A Youden Square

    11.3. Other Incomplete Block Designs

    11.4. Analysis of Covariance (Concomitant Observations)

    Chapter 12. Correlation and Function Fitting

    12.1. The Correlation Coefficient

    12.2. Function Fitting

    12.3. Function Fitting (Situation 1)

    12.4. Function Fitting (Situation 2)

    12.5. Function Fitting (Situation 3)

    12.6. Function Fitting (Situation 4)

    12.7. Function Fitting (Situation 5)

    12.8. Fitting Other Functions

    12.9. Multiple Regression

    12.10. Polynomial Regression

    12.11. Multiple and Polynomial Regression in Situations 2 to 5

    12.12. Fitting Other Functions

    Appendix. Matrix Algebra

    Chapter 13. The Poisson Process and Counting Problems

    13.1. The Poisson Process

    13.2. A Single Count

    13.3. A Number of Counts of the Same Source

    13.4. Function Fitting Involving Counts

    Chapter 14. Distribution-Free Tests

    14.1. The Randomization Method

    14.2. The Ranking Method

    14.3. Two-Sample Location Rank Tests

    14.4. Two-Sample Dispersion Rank Tests

    14.5. Two-Sample Distribution Rank Tests

    14.6. Power Comparisons for Location Tests

    14.7. Location Rank Tests for Many Samples

    14.8. Two-Way Factorial Analysis by Ranks

    14.9. Rank Correlation

    Hints and Answers to Problems

    References

    Tables Section

    1. The Normal Distribution

    2. Significance Points for Student's t-Distribution

    3. Non-Central t-Distribution

    4. Significance Points for the Chi-Squared Distribution

    5. Significance Points for Welch's Test

    6. Significance Points for the F-Distribution (Variance Ratio)

    7. Significance Points for the Sample Correlation Coefficient when ρ = 0

    8. Significance Points for the Maximum F-Ratio

    9. Significance Points for Wilcoxon's Test in Mann-Whitney Form

    Index

Product details

  • No. of pages: 324
  • Language: English
  • Copyright: © Pergamon 1969
  • Published: January 1, 1969
  • Imprint: Pergamon
  • eBook ISBN: 9781483280523

About the Author

B. E. Cooper
