Introductory Statistics for Psychology - 1st Edition - ISBN: 9780124454804, 9781483257860

Introductory Statistics for Psychology

1st Edition

The Logic and the Methods

Author: Gustav Levine
eBook ISBN: 9781483257860
Imprint: Academic Press
Published Date: 1st January 1981
Page Count: 512

Description

Introductory Statistics for Psychology: The Logic and the Methods presents the concepts of experimental design carefully interwoven with the statistical material. The book emphasizes the verbalization of conclusions from experiments as a further means of communicating the reasons for statistical analyses.

Organized into 17 chapters, the book covers topics such as alternative ways of stating the conclusions from a significant interaction, the analysis of variance, and the summation sign and its use. Other chapters treat the frequency distribution, defined as any presentation of data that gives the frequency with which each score occurs. The book also discusses the differences among people, which are a constant source of variability in test scores and in most other measurements of people. An appendix reviews the working knowledge of arithmetic and elementary algebra assumed throughout the text.
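To make the frequency-distribution idea above concrete, here is a minimal Python sketch (illustrative only, not taken from the book; the scores and frequencies are hypothetical) that computes the mean of a small frequency distribution by weighting each score by the number of times it occurs:

# Illustrative sketch, not from the book: mean of a frequency distribution.
# Each distinct score is paired with the number of times it occurs.
scores_and_freqs = {3: 2, 4: 5, 5: 8, 6: 4, 7: 1}   # hypothetical score -> frequency

n = sum(scores_and_freqs.values())                                # total number of scores (20)
total = sum(score * f for score, f in scores_and_freqs.items())   # sum of all scores (97)
mean = total / n                                                  # 4.85

print(f"N = {n}, sum of scores = {total}, mean = {mean:.2f}")

Listing all 20 scores individually and averaging them would give the same result; grouping by frequency simply shortens the computation.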

This book is a valuable resource for students and psychologists.

Table of Contents


One Introduction

Relationships Between Variables

Restricting Questions to Two Variables at a Time

Controlling a Variable

Experimental Manipulation of the Controlled Variable

Classification of the Controlled Variable

Independent and Dependent Variables

The Degree of Relationship Between Variables

The Goals of Psychological Research

The Place of Statistics in Psychology

Descriptive Statistics

Inferential Statistics

Two The Average

The Mean

The Median

The Middle Rank and the Median

Choosing Between the Mean and the Median

The Mode

Summary Comparison

The Symbols in Statistical Formulas

The Variables X and Y

Subscripts for Variables

The Rules of Summation

The First Rule of Summation

The Second Rule of Summation

The Third Rule of Summation

The Sum of the Deviations from the Mean Equals Zero

The Logic and Purpose of a Proof

Three Frequency Distributions

Advantages of Frequency Distributions

Computing the Mean of a Frequency Distribution

Graphs

Modal Peaks

Skewness

Continuous Distributions

Histograms

Improper Uses of Graphs

Graphing Relationships Between Variables

Grouped Data

The Interval Size in a Grouped Frequency Distribution

The Range of a Distribution

Choosing the Size and Number of Intervals

Zero Frequencies

Unequal Intervals

Graphing Grouped Data

Computing the Mean with Grouped Data

Cumulative Frequency Distributions

Graphs of Cumulative Frequency Distributions

Four Percentiles

Computing Percentile Ranks of Raw Scores

Computing Percentile Ranks in Grouped Frequency Distributions

The Use of Percentile Ranks

The Use of Percentiles

Deciles

Quartiles

Computing Percentiles

Computing the Median as the 50th Percentile

Five Variability

Populations versus Samples

Infinite Populations

Parameters versus Statistics

Random Samples

Measures of Variability from the Complete Population

The Range

The Mean Deviation

The Variance

The Standard Deviation

Sample Estimates of Variability

Degrees of Freedom

The Estimate of the Variance

The Estimate of the Standard Deviation

Computational Formulas for Variance and Standard Deviation

Proving the Equality of Defining and Computational Formulas

Computational Formulas for Samples

Contrasting Defining and Computational Formulas

Computations with Frequency Distributions

Six z Scores and Effects of Linear Transformations

Adding a Constant Value to the Scores of a Distribution

The Variance and Standard Deviation are Unchanged by Addition of a Constant

Multiplying the Scores of a Distribution by a Constant

Changes in the Variance and Standard Deviation

Effects of z Score Transformations

Seven Probability

The Sample Space

Events and Sample Points

The Axioms of Probability

Probability as a Closed System

Equal Probabilities, Theoretically Assigned

Complementary Events

Summing Mutually Exclusive Events ("Or Relations")

Joint Events ("And Relations")

Comparing Theoretical and Empirical Probabilities

Empirical Basis of Probability

Eight The Binomial Distribution

Reaching Conclusions from Unlikely Events

An Empirical Model of Chance

Rejecting Initial Assumptions

The Null Hypothesis

A Theoretical Probability Distribution for Coin Tossing

Stating the Distribution as an Equation

The Binomial Coefficient

Theoretical Analysis of the Binomial Distribution

Assumptions of the Binomial Distribution

The Binomial Distribution as a Model of Survival in Illness

Critical Values

Type I Errors

Type II Errors

Uncertainty About Errors

Statistical Significance

Controlling the Probability of Being Wrong

Verbalizing Statistically-Based Conclusions

Nine The Normal Distribution

Defining Probabilities in Continuous Distributions

The Defining Equation for the Normal Distribution

The Normal Distribution of z Scores

Using the Table of Probabilities Under the Normal Curve

Sample Means as Estimates of Population Means

The Standard Error of the Mean

The Normal Distribution of Sample Means

The Central Limit Theorem

Using the Normal Distribution for Statistical Inference

Directional versus Nondirectional Hypotheses

Nondirectional Hypotheses (Two-Tailed Tests of Significance)

Graphic Presentation of Type II Error Probabilities

Conditions for Using a One-Tailed Test of Significance

Doubt About the Use of One-Tailed Tests of Significance

Defense of One-Tailed Tests

Summary of the Issues in One- versus Two-Tailed Tests of Significance

Ten The t Distribution

Using the t Distribution for Statistical Inference

The Table for the t Distribution

Matched-Pair t Tests

Paired Scores from Different Subjects

t Test for the Difference Between Two Means

The Null Hypothesis when Comparing Two Means

The Standard Error of the Difference Between Two Means

Degrees of Freedom when Testing the Difference Between Means

Working with Different Sample Sizes

The Power of t Tests

Sample Size and Power of a t Test

A Note on Assumptions

Eleven Correlation

Degree of Relationship

Linear Relationships

Correlation and Slope

Negative Correlation

The Correlation Coefficient and Its Values

Cross Products and the Covariance

Correlation with z Scores

An Interpretation of Correlation

Correlation and Causation

The Point Biserial Correlation Coefficient

Statistical Inference in Correlation

Testing Sampled Correlations for Significance

Prediction from Regression Lines

Obtaining the Slope with ρxy

Regression Toward the Mean

A Note About Assumptions

Twelve Correlation and Tests

Reliability

Values for Reliability Coefficients

Sample Size in the Assessment of Reliability

Reliability and Number of Test Items

Test-Retest Reliability

The Alternate Test Form Reliability Coefficient

The Split-Half Reliability Coefficient

Coefficient Alpha

Comparisons of the Reliability Coefficients

Validity

Testing Validity Through Tests of Significance

Reliability versus Validity

Thirteen Analysis of Variance

Experimental Manipulation versus Classification

Summary of When to Use Analysis of Variance

Control of the Independent Variable

Conclusions About Cause and Effect

The Group Mean as an Index of Treatment Effects

The Null Hypothesis in Analysis of Variance

Random Variability Within a Group

Random Variability Between Means

Using Variability to Detect Treatment Effects

Two Different Variance Estimates as Measures of Variability

The F Distribution

Double Subscript Notation in Analysis of Variance

The Within-Groups Variance

The Within-Groups Sum of Squares

The Within-Groups Degrees of Freedom

The Computational Formula for the Within-Groups Mean Square

The Between-Groups Variance

The Between-Groups Mean Square

The Computational Formula for the Between-Groups Mean Square

The F Ratio and Mean Squares

The Table for Critical Values of F

The Total Sum of Squares and Total Degrees of Freedom

A Summary Table for Analysis of Variance

Computations with Unequal n

A Note on Assumptions

A Note on the Importance of This Chapter

Fourteen Statistics Following Significance

Degree of Relationship in Analysis of Variance

Sources of Variance in the Population of Dependent Variable Scores

Estimating the Variance Due to Treatment Effects

An Estimate of the Intraclass Correlation Coefficient

Computational Form for Estimating the Intraclass Correlation Coefficient

Omega-Squared

Multiple Comparisons

The t Test as a Basis for Multiple Comparisons

Adjusting the Type I Error Probability

When to Use the Experimentwise Criterion for the Type I Error

Fifteen Two-Factor Analysis of Variance

Subscript Notation in Multifactor Analysis of Variance

Cells

Means of Cells, Columns, and Rows

Main Effects

Simple Effects

Interactions

Interpreting Interactions

MSw in the Two-Factor Design

F Tests in the Two-Factor Design

Computation in the Two-Factor Design

Designs with More than Two Factors

Repeated Measures

Statistical Models in Analysis of Variance

Omega Squared in the Two-Factor Design

Multiple Comparisons in the Two-Factor Design

Illustration of Multiple Comparisons for a Main Effect

Illustration of Multiple Comparisons for Simple Effects

Sixteen Chi-Square

The Chi-Square Statistic and the Null Hypothesis

Expected Frequencies in Chi-Square

Computing the Chi-Square

The Chi-Square Distribution and Degrees of Freedom

Chi-Square with a 2 x 2 Contingency Table

Single Variable Problems (The Goodness of Fit Test)

Restrictions on the Use of Chi-Square

Single Subject Chi-Square

Degree of Relationship in Chi-Square

Computing the Degree of Relationship

Seventeen Postscript (Choosing a Statistic)

Appendix A: Some Useful Principles of Elementary Algebra

Appendix B: Tables

Table 1: Table of Squares, Square Roots, and Reciprocals

Table 2: Table of Random Numbers

Table 3: Table of Probabilities Under the Normal Curve

Table 4: The Critical Values of t

Table 5: The Critical Values of the Pearson r

Table 6: The Critical Values of F

Table 7: The Critical Values of the Dunn Multiple Comparison Test

Table 8: The Critical Values of Chi-Square

Appendix C: Answers to Chapter Problems

Glossary of Symbols

Index

Details

No. of pages: 512
Language: English
Copyright: © Academic Press 1981
Published: 1st January 1981
Imprint: Academic Press
eBook ISBN: 9781483257860

About the Author

Gustav Levine
