Doing Bayesian Data Analysis

2nd Edition

A Tutorial with R, JAGS, and Stan

Author: John Kruschke
eBook ISBN: 9780124059160
Hardcover ISBN: 9780124058880
Imprint: Academic Press
Published Date: 3rd November 2014
Page Count: 776

Description

Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan, Second Edition provides an accessible approach to Bayesian data analysis, with material explained clearly through concrete examples. Included are step-by-step instructions on how to carry out Bayesian data analyses in the popular and free software R and WinBUGS, as well as new programs in JAGS and Stan. The new programs are designed to be much easier to use than the scripts in the first edition; in particular, there are now compact high-level scripts that make it easy to run the programs on your own data sets.

The book is divided into three parts and begins with the basics: models, probability, Bayes’ rule, and the R programming language. The discussion then applies those fundamentals to inferring a binomial probability, before concluding with chapters on the generalized linear model. Topics include a metric-predicted variable on one or two groups; a metric-predicted variable with one metric predictor; a metric-predicted variable with multiple metric predictors; a metric-predicted variable with one nominal predictor; and a metric-predicted variable with multiple nominal predictors. The exercises found in the text have explicit purposes and guidelines for accomplishment.
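To give a flavor of that workflow, below is a minimal sketch of inferring a binomial probability with R and the rjags package. It is not one of the book's own scripts; the coin-flip data, the Beta(1, 1) prior, and the chain settings are illustrative assumptions.

    # Minimal sketch: inferring a binomial probability with R and rjags.
    # Data, prior, and chain settings are illustrative, not from the book.
    library(rjags)

    # Bernoulli likelihood with a Beta(1, 1) prior on the coin bias theta.
    modelString <- "
    model {
      for (i in 1:N) {
        y[i] ~ dbern(theta)
      }
      theta ~ dbeta(1, 1)
    }
    "

    # Hypothetical coin flips: 1 = heads, 0 = tails.
    dataList <- list(y = c(1, 0, 1, 1, 0, 1, 1, 1), N = 8)

    # Compile the model, adapt, burn in, then draw posterior samples.
    jagsModel <- jags.model(textConnection(modelString),
                            data = dataList, n.chains = 3, n.adapt = 500)
    update(jagsModel, n.iter = 1000)  # burn-in
    codaSamples <- coda.samples(jagsModel, variable.names = "theta",
                                n.iter = 5000)
    summary(codaSamples)  # posterior mean and quantiles for theta

The summary of the merged chains approximates the posterior distribution of the underlying probability of heads; the book's actual scripts wrap steps like these in reusable high-level functions.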

This book is intended for first-year graduate students or advanced undergraduates in statistics, data analysis, psychology, cognitive science, social sciences, clinical sciences, and consumer sciences in business.

Key Features

  • Accessible, covering the essential concepts of probability and random sampling
  • Examples with R programming language and JAGS software
  • Comprehensive coverage of all scenarios addressed by non-Bayesian textbooks: t-tests, analysis of variance (ANOVA) and comparisons in ANOVA, multiple regression, and chi-square (contingency table analysis)
  • Coverage of experiment planning
  • R and JAGS computer programming code on website
  • Exercises have explicit purposes and guidelines for accomplishment
  • Provides step-by-step instructions on how to conduct Bayesian data analyses in the popular and free software R and WinBUGS

Readership

First-year Graduate Students and Advanced Undergraduate Students in Statistics, Data Analysis, Psychology, Cognitive Science, Social Sciences, Clinical Sciences and Consumer Sciences in Business.

Table of Contents

Chapter 1: What's in This Book (Read This First!)

  • 1.1 Real people can read this book
  • 1.2 What's in this book
  • 1.3 What's new in the second edition?
  • 1.4 Gimme feedback (Be polite)
  • 1.5 Thank you!

Part I: The Basics: Models, Probability, Bayes’ Rule, and R

Introduction

Chapter 2: Introduction: Credibility, Models, and Parameters

  • 2.1 Bayesian inference is reallocation of credibility across possibilities
  • 2.2 Possibilities are parameter values in descriptive models
  • 2.3 The steps of Bayesian data analysis
  • 2.4 Exercises

Chapter 3: The R Programming Language

  • 3.1 Get the software
  • 3.2 A simple example of R in action
  • 3.3 Basic commands and operators in R
  • 3.4 Variable types
  • 3.5 Loading and saving data
  • 3.6 Some utility functions
  • 3.7 Programming in R
  • 3.8 Graphical plots: Opening and saving
  • 3.9 Conclusion
  • 3.10 Exercises

Chapter 4: What is This Stuff Called Probability?

  • 4.1 The set of all possible events
  • 4.2 Probability: Outside or inside the head
  • 4.3 Probability distributions
  • 4.4 Two-way distributions
  • 4.5 Appendix: R code for Figure 4.1
  • 4.6 Exercises

Chapter 5: Bayes' Rule

  • 5.1 Bayes' rule
  • 5.2 Applied to parameters and data
  • 5.3 Complete examples: Estimating bias in a coin
  • 5.4 Why Bayesian inference can be difficult
  • 5.5 Appendix: R code for Figures 5.1, 5.2, etc.
  • 5.6 Exercises

Part II: All the Fundamentals Applied to Inferring a Binomial Probability

Introduction

Chapter 6: Inferring a Binomial Probability via Exact Mathematical Analysis

  • 6.1 The likelihood function: Bernoulli distribution
  • 6.2 A description of credibilities: The beta distribution
  • 6.3 The posterior beta
  • 6.4 Examples
  • 6.5 Summary
  • 6.6 Appendix: R code for Figure 6.4
  • 6.7 Exercises

Chapter 7: Markov Chain Monte Carlo

  • 7.1 Approximating a distribution with a large sample
  • 7.2 A simple case of the Metropolis algorithm
  • 7.3 The Metropolis algorithm more generally
  • 7.4 Toward Gibbs sampling: Estimating two coin biases
  • 7.5 MCMC representativeness, accuracy, and efficiency
  • 7.6 Summary
  • 7.7 Exercises

Chapter 8: JAGS

  • 8.1 JAGS and its relation to R
  • 8.2 A complete example
  • 8.3 Simplified scripts for frequently used analyses
  • 8.4 Example: difference of biases
  • 8.5 Sampling from the prior distribution in JAGS
  • 8.6 Probability distributions available in JAGS
  • 8.7 Faster sampling with parallel processing in runjags
  • 8.8 Tips for expanding JAGS models
  • 8.9 Exercises

Chapter 9: Hierarchical Models

  • 9.1 A single coin from a single mint
  • 9.2 Multiple coins from a single mint
  • 9.3 Shrinkage in hierarchical models
  • 9.4 Speeding up JAGS
  • 9.5 Extending the hierarchy: Subjects within categories
  • 9.6 Exercises

Chapter 10: Model Comparison and Hierarchical Modeling

  • 10.1 General formula and the Bayes factor
  • 10.2 Example: two factories of coins
  • 10.3 Solution by MCMC
  • 10.4 Prediction: Model averaging
  • 10.5 Model complexity naturally accounted for
  • 10.6 Extreme sensitivity to prior distribution
  • 10.7 Exercises

Chapter 11: Null Hypothesis Significance Testing

  • 11.1 Paved with good intentions
  • 11.2 Prior knowledge
  • 11.3 Confidence interval and highest density interval
  • 11.4 Multiple comparisons
  • 11.5 What a sampling distribution is good for
  • 11.6 Exercises

Chapter 12: Bayesian Approaches to Testing a Point (“Null”) Hypothesis

  • 12.1 The estimation approach
  • 12.2 The model-comparison approach
  • 12.3 Relations of parameter estimation and model comparison
  • 12.4 Estimation or model comparison?
  • 12.5 Exercises

Chapter 13: Goals, Power, and Sample Size

  • 13.1 The will to power
  • 13.2 Computing power and sample size
  • 13.3 Sequential testing and the goal of precision
  • 13.4 Discussion
  • 13.5 Exercises

Chapter 14: Stan

  • 14.1 HMC sampling
  • 14.2 Installing Stan
  • 14.3 A complete example
  • 14.4 Specify models top-down in Stan
  • 14.5 Limitations and extras
  • 14.6 Exercises

Part III: The Generalized Linear Model

Introduction

Chapter 15: Overview of the Generalized Linear Model

  • 15.1 Types of variables
  • 15.2 Linear combination of predictors
  • 15.3 Linking from combined predictors to noisy predicted data
  • 15.4 Formal expression of the GLM
  • 15.5 Exercises

Chapter 16: Metric-Predicted Variable on One or Two Groups

  • 16.1 Estimating the mean and standard deviation of a normal distribution
  • 16.2 Outliers and robust estimation: The t distribution
  • 16.3 Two groups
  • 16.4 Other noise distributions and transforming data
  • 16.5 Exercises

Chapter 17: Metric Predicted Variable with One Metric Predictor

  • 17.1 Simple linear regression
  • 17.2 Robust linear regression
  • 17.3 Hierarchical regression on individuals within groups
  • 17.4 Quadratic trend and weighted data
  • 17.5 Procedure and perils for expanding a model
  • 17.6 Exercises

Chapter 18: Metric Predicted Variable with Multiple Metric Predictors

  • 18.1 Multiple linear regression
  • 18.2 Multiplicative interaction of metric predictors
  • 18.3 Shrinkage of regression coefficients
  • 18.4 Variable selection
  • 18.5 Exercises

Chapter 19: Metric Predicted Variable with One Nominal Predictor

  • 19.1 Describing multiple groups of metric data
  • 19.2 Traditional analysis of variance
  • 19.3 Hierarchical Bayesian approach
  • 19.4 Including a metric predictor
  • 19.5 Heterogeneous variances and robustness against outliers
  • 19.6 Exercises

Chapter 20: Metric Predicted Variable with Multiple Nominal Predictors

  • 20.1 Describing groups of metric data with multiple nominal predictors
  • 20.2 Hierarchical Bayesian approach
  • 20.3 Rescaling can change interactions, homogeneity, and normality
  • 20.4 Heterogeneous variances and robustness against outliers
  • 20.5 Within-subject designs
  • 20.6 Model comparison approach
  • 20.7 Exercises

Chapter 21: Dichotomous Predicted Variable

  • 21.1 Multiple metric predictors
  • 21.2 Interpreting the regression coefficients
  • 21.3 Robust logistic regression
  • 21.4 Nominal predictors
  • 21.5 Exercises

Chapter 22: Nominal Predicted Variable

  • 22.1 Softmax regression
  • 22.2 Conditional logistic regression
  • 22.3 Implementation in JAGS
  • 22.4 Generalizations and variations of the models
  • 22.5 Exercises

Chapter 23: Ordinal Predicted Variable

  • 23.1 Modeling ordinal data with an underlying metric variable
  • 23.2 The case of a single group
  • 23.3 The case of two groups
  • 23.4 The case of metric predictors
  • 23.5 Posterior prediction
  • 23.6 Generalizations and extensions
  • 23.7 Exercises

Chapter 24: Count Predicted Variable

  • 24.1 Poisson exponential model
  • 24.2 Example: hair eye go again
  • 24.3 Example: interaction contrasts, shrinkage, and omnibus test
  • 24.4 Log-linear models for contingency tables
  • 24.5 Exercises

Chapter 25: Tools in the Trunk

  • 25.1 Reporting a Bayesian analysis
  • 25.2 Functions for computing highest density intervals
  • 25.3 Reparameterization
  • 25.4 Censored data in JAGS
  • 25.5 What next?

Details

No. of pages:
776
Language:
English
Copyright:
© Academic Press 2015
Published:
3rd November 2014
Imprint:
Academic Press
eBook ISBN:
9780124059160
Hardcover ISBN:
9780124058880

About the Author

John Kruschke

John K. Kruschke is Professor of Psychological and Brain Sciences, and Adjunct Professor of Statistics, at Indiana University in Bloomington, Indiana, USA. He is an eight-time winner of Teaching Excellence Recognition Awards from Indiana University. He won the Troland Research Award from the National Academy of Sciences (USA) and the Remak Distinguished Scholar Award from Indiana University. He has served on the editorial boards of several scientific journals, including Psychological Review, the Journal of Experimental Psychology: General, and the Journal of Mathematical Psychology.

After attending the Summer Science Program as a high school student and considering a career in astronomy, Kruschke earned a bachelor's degree in mathematics (with high distinction in general scholarship) from the University of California at Berkeley. As an undergraduate, he taught self-designed tutoring sessions for many math courses at the Student Learning Center. During graduate school he attended the 1988 Connectionist Models Summer School and earned a doctorate in psychology, also from U.C. Berkeley. He joined the faculty of Indiana University in 1989. Professor Kruschke's publications can be found at his Google Scholar page. His current research interests focus on moral psychology.

Professor Kruschke taught traditional statistical methods for many years until reaching a point, circa 2003, when he could no longer teach corrections for multiple comparisons with a clear conscience. The perils of p values provoked him to find a better way, and after only several thousand hours of relentless effort, the 1st and 2nd editions of Doing Bayesian Data Analysis emerged.

Affiliations and Expertise

Indiana University, Bloomington, USA

Reviews

"Both textbook and practical guide, this work is an accessible account of Bayesian data analysis starting from the basics…This edition is truly an expanded work and includes all new programs in JAGS and Stan designed to be easier to use than the scripts of the first edition, including when running the programs on your own data sets." --MAA Reviews,  Doing Bayesian Data Analysis, Second Edition

“fills a gaping hole in what is currently available, and will serve to create its own market” --Prof. Michael Lee, University of California, Irvine; President, Society for Mathematical Psychology
“has the potential to change the way most cognitive scientists and experimental psychologists approach the planning and analysis of their experiments” --Prof. Geoffrey Iverson, University of California, Irvine; past President, Society for Mathematical Psychology
“better than others for reasons stylistic... buy it -- it’s truly amazin’!” --James L. (Jay) McClelland, Lucie Stern Professor and Chair, Department of Psychology, Stanford University
“the best introductory textbook on Bayesian MCMC techniques” --Journal of Mathematical Psychology
“potential to change the methodological toolbox of a new generation of social scientists” --Journal of Economic Psychology
“revolutionary” --British Journal of Mathematical and Statistical Psychology
“writing for real people with real data. From the very first chapter, the engaging writing style will get readers excited about this topic” --PsycCritiques
