Description

Usability practitioners come from diverse backgrounds. Many lack formal training in statistics yet are being asked to quantify their usability improvements. Even those with a statistics background are often hesitant to analyze their data, unsure which statistical tests to use and how to defend small test sample sizes.

This book provides a practical guide to using statistics to solve common quantitative problems in user research. It addresses questions practitioners routinely face, such as: Is the current product more usable than our competition? Can we be sure at least 70% of users can complete the task on the first attempt? How long will it take users to purchase products on the website?

This book shows practitioners which test to use and provides a foundation in both the statistical theory and the best practices for applying it. The authors draw on decades of statistical literature from human factors, industrial engineering, and psychology, as well as their own published research, to provide the best solutions. They offer concrete solutions (Excel formulas, links to their own web calculators) along with an engaging discussion of why the tests work statistically and how to communicate the results effectively.
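To give a flavor of the kind of calculation covered, here is a sketch (not taken from the book's text) of an adjusted-Wald confidence interval for a completion rate, a method well suited to the small sample sizes typical of usability tests; the function name and example numbers are illustrative, not from the book:

```python
import math

def adjusted_wald_ci(successes, n, z=1.96):
    """Adjusted-Wald (Agresti-Coull) confidence interval for a proportion.

    Adds z^2/2 successes and z^2 trials before applying the standard
    Wald formula, which keeps coverage accurate even at small n.
    z=1.96 corresponds to a 95% confidence level.
    """
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    # Clamp to [0, 1] since a proportion cannot leave that range
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Hypothetical example: 9 of 12 users completed the task (75% observed)
low, high = adjusted_wald_ci(9, 12)
```

With 9 of 12 successes, the interval is roughly 46% to 92%, illustrating how wide plausible ranges are at usability-test sample sizes.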

Table of Contents

Dedication

Acknowledgments

About the Authors

Chapter 1. Introduction and How to Use This Book

Introduction

The Organization of This Book

How to Use This Book

Key Points from the Chapter

Chapter Review Questions

References

Chapter 2. Quantifying User Research

What Is User Research?

Data from User Research

Usability Testing

A/B Testing

Survey Data

Requirements Gathering

Key Points from the Chapter

References

Chapter 3. How Precise Are Our Estimates? Confidence Intervals

Introduction

Confidence Interval for a Completion Rate

Confidence Interval for Rating Scales and Other Continuous Data

Key Points from the Chapter

Chapter Review Questions

References

Chapter 4. Did We Meet or Exceed Our Goal?

Introduction

One-Tailed and Two-Tailed Tests

Comparing a Completion Rate to a Benchmark

Comparing a Satisfaction Score to a Benchmark

Comparing a Task Time to a Benchmark

Key Points from the Chapter

Chapter Review Questions

References

Chapter 5. Is There a Statistical Difference between Designs?

Introduction

Comparing Two Means (Rating Scales and Task Times)

Comparing Completion Rates, Conversion Rates, and A/B Testing

Key Points from the Chapter

Chapter Review Questions

References

Chapter 6. What Sample Sizes Do We Need? Part 1: Summative Studies

Introduction

Estimating Values

Comparing Values

What Can I Do to Control Variability?

Sample Size Estimation for Binomial Confidence Intervals

Sample Size Estimation for Chi-Square Tests (Independent Proportions)

Sample Size Estimation for McNemar Exact Tests (Matched Proportions)

Key Points from the Chapter

Chapter Review Questions

References

Chapter 7. What Sample Sizes Do We Need? Part 2: Formative Studies

Introduction

Details

No. of pages: 312
Language: English
Copyright: © 2012
Published:
Imprint: Morgan Kaufmann
eBook ISBN: 9780123849694
Print ISBN: 9780123849687

About the Authors

Jeff Sauro

Dr. Jeff Sauro is a Six Sigma-trained statistical analyst and founding principal of MeasuringU, a customer experience research firm based in Denver. For over fifteen years he has been conducting usability and statistical analysis for companies such as Google, eBay, Walmart, Autodesk, Lenovo, and Dropbox, or working for companies such as Oracle, Intuit, and General Electric. Jeff has published over twenty peer-reviewed research articles and five books, including Customer Analytics for Dummies. He publishes a weekly article on user experience and measurement online at measuringu.com. Jeff received his Ph.D. in Research Methods and Statistics from the University of Denver, his Master's in Learning, Design and Technology from Stanford University, and a B.S. in Information Management & Technology and a B.S. in Television, Radio and Film from Syracuse University. He lives with his wife and three children in Denver, CO.

Affiliations and Expertise

Usability Metrics and Statistical Analyst, Measuring Usability LLC, CO, USA

James Lewis

Dr. James R. (Jim) Lewis is a senior human factors engineer (at IBM since 1981) with a current focus on the measurement and evaluation of the user experience. He is a Certified Human Factors Professional with a Ph.D. in Experimental Psychology (Psycholinguistics), an M.A. in Engineering Psychology, and an M.M. in Music Theory and Composition. Jim is an internationally recognized expert in usability testing and measurement, having contributed (by invitation) the chapter on usability testing to the 3rd and 4th editions of the Handbook of Human Factors and Ergonomics, presented tutorials on usability testing and metrics at various professional conferences, and served as the keynote speaker at HCII 2014. He was the lead interaction designer for the product now regarded as the first smartphone, the IBM Simon, and is the author of Practical Speech User Interface Design. Jim is an IBM Master Inventor Emeritus with 88 patents issued to date by the US Patent Office. He serves on the editorial board of the International Journal of Human-Computer Interaction, is co-editor-in-chief of the Journal of Usability Studies, and is on the scientific advisory board of the Center for Research and Education on Aging and Technology Enhancement (CREATE). He is a member of the Usability Professionals Association (UPA), the Human Factors and Ergonomics Society (HFES), and the ACM Special Interest Group in Computer-Human Interaction (SIGCHI); a past president of the Association for Voice Interaction Design (AVIxD); and a 5th-degree black belt and certified instructor with the American Taekwondo Association (ATA).

Affiliations and Expertise

Senior Human Factors Engineer, IBM, FL, USA