Description

Principles of Big Data helps readers avoid the common mistakes that endanger all Big Data projects. By stressing simple, fundamental concepts, this book teaches readers how to organize large volumes of complex data and how to achieve data permanence when the content of the data is constantly changing. General methods for data verification and validation, as specifically applied to Big Data resources, are emphasized throughout the book. The book demonstrates how adept analysts can find relationships among data objects held in disparate Big Data resources when the data objects are endowed with semantic support (i.e., organized into classes of uniquely identified data objects). Readers will learn how their data can be integrated with data from other resources, and how data extracted from Big Data resources can be used for purposes beyond those imagined by the data creators.
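
The "semantic support" mentioned above, in which each data object carries a permanent unique identifier and an explicit class assignment, can be pictured with a short sketch. The Python snippet below is not drawn from the book; it is a minimal, hypothetical illustration (the function name, class name, and triple layout are all assumptions) of how two records held in separate resources can be related once both are expressed as identifier-property-value triples.

    import uuid

    def make_triples(obj_id, class_name, properties):
        """Express one data object as (identifier, property, value) triples."""
        triples = [(obj_id, "instance_of", class_name)]
        triples += [(obj_id, prop, value) for prop, value in properties.items()]
        return triples

    # Two hypothetical records held in separate Big Data resources.
    patient_id = str(uuid.uuid4())  # unique identifier, minted once and never reused
    resource_a = make_triples(patient_id, "Patient", {"year_of_birth": 1952})
    resource_b = make_triples(patient_id, "Patient", {"diagnosis_code": "C61"})

    # Because both resources use the same identifier, their triples can be
    # pooled and queried as assertions about a single data object.
    pooled = resource_a + resource_b
    print([(prop, value) for (subj, prop, value) in pooled if subj == patient_id])

For formal mechanisms that provide this kind of support, such as registered unique object identifiers and Resource Description Framework triples, see the chapters listed in the table of contents below.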

Key Features

• Learn general methods for specifying Big Data in a way that is understandable to humans and to computers.

• Avoid the pitfalls in Big Data design and analysis.

• Understand how to create and use Big Data safely and responsibly, guided by the laws, regulations, and ethical standards that apply to the acquisition, distribution, and integration of Big Data resources.

Readership

Data managers, data analysts, and statisticians

Table of Contents

Dedication

Acknowledgments

Author Biography

Preface

Introduction

Definition of Big Data

Big Data Versus Small Data

Whence Comest Big Data?

The Most Common Purpose of Big Data Is to Produce Small Data

Opportunities

Big Data Moves to the Center of the Information Universe

Chapter 1. Providing Structure to Unstructured Data

Background

Machine Translation

Autocoding

Indexing

Term Extraction

References

Chapter 2. Identification, Deidentification, and Reidentification

Background

Features of an Identifier System

Registered Unique Object Identifiers

Really Bad Identifier Methods

Embedding Information in an Identifier: Not Recommended

One-Way Hashes

Use Case: Hospital Registration

Deidentification

Data Scrubbing

Reidentification

Lessons Learned

References

Chapter 3. Ontologies and Semantics

Background

Classifications, the Simplest of Ontologies

Ontologies, Classes with Multiple Parents

Choosing a Class Model

Introduction to Resource Description Framework Schema

Common Pitfalls in Ontology Development

References

Chapter 4. Introspection

Background

Knowledge of Self

eXtensible Markup Language

Introduction to Meaning

Namespaces and the Aggregation of Meaningful Assertions

Resource Description Framework Triples

Reflection

Use Case: Trusted Time Stamp

Summary

References

Chapter 5. Data Integration and Software Interoperability

Background

The Committee to Survey Standards

Standard Trajectory

Specifications and Standards

Versioning

Compliance Issues

Interfaces to Big Data Resources

References

Chapter 6. Immutability

Details

No. of pages: 288
Language: English
Copyright: © 2013
Imprint: Morgan Kaufmann
Print ISBN: 9780124045767
Electronic ISBN: 9780124047242

About the author

Jules Berman

Jules Berman holds two bachelor of science degrees from MIT (Mathematics, and Earth and Planetary Sciences), a PhD from Temple University, and an MD from the University of Miami. He was a graduate researcher at the Fels Cancer Research Institute at Temple University and at the American Health Foundation in Valhalla, New York. He completed his postdoctoral studies at the U.S. National Institutes of Health and his residency at the George Washington University Medical Center in Washington, D.C. Dr. Berman served as Chief of Anatomic Pathology, Surgical Pathology, and Cytopathology at the Veterans Administration Medical Center in Baltimore, Maryland, where he held joint appointments at the University of Maryland Medical Center and at the Johns Hopkins Medical Institutions. In 1998, he transferred to the U.S. National Institutes of Health as a Medical Officer and as Program Director for Pathology Informatics in the Cancer Diagnosis Program at the National Cancer Institute. Dr. Berman is a past president of the Association for Pathology Informatics and the 2011 recipient of the association's Lifetime Achievement Award. He is a listed author on more than 200 scientific publications and has written more than a dozen books in his three areas of expertise: informatics, computer programming, and cancer biology. Dr. Berman is currently a freelance writer.

Affiliations and Expertise

Ph.D., M.D., freelance author with expertise in informatics, computer programming, and cancer biology

Reviews

"By stressing simple, fundamental concepts, this book teaches readers how to organize large volumes of complex data, and how to achieve data permanence when the content of the data is constantly changing. General methods for data verification and validation, as specifically applied to Big Data resources, are stressed throughout the book."--ODBMS.org, March 21, 2014
"The book is written in a colloquial style and is full of anecdotes, quotations from famous people, and personal opinions."--ComputingReviews.com, February 3, 2014
"The author has produced a sober, serious treatment of this emerging phenomenon, avoiding hype and gee-whiz cases in favor of concepts and mature advice. For example, the author offers ten distinctions between big data and small data, including such factors as goals, location, data structure, preparation, and longevity. This characterization provides much greater insight into the phenomenon than the standard 3V treatment (volume, velocity, and variety)."--ComputingReviews.com, October 3, 2013