Managing Gigabytes

Compressing and Indexing Documents and Images, Second Edition

2nd Edition - May 3, 1999

  • Authors: Ian Witten, Alistair Moffat, Timothy Bell
  • Hardcover ISBN: 9781558605701

Description

In this fully updated second edition of the highly acclaimed Managing Gigabytes, authors Witten, Moffat, and Bell continue to provide unparalleled coverage of state-of-the-art techniques for compressing and indexing data. Whatever your field, if you work with large quantities of information, this book is essential reading--an authoritative theoretical resource and a practical guide to meeting the toughest storage and access challenges. It covers the latest developments in compression and indexing and their application on the Web and in digital libraries. It also details dozens of powerful techniques supported by mg, the authors' own system for compressing, storing, and retrieving text, images, and textual images. mg's source code is freely available on the Web.

Key Features

  • Up-to-date coverage of new text compression algorithms such as block sorting, approximate arithmetic coding, and fast Huffman coding
  • New sections on content-based index compression and distributed querying, with two new data structures for fast indexing
  • New coverage of image coding, including descriptions of de facto standards in use on the Web (GIF and PNG), information on CALIC, the new proposed JPEG Lossless standard, and JBIG2
  • New information on the Internet and WWW, digital libraries, web search engines, and agent-based retrieval
  • Accompanied by a public domain system called MG which is a fully worked-out operational example of the advanced techniques developed and explained in the book
  • New appendix on an existing digital library system that uses the MG software

Table of Contents

  • PREFACE




    1. OVERVIEW

    1.1 Document databases

    1.2 Compression

    1.3 Indexes

    1.4 Document images

    1.5 The MG system

    1.6 Further reading




    2. TEXT COMPRESSION

    2.1 Models

    2.2 Adaptive models

    2.3 Huffman coding

    2.4 Arithmetic coding

    2.5 Symbolwise models

    2.6 Dictionary models

    2.7 Synchronization

    2.8 Performance comparisons

    2.9 Further reading




    3. INDEXING

    3.1 Sample document collections

    3.2 Inverted file indexing

    3.3 Inverted file compression

    3.4 Performance of index compression methods

    3.5 Signature files and bitmaps

    3.6 Comparison of indexing methods

    3.7 Case folding, stemming, and stop words

    3.8 Further reading




    4. QUERYING

    4.1 Accessing the lexicon

    4.2 Partially specified query terms

    4.3 Boolean query processing

    4.4 Ranking and information retrieval

    4.5 Evaluating retrieval effectiveness

    4.6 Implementation of the cosine measure

    4.7 Interactive retrieval

    4.8 Distributed retrieval

    4.9 Further reading




    5. INDEX CONSTRUCTION

    5.1 Memory-based inversion

    5.2 Sort-based inversion

    5.3 Exploiting index compression

    5.4 Compressed in-memory inversion

    5.5 Comparison of inversion methods

    5.6 Constructing signature files and bitmaps

    5.7 Dynamic collections

    5.8 Further reading




    6. IMAGE COMPRESSION

    6.1 Types of image

    6.2 The CCITT fax standard for bi-level images

    6.3 Context-based compression of bi-level images

    6.4 JBIG: A standard for bi-level images

    6.5 Lossless compression of continuous-tone images

    6.6 JPEG: A standard for continuous-tone images

    6.7 Progressive transmission of images

    6.8 Summary of image compression techniques

    6.9 Further reading




    7. TEXTUAL IMAGES

    7.1 The idea of textual image compression

    7.2 Lossy and lossless compression

    7.3 Extracting marks

    7.4 Template matching

    7.5 From marks to symbols

    7.6 Coding the components of a textual image

    7.7 Performance: lossy and lossless modes

    7.8 System considerations

    7.9 JBIG2: A standard for textual image compression

    7.10 Further reading




    8. MIXED TEXT AND IMAGES

    8.1 Orientation

    8.2 Segmentation

    8.3 Classification

    8.4 Further reading




    9. IMPLEMENTATION

    9.1 Text compression

    9.2 Text compression performance

    9.3 Images and textual images

    9.4 Index construction

    9.5 Index compression

    9.6 Query processing

    9.7 Further reading




    10. THE INFORMATION EXPLOSION

    10.1 Two millennia of information

    10.2 The Internet: a global information resource

    10.3 The paper problem

    10.4 Coping with the information explosion

    10.5 Digital libraries

    10.6 Managing gigabytes better

    10.7 Small is beautiful

    10.8 Personal information support for life

    10.9 Further reading




    A. GUIDE TO THE MG SYSTEM

    A.1 Installing the MG system

    A.2 A sample storage and retrieval session

    A.3 Database creation

    A.4 Querying a collection

    A.5 Nontextual files

    A.6 Image compression programs



    B. GUIDE TO THE NZDL

    B.1 What's in the NZDL?

    B.2 How the NZDL works

    B.3 Implications

    B.4 Further reading



    REFERENCES

    INDEX

Product details

  • No. of pages: 550
  • Language: English
  • Copyright: © Morgan Kaufmann 1999
  • Published: May 3, 1999
  • Imprint: Morgan Kaufmann
  • Hardcover ISBN: 9781558605701

About the Authors

Ian Witten

Ian H. Witten is a professor of computer science at the University of Waikato in New Zealand. He directs the New Zealand Digital Library research project. His research interests include information retrieval, machine learning, text compression, and programming by demonstration. He received an MA in Mathematics from Cambridge University, England; an MSc in Computer Science from the University of Calgary, Canada; and a PhD in Electrical Engineering from Essex University, England. He is a fellow of the ACM and of the Royal Society of New Zealand. He has published widely on digital libraries, machine learning, text compression, hypertext, speech synthesis and signal processing, and computer typography. He has written several books, the latest being Managing Gigabytes (1999) and Data Mining (2000), both from Morgan Kaufmann.

Affiliations and Expertise

Professor, Computer Science Department, University of Waikato, New Zealand

Alistair Moffat

Affiliations and Expertise

University of Melbourne, Australia

Timothy Bell

Affiliations and Expertise

University of Canterbury, New Zealand
