Machine Reading Comprehension

1st Edition

Algorithms and Practice

Author: Chenguang Zhu
eBook ISBN: 9780323901192
Paperback ISBN: 9780323901185
Imprint: Elsevier
Published Date: 20th March 2021
Page Count: 270

Description

Machine reading comprehension (MRC) is a cutting-edge technology in natural language processing (NLP). MRC has advanced rapidly in recent years, surpassing human performance on several public datasets, and it has been widely deployed by industry in search engines and question answering systems. Machine Reading Comprehension: Algorithms and Practice offers a deep dive into MRC and the complex tasks this technology involves. The book presents the fundamentals of NLP and deep learning before introducing the MRC task, its models, and its applications. It combines theoretical treatment of models with detailed code analysis and considers real-world industry applications. Coverage includes basic concepts, tasks, datasets, NLP tools, deep learning models and architectures, and insight from hands-on experience, along with the latest advances from the past two years of research. Structured into three parts and eight chapters, the book covers the foundations of MRC, MRC model architectures, and hands-on issues in application. It is a comprehensive resource for researchers in industry and academia who want to understand and deploy machine reading comprehension within natural language processing.
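
The extractive question answering at the core of MRC can be tried in just a few lines. The sketch below is not from the book; it uses the open-source Hugging Face transformers library (an assumed dependency, not the author's code) with a BERT-style model fine-tuned on SQuAD, the dataset family discussed in Chapter 1.

    # A minimal sketch of extractive machine reading comprehension.
    # Assumes the open-source Hugging Face `transformers` package
    # (pip install transformers); illustrative only, not the book's own code.
    from transformers import pipeline

    # A BERT-style model fine-tuned on SQuAD, the dataset family of Chapter 1.
    qa = pipeline("question-answering",
                  model="distilbert-base-cased-distilled-squad")

    context = ("Machine reading comprehension asks a model to read a passage "
               "and answer questions about it. Extractive MRC models return "
               "a span of the passage as the answer.")
    result = qa(question="What do extractive MRC models return?", context=context)
    print(result["answer"], result["score"])  # prints the extracted span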

Key Features

  • Presents the first comprehensive resource on machine reading comprehension (MRC)
  • Performs a deep-dive into MRC, from fundamentals to latest developments
  • Offers the latest thinking and research in the field of MRC, including the BERT model
  • Provides theoretical discussion, code analysis, and real-world applications of MRC
  • Gives insight from research that has surpassed human performance in MRC

Readership

Researchers working on NLP, and particularly on MRC, in both industry and academia; postgraduate and advanced students in machine learning, deep learning, NLP, and related areas of computer science.

Table of Contents

Part I: Foundation
Chapter 1 Introduction to Machine Reading Comprehension
1.1 The Machine Reading Comprehension Task
1.1.1 History of Machine Reading Comprehension
1.1.2 Application of Machine Reading Comprehension
1.2 Natural Language Processing
1.2.1 The Status Quo of NLP
1.2.2 Existing Issues
1.3 Deep Learning
1.3.1 Features of Deep Learning
1.3.2 Achievements of Deep Learning
1.4 Evaluation of Machine Reading Comprehension
1.4.1 Answer Forms
1.4.2 ROUGE: Metric for Evaluating Freestyle Answers
1.5 MRC Datasets
1.5.1 Single-paragraph Datasets
1.5.2 Multi-paragraph Datasets
1.5.3 Corpus-based Datasets
1.6 How to Make an MRC Dataset
1.6.1 Generation of Articles and Questions
1.6.2 Generation of Correct Answers
1.6.3 How to Build a High-quality MRC Dataset
1.7 Summary
Chapter 2 The Basics of Natural Language Processing
2.1 Tokenization
2.1.1 Byte Pair Encoding
2.2 The Cornerstone of NLP: Word Vectors
2.2.1 Word Vectorization
2.2.2 Word2vec
2.3 Linguistic Tagging
2.3.1 Named Entity Recognition
2.3.2 Part-of-Speech Tagging
2.4 Language Model
2.4.1 N-gram Model
2.4.2 Evaluation of Language Models
2.5 Summary
Chapter 3 Deep Learning in Natural Language Processing
3.1 From Word Vector to Text Vector
3.1.1 Using the Final State of RNN
3.1.2 CNN and Pooling
3.1.3 Parametrized Weighted Sum
3.2 Answer Multiple-choice Questions: Natural Language Understanding
3.2.1 Network Structure
3.2.2 Implementing Text Classification
3.3 Write an Article: Natural Language Generation
3.3.1 Network Architecture
3.3.2 Implementing Text Generation
3.3.3 Beam Search
3.4 Keep Focused: Attention Mechanism
3.4.1 Attention Mechanism
3.4.2 Implementing Attention Function
3.4.3 Sequence-to-sequence Model
3.5 Summary
Part II: Architecture
Chapter 4 Architecture of MRC Models
4.1 General Architecture of MRC Models
4.2 Encoding Layer
4.2.1 Establishing the Dictionary
4.2.2 Character Embeddings
4.2.3 Contextual Embeddings
4.3 Interaction Layer
4.3.1 Cross-Attention
4.3.2 Self-Attention
4.3.3 Contextual Embeddings
4.4 Output Layer
4.4.1 Construct the Question Vector
4.4.2 Generate Multiple-choice Answers
4.4.3 Generate Extractive Answers
4.4.4 Generate Freestyle Answers
4.5 Summary
Chapter 5 Common MRC Models
5.1 Bi-Directional Attention Flow Model
5.1.1 Encoding Layer
5.1.2 Interaction Layer
5.1.3 Output Layer
5.2 R-Net
5.2.1 Gated Attention-based Recurrent Network
5.2.2 Encoding Layer
5.2.3 Interaction Layer
5.2.4 Output Layer
5.3 FusionNet
5.3.1 History of Word
5.3.2 Fully-aware Attention
5.3.3 Encoding Layer
5.3.4 Interaction Layer
5.3.5 Output Layer
5.4 Essential-term-aware Retriever-Reader
5.4.1 Retriever
5.4.2 Reader
5.5 Summary
Chapter 6 Pre-trained Language Model
6.1 Pre-trained Models and Transfer Learning
6.2 Translation-based Pre-trained Language Model: CoVe
6.2.1 Machine Translation Model
6.2.2 Contextual Embeddings
6.3 Pre-trained Language Model ELMo
6.3.1 Bi-directional Language Model
6.3.2 How to Use ELMo
6.4 The Generative Pre-Training Language Model: GPT
6.4.1 Transformer
6.4.2 GPT
6.4.3 Apply GPT
6.5 The Phenomenal Pre-Trained Language Model: BERT
6.5.1 Masked Language Model
6.5.2 Next Sentence Prediction
6.5.3 Configurations of BERT Pre-training
6.5.4 Fine-tuning BERT
6.5.5 Improving BERT
6.5.6 Implementing BERT Fine-tuning in MRC
6.6 Summary
Part III: Application
Chapter 7 Code Analysis of SDNet Model
7.1 Multi-turn Conversational MRC Model: SDNet
7.1.1 Encoding Layer
7.1.2 Interaction Layer and Output Layer
7.2 Introduction to Code
7.2.1 Code Structure
7.2.2 How to Run the Code
7.2.3 Configuration File
7.3 Pre-processing
7.3.1 Initialization
7.3.2 Pre-processing
7.4 Training
7.4.1 Base Class
7.4.2 Subclass
7.5 Batch Generator
7.5.1 Padding
7.5.2 Preparing Data for BERT
7.6 SDNet Model
7.6.1 Network Class
7.6.2 Network Layers
7.6.3 Generate BERT Embeddings
7.7 Summary
Chapter 8 Applications and Future of Machine Reading Comprehension
8.1 Intelligent Customer Service
8.1.1 Building Product Knowledge Base
8.1.2 Intent Understanding
8.1.3 Answer Generation
8.1.4 Other Modules
8.2 Search Engine
8.2.1 Search Engine Technology
8.2.2 MRC in Search Engine
8.2.3 Challenges and Future of MRC in Search Engine
8.3 Health Care
8.4 Laws
8.4.1 Automatic Judgement
8.4.2 Crime Classification
8.5 Finance
8.5.1 Predicting Stock Prices
8.5.2 News Summarization
8.6 Education
8.7 The Future of Machine Reading Comprehension
8.7.1 Challenges
8.7.2 Commercialization
8.8 Summary
Appendices
Appendix A Machine Learning Basics
A.1 Types of Machine Learning
A.2 Model and Parameters
A.3 Generalization and Overfitting
Appendix B Deep Learning Basics
B.1 Neural Network
B.1.1 Definition
B.1.2 Loss Function
B.1.3 Optimization
B.2 Common Types of Neural Network in Deep Learning
B.2.1 Convolutional Neural Network
B.2.2 Recurrent Neural Network
B.2.3 Dropout
B.3 The Deep Learning Framework PyTorch
B.3.1 Installing PyTorch
B.3.2 Tensor
B.3.3 Gradient Computation
B.3.4 Network Layer
B.3.5 Custom Network
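
As a taste of the book's implementation-oriented material (the attention mechanism of section 3.4 and the PyTorch primer of appendix B.3), here is a minimal sketch of scaled dot-product attention in PyTorch. It is illustrative only, not code from the book.

    # Minimal sketch of scaled dot-product attention (cf. sections 3.4 and 6.4.1).
    # Assumes PyTorch (appendix B.3); illustrative only, not the book's code.
    import math
    import torch

    def attention(query, key, value):
        # query: (batch, m, d); key and value: (batch, n, d)
        scores = torch.bmm(query, key.transpose(1, 2)) / math.sqrt(query.size(-1))
        weights = torch.softmax(scores, dim=-1)  # distribution over the n keys
        return torch.bmm(weights, value)         # weighted sum of value vectors

    q = torch.randn(2, 5, 16)  # batch of 2, 5 query positions, dimension 16
    k = torch.randn(2, 8, 16)
    v = torch.randn(2, 8, 16)
    print(attention(q, k, v).shape)  # torch.Size([2, 5, 16])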

Details

No. of pages: 270
Language: English
Copyright: © Elsevier 2021
Published: 20th March 2021
Imprint: Elsevier
eBook ISBN: 9780323901192
Paperback ISBN: 9780323901185

About the Author

Chenguang Zhu

Chenguang Zhu is a Principal Research Manager at Microsoft Corporation. Dr. Zhu obtained his Ph.D. in Computer Science from Stanford University. He leads research and productization efforts in natural language processing in Azure Cognitive AI. Dr. Zhu works in artificial intelligence, deep learning, and natural language processing, specializing in machine reading comprehension, text summarization, and dialogue understanding. He has led teams to win first place in the SQuAD 1.0 Machine Reading Comprehension Competition held by Stanford University and to reach human parity in the CoQA Conversational Reading Comprehension Competition. He has published 40 papers in top AI and NLP conferences, such as ACL, EMNLP, NAACL, and ICLR, with more than 1,000 citations.

Affiliations and Expertise

Principal Research Manager, Microsoft Corporation, USA
