Artificial Intelligence

A New Synthesis

1st Edition - April 1, 1998

  • Author: Nils Nilsson
  • eBook ISBN: 9780080499451

Description

Intelligent agents are employed as the central characters in this new introductory text. Beginning with elementary reactive agents, Nilsson gradually increases their cognitive horsepower to illustrate the most important and lasting ideas in AI. Neural networks, genetic programming, computer vision, heuristic search, knowledge representation and reasoning, Bayes networks, planning, and language understanding are each revealed through the growing capabilities of these agents. The book provides a refreshing and motivating new synthesis of the field by one of AI's master expositors and leading researchers. Artificial Intelligence: A New Synthesis takes the reader on a complete tour of this intriguing new world of AI.
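For readers new to the book's starting point, here is a minimal illustrative sketch (in Python, written for this page rather than taken from the book) of the kind of production-system, stimulus-response agent that Chapter 2 describes: an ordered list of condition-action rules, where the first rule whose condition matches the current percept determines the action. The sensor names and actions below are hypothetical.

    # Illustrative sketch only (not from the book): a production-system
    # stimulus-response agent in the style of Chapter 2. The percept is a
    # dictionary of (hypothetical) boolean sensor readings; rules are tried
    # in order and the first matching condition determines the action.
    from typing import Callable, Dict, List, Tuple

    Rule = Tuple[Callable[[Dict[str, bool]], bool], str]

    RULES: List[Rule] = [
        (lambda p: p["obstacle_ahead"], "turn_left"),    # avoid collisions first
        (lambda p: p["wall_on_right"], "move_forward"),  # keep following the wall
        (lambda p: True, "turn_right"),                  # default: look for a wall
    ]

    def react(percept: Dict[str, bool]) -> str:
        """Return the action of the first rule whose condition holds."""
        for condition, action in RULES:
            if condition(percept):
                return action
        return "do_nothing"  # unreachable here; kept as a safe default

    if __name__ == "__main__":
        # With no obstacle ahead and a wall on the right, the agent keeps moving.
        print(react({"obstacle_ahead": False, "wall_on_right": True}))  # move_forward

Later chapters replace hand-written rules like these with learned or evolved components, such as the neural networks of Chapter 3 and the genetic programs of Chapter 4.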

Key Features

  • An evolutionary approach provides a unifying theme
  • Thorough coverage of important AI ideas, old and new
  • Frequent use of examples and illustrative diagrams
  • Extensive coverage of machine learning methods throughout the text
  • Citations to over 500 references
  • Comprehensive index

Table of Contents

    1 Introduction

    1.1 What is AI?

    1.2 Approaches to Artificial Intelligence

    1.3 Brief History of AI

    1.4 Plan of the Book

    1.5 Additional Readings and Discussion



    I Reactive Machines

    2 Stimulus-Response Agents

    2.1 Perception and Action

    2.1.1 Perception

    2.1.2 Action

    2.1.3 Boolean Algebra

    2.1.4 Classes and Forms of Boolean Functions

    2.2 Representing and Implementing Action Functions

    2.2.1 Production Systems

    2.2.2 Networks

    2.2.3 The Subsumption Architecture

    2.3 Additional Readings and Discussion



    3 Neural Networks

    3.1 Introduction

    3.2 Training Single TLUs

    3.2.1 TLU Geometry

    3.2.2 Augmented Vectors

    3.2.3 Gradient Descent Methods

    3.2.4 The Widrow-Hoff Procedure

    3.2.5 The Generalized Delta Procedure

    3.2.6 The Error-Correction Procedure

    3.3 Neural Networks

    3.3.1 Motivation

    3.3.2 Notation

    3.3.3 The Backpropagation Method

    3.3.4 Computing Weight Changes in the Final Layer

    3.3.5 Computing Changes to the Weights in Intermediate Layers

    3.4 Generalization, Accuracy, and Overfitting

    3.5 Additional Readings and Discussion



    4 Machine Evolution

    4.1 Evolutionary Computation

    4.2 Genetic Programming

    4.2.1 Program Representation in GP

    4.2.2 The GP Process

    4.2.3 Evolving a Wall-Following Robot

    4.3 Additional Readings and Discussion



    5 State Machines

    5.1 Representing the Environment by Feature Vectors

    5.2 Elman Networks

    5.3 Iconic Representations

    5.4 Blackboard Systems

    5.5 Additional Readings and Discussion



    6 Robot Vision

    6.1 Introduction

    6.2 Steering a Van

    6.3 Two Stages of Robot Vision

    6.4 Image Processing

    6.4.1 Averaging

    6.4.2 Edge Enhancement

    6.4.3 Combining Edge-Enhancement with Averaging

    6.4.4 Region Finding

    6.4.5 Using Image Attributes other than Intensity

    6.5 Scene Analysis

    6.5.1 Interpreting Lines and Curves in the Image

    6.5.2 Model-Based Vision

    6.6 Stereo Vision

    6.7 Additional Readings and Discussion



    II Search in State Spaces

    7 Agents that Plan

    7.1 Memory Versus Computation

    7.2 State-Space Graphs

    7.3 Searching Explicit State Spaces

    7.4 Feature-Based State Spaces

    7.5 Graph Notation

    7.6 Additional Readings and Discussion



    8 Uninformed Search

    8.1 Formulating the State Space

    8.2 Components of Implicit State-Space Graphs

    8.3 Breadth-First Search

    8.4 Depth-First or Backtracking Search

    8.5 Iterative Deepening

    8.6 Additional Readings and Discussion



    9 Heuristic Search

    9.1 Using Evaluation Functions

    9.2 A General Graph-Searching Algorithm

    9.2.1 Algorithm A*

    9.2.2 Admissibility of A*

    9.2.3 The Consistency (or Monotone) Condition

    9.2.4 Iterative-Deepening A*

    9.2.5 Recursive Best-First Search

    9.3 Heuristic Functions and Search Efficiency

    9.4 Additional Readings and Discussion



    10 Planning, Acting, and Learning

    10.1 The Sense/Plan/Act Cycle

    10.2 Approximate Search

    10.2.1 Island-Driven Search

    10.2.2 Hierarchical Search

    10.2.3 Limited-Horizon Search

    10.2.4 Cycles

    10.2.5 Building Reactive Procedures

    10.3 Learning Heuristic Functions

    10.3.1 Explicit Graphs

    10.3.2 Implicit Graphs

    10.4 Rewards Instead of Goals

    10.5 Additional Readings and Discussion



    11 Alternative Search Formulations and Applications

    11.1 Assignment Problems

    11.2 Constructive Methods

    11.3 Heuristic Repair

    11.4 Function Optimization



    12 Adversarial Search

    12.1 Two-Agent Games

    12.2 The Minimax Procedure

    12.3 The Alpha-Beta Procedure

    12.4 The Search Efficiency of the Alpha-Beta Procedure

    12.5 Other Important Matters

    12.6 Games of Chance

    12.7 Learning Evaluation Functions

    12.8 Additional Readings and Discussion



    III Knowledge Representation and Reasoning

    13 The Propositional Calculus

    13.1 Using Constraints on Feature Values

    13.2 The Language

    13.3 Rules of Inference

    13.4 Definition of Proof

    13.5 Semantics

    13.5.1 Interpretations

    13.5.2 The Propositional Truth Table

    13.5.3 Satisfiability and Models

    13.5.4 Validity

    13.5.5 Equivalence

    13.5.6 Entailment

    13.6 Soundness and Completeness

    13.7 The PSAT Problem

    13.8 Other Important Topics

    13.8.1 Language Distinctions

    13.8.2 Metatheorems

    13.8.3 Associative Laws

    13.8.4 Distributive Laws



    14 Resolution in The Propositional Calculus

    14.1 A New Rule of Inference: Resolution

    14.1.1 Clauses as wffs

    14.1.2 Resolution on Clauses

    14.1.3 Soundness of Resolution

    14.2 Converting Arbitrary wffs to Conjunctions of Clauses

    14.3 Resolution Refutations

    14.4 Resolution Refutation Search Strategies

    14.4.1 Ordering Strategies

    14.4.2 Refinement Strategies

    14.5 Horn Clauses



    15 The Predicate Calculus

    15.1 Motivation

    15.2 The Language and its Syntax

    15.3 Semantics

    15.3.1 Worlds

    15.3.2 Interpretations

    15.3.3 Models and Related Notions

    15.3.4 Knowledge

    15.4 Quantification

    15.5 Semantics of Quantifiers

    15.5.1 Universal Quantifiers

    15.5.2 Existential Quantifiers

    15.5.3 Useful Equivalences

    15.5.4 Rules of Inference

    15.6 Predicate Calculus as a Language for Representing Knowledge

    15.6.1 Conceptualizations

    15.6.2 Examples

    15.7 Additional Readings and Discussion



    16 Resolution in the Predicate Calculus

    16.1 Unification

    16.2 Predicate-Calculus Resolution

    16.3 Completeness and Soundness

    16.4 Converting Arbitrary wffs to Clause Form

    16.5 Using Resolution to Prove Theorems

    16.6 Answer Extraction

    16.7 The Equality Predicate

    16.8 Additional Readings and Discussion



    17 Knowledge-Based Systems

    17.1 Confronting the Real World

    17.2 Reasoning Using Horn Clauses

    17.3 Maintenance in Dynamic Knowledge Bases

    17.4 Rule-Based Expert Systems

    17.5 Rule Learning

    17.5.1 Learning Propositional Calculus Rules

    17.5.2 Learning First-Order Logic Rules

    17.5.3 Explanation-Based Generalization

    17.6 Additional Readings and Discussion



    18 Representing Commonsense Knowledge

    18.1 The Commonsense World

    18.1.1 What is Commonsense Knowledge?

    18.1.2 Difficulties in Representing Commonsense Knowledge

    18.1.3 The Importance of Commonsense Knowledge

    18.1.4 Research Areas

    18.2 Time

    18.3 Knowledge Representation by Networks

    18.3.1 Taxonomic Knowledge

    18.3.2 Semantic Networks

    18.3.3 Nonmonotonic Reasoning in Semantic Networks

    18.3.4 Frames

    18.4 Additional Readings and Discussion



    19 Reasoning with Uncertain Information

    19.1 Review of Probability Theory

    19.1.1 Fundamental Ideas

    19.1.2 Conditional Probabilities

    19.2 Probabilistic Inference

    19.2.1 A General Method

    19.2.2 Conditional Independence

    19.3 Bayes Networks

    19.4 Patterns of Inference in Bayes Networks

    19.5 Uncertain Evidence

    19.6 D-Separation

    19.7 Probabilistic Inference in Polytrees

    19.7.1 Evidence Above

    19.7.2 Evidence Below

    19.7.3 Evidence Above and Below

    19.7.4 A Numerical Example

    19.8 Additional Readings and Discussion



    20 Learning and Acting with Bayes Nets

    20.1 Learning Bayes Nets

    20.1.1 Known Network Structure

    20.1.2 Learning Network Structure

    20.2 Probabilistic Inference and Action

    20.2.1 The General Setting

    20.2.2 An Extended Example

    20.2.3 Generalizing the Example

    20.3 Additional Readings and Discussion



    IV Planning Methods Based on Logic

    21 The Situation Calculus

    21.1 Reasoning about States and Actions

    21.2 Some Difficulties

    21.2.1 Frame Axioms

    21.2.2 Qualifications

    21.2.3 Ramifications

    21.3 Generating Plans

    21.4 Additional Readings and Discussion



    22 Planning

    22.1 STRIPS Planning Systems

    22.1.1 Describing States and Goals

    22.1.2 Forward Search Methods

    22.1.3 Recursive STRIPS

    22.1.4 Plans with Runtime Conditionals

    22.1.5 The Sussman Anomaly

    22.1.6 Backward Search Methods

    22.2 Plan Spaces and Partial-Order Planning

    22.3 Hierarchical Planning

    22.3.1 ABSTRIPS

    22.3.2 Combining Hierarchical and Partial-Order Planning

    22.4 Learning Plans

    22.5 Additional Readings and Discussion



    V Communication and Integration

    23 Multiple Agents

    23.1 Interacting Agents

    23.2 Models of Other Agents

    23.2.1 Varieties of Models

    23.2.2 Simulation Strategies

    23.2.3 Simulated Databases

    23.2.4 The Intentional Stance

    23.3 A Modal Logic of Knowledge

    23.3.1 Modal Operators

    23.3.2 Knowledge Axioms

    23.3.3 Reasoning about Other Agents' Knowledge

    23.3.4 Predicting Actions of Other Agents

    23.4 Additional Readings and Discussion



    24 Communication Among Agents

    24.1 Speech Acts

    24.1.1 Planning Speech Acts

    24.1.2 Implementing Speech Acts

    24.2 Understanding Language Strings

    24.2.1 Phrase-Structure Grammars

    24.2.2 Semantic Analysis

    24.2.3 Expanding the Grammar

    24.3 Efficient Communication

    24.3.1 Use of Context

    24.3.2 Use of Knowledge to Resolve Ambiguities

    24.4 Natural Language Processing

    24.5 Additional Readings and Discussion



    25 Agent Architectures

    25.1 Three-Level Architectures

    25.2 Goal Arbitration

    25.3 The Triple-Tower Architecture

    25.4 Bootstrapping

    25.5 Additional Readings and Discussion

Product details

  • No. of pages: 513
  • Language: English
  • Copyright: © Morgan Kaufmann 1998
  • Published: April 1, 1998
  • Imprint: Morgan Kaufmann
  • eBook ISBN: 9780080499451

About the Author

Nils Nilsson

Nils J. Nilsson's long and rich research career has contributed much to AI. He has written many books, including the classic Principles of Artificial Intelligence. Dr. Nilsson is Kumagai Professor of Engineering, Emeritus, at Stanford University. He has served on the editorial boards of Artificial Intelligence and Machine Learning and as an Area Editor for the Journal of the Association for Computing Machinery. Former Chairman of the Department of Computer Science at Stanford, and former Director of the SRI Artificial Intelligence Center, he is also a past president and Fellow of the American Association for Artificial Intelligence.

Affiliations and Expertise

Stanford University
