Modern Control Engineering - 1st Edition - ISBN: 9780080168203 (Print), 9781483186931 (eBook)

Modern Control Engineering

1st Edition

Pergamon Unified Engineering Series

Author: Maxwell Noton
eBook ISBN: 9781483186931
Imprint: Pergamon
Published Date: 1st January 1972
Page Count: 288


Modern Control Engineering focuses on the methodologies, principles, approaches, and technologies employed in modern control engineering, including dynamic programming, boundary iterations, and linear state equations. The publication first ponders on state representation of dynamical systems and finite dimensional optimization. Discussions focus on optimal control of dynamical discrete-time systems, parameterization of dynamical control problems, conjugate direction methods, convexity and sufficiency, linear state equations, the transition matrix, and stability of discrete-time linear systems. The text then tackles infinite dimensional optimization, including computations with inequality constraints, the gradient method in function space, quasilinearization, computation of optimal control by direct and indirect methods, and boundary iterations. The book takes a look at dynamic programming and introductory stochastic estimation and control. Topics include deterministic multivariable observers, stochastic feedback control, the stochastic linear-quadratic control problem, the general calculation of optimal control by dynamic programming, and results for linear multivariable digital control systems. The publication is a dependable reference for engineers and researchers wanting to explore modern control engineering.
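As a flavor of the Chapter 1 material on linear state equations, the transition matrix, and stability of discrete-time linear systems, here is a minimal sketch (using NumPy; the two-state system matrices are invented for illustration, not taken from the book):

```python
import numpy as np

# Illustrative discrete-time linear state equation x[k+1] = A x[k] + B u[k].
# The matrices below are example values chosen for this sketch.
A = np.array([[0.9, 0.2],
              [0.0, 0.5]])
B = np.array([[0.0],
              [1.0]])

# Stability of a discrete-time linear system: all eigenvalues of A
# must lie strictly inside the unit circle.
eigenvalues = np.linalg.eigvals(A)
is_stable = bool(np.all(np.abs(eigenvalues) < 1.0))

# For a time-invariant system the transition matrix over k steps is A**k;
# the free response from x0 after 5 steps is A**5 @ x0.
x0 = np.array([[1.0], [1.0]])
x5 = np.linalg.matrix_power(A, 5) @ x0
```

Since A here is upper triangular, its eigenvalues are simply the diagonal entries 0.9 and 0.5, both inside the unit circle, so the free response decays toward the origin.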

Table of Contents


Chapter 1 State Representation of Dynamical Systems

1.1 State Equations

1.2 Linear State Equations

1.3 Fundamental Matrices

1.4 The Transition Matrix

1.5 Inclusion of the Forcing or Control Variables

1.6 Eigenvalues and Eigenvectors

1.7 Discrete-Time State Equations

1.8 Stability of Discrete-Time Linear Systems

1.9 Controllability

1.10 Observability


Chapter 2 Finite Dimensional Optimization

2.1 Motivation

2.2 Unconstrained Maxima and Minima

2.3 Equality Constraints

2.4 Inequality Constraints

2.5 Convexity and Sufficiency

2.6 Linear Programming

2.7 Direct Methods of Minimization

2.8 An Illustrative Minimization Example

2.9 Minimization by Steepest Descent

2.10 Second Order Gradients

2.11 Conjugate Direction Methods

2.12 One Dimensional Searches

2.13 Davidon-Fletcher-Powell

2.14 Fletcher-Reeves

2.15 Powell's Method

2.16 Direct Methods for Constrained Minimization

2.17 Penalty Functions

2.18 Use of Transformations

2.19 Parameterization of Dynamical Control Problems

2.20 Optimal Control of Dynamical Discrete-Time Systems


Chapter 3 Infinite Dimensional Optimization

3.1 A Classic Problem and a Classical Solution

3.2 Dynamical Optimization with no Terminal Constraints

3.3 A Simple Control Problem

3.4 Terminal Constraints and Variable Terminal Time

3.5 An Elementary Thrust-Programming Problem

3.6 A Foretaste of Computational Difficulties

3.7 The Linear-Quadratic Control Problem

3.8 Design of a Lateral Autostabilizer for an Aircraft

3.9 Stability of the Linear-Quadratic Regulator

3.10 Inequality Constraints

3.11 Pontryagin's Maximum or Minimum Principle

3.12 Additional Necessary Conditions and Sufficiency

3.13 Singular Control

3.14 Computation of Optimal Control — Direct and Indirect Methods

3.15 Boundary Iterations

3.16 Quasilinearization

3.17 Gradient Method in Function Space

3.18 Second Variations

3.19 Conjugate Gradients

3.20 Computations with Inequality Constraints


Chapter 4 Dynamic Programming

4.1 Historical Background

4.2 A Multi-Stage Decision Problem

4.3 The Principle of Optimality

4.4 A Simple Control Problem in Discrete Time

4.5 The General Calculation of Optimal Control by Dynamic Programming

4.6 Results for Linear Multivariable Digital Control Systems

4.7 An Example of Discrete-Time Control

4.8 Computation of Nonlinear Discrete-Time Control

4.9 The Continuous Form of Dynamic Programming

4.10 A Special Solution of the Hamilton-Jacobi Equation

4.11 Differential Dynamic Programming
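The dynamic programming ideas above can be illustrated with a minimal sketch (plain Python; the scalar plant and cost weights are invented for this sketch, not drawn from the book) of a discrete-time linear-quadratic regulator computed by the backward cost-to-go recursion, i.e. the scalar Riccati difference equation:

```python
# Scalar discrete-time linear-quadratic control by dynamic programming.
# Illustrative plant x[k+1] = a*x[k] + b*u[k] with cost
# sum(q*x^2 + r*u^2) plus terminal weight qf*x[N]^2.
a, b = 1.1, 1.0          # slightly unstable open-loop plant
q, r, qf = 1.0, 1.0, 1.0
N = 50

# The cost-to-go has the form J_k(x) = P_k * x^2; recurse backward
# from P_N = qf (the principle of optimality at each stage).
P = qf
gains = []
for _ in range(N):
    K = a * b * P / (r + b * b * P)   # optimal feedback gain at this stage
    P = q + a * a * P - a * b * P * K # Riccati update: P = q + a*P*(a - b*K)
    gains.append(K)
gains.reverse()                        # gains[k] is the gain for stage k

# Simulate the closed loop u[k] = -gains[k] * x[k]; the regulator
# stabilizes the plant despite the open-loop instability.
x = 1.0
for k in range(N):
    x = a * x - b * gains[k] * x
```

Because the backward recursion converges quickly, the early-stage gains sit at the steady-state value, and the closed-loop factor a - b*K is well inside the unit circle.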


Chapter 5 Introductory Stochastic Estimation and Control

5.1 Deterministic Multivariable Observers

5.2 The Kalman Filter

5.3 Extensions of the Kalman Filter

5.4 An Example of the Extended Kalman Filter

5.5 Stochastic Feedback Control

5.6 The Stochastic Linear-Quadratic Control Problem


Chapter 6 Actual and Potential Applications

6.1 Résumé — Practical Significance of the Results

6.2 Linear Control with Quadratic Criteria

6.3 Static and Dynamic Optimization

6.4 Applications of the Kalman Filter

Chapter 7 Appendices

7.1 Stability of Discrete-Time Linear Systems

7.2 Differentiation of Matrix Expressions

7.3 Canonical Form for a Single-Output Linear System

7.4 Markov Sequences

Chapter 8 Supplement — Introduction to Matrices and State Variables

8.1 Matrices and Vectors

8.2 Numerical Solution of Ordinary Differential Equations

8.3 The Generalized Newton-Raphson Process

8.4 State Variable Characterization of Dynamical Systems

Chapter 9 Bibliography and References



© Pergamon 1972

About the Author

Maxwell Noton
