Control and Dynamic Systems

Advances in Theory and Applications

1st Edition - March 28, 1976

  • Editor: C. T. Leondes
  • eBook ISBN: 9781483191249

Description

Control and Dynamic Systems: Advances in Theory and Applications reviews progress in the field of control and dynamic systems theory and applications, with emphasis on filtering and stochastic control in dynamic systems. Topics include linear and nonlinear filtering techniques; concepts and methods in stochastic control; and discrete-time optimal stochastic observers. The theory of disturbance-accommodating controllers is also presented. Comprised of nine chapters, this volume begins with an overview of filtering and stochastic control in dynamic systems, followed by a discussion on linear and nonlinear filtering techniques. The reader is then introduced to concepts and methods in stochastic control, as well as the innovations process and its applications to sensitivity analysis and system identification. Subsequent chapters focus on the status of observer theory and its major results as applied to discrete-time linear systems; the properties of the class of discrete-time Riccati equations that arise in the filtering problem; and the theory of disturbance-accommodating controllers. The identification of noise characteristics in a Kalman filter and estimation of adaptive minimum variance in discrete-time linear systems round out the book. This monograph will be useful to practicing technologists and research workers interested in filtering and stochastic control in dynamic systems.
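The description refers to discrete-time linear filtering and to the discrete Riccati equations that arise in the filtering problem. As a purely illustrative sketch, not taken from the book, the following Python fragment shows one predict/update cycle of a discrete-time Kalman filter for an assumed linear-Gaussian model x(k+1) = A x(k) + w(k), y(k) = H x(k) + v(k); the function name kalman_step and all numerical values are hypothetical.

    # Illustrative sketch only (not from the book): one predict/update cycle
    # of a discrete-time Kalman filter for an assumed linear-Gaussian model.
    import numpy as np

    def kalman_step(x_est, P, y, A, H, Q, R):
        """One predict/update cycle; Q and R are process/measurement noise covariances."""
        # Time update (prediction)
        x_pred = A @ x_est                       # predicted state
        P_pred = A @ P @ A.T + Q                 # predicted error covariance
        # Measurement update
        S = H @ P_pred @ H.T + R                 # innovations covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x_pred + K @ (y - H @ x_pred)    # correct with the innovation
        P_new = (np.eye(len(x_est)) - K @ H) @ P_pred  # one step of the discrete Riccati recursion
        return x_new, P_new

    # Toy usage: a scalar random walk observed in noise (numbers are made up).
    A = np.array([[1.0]]); H = np.array([[1.0]])
    Q = np.array([[0.01]]); R = np.array([[0.1]])
    x, P = np.array([0.0]), np.array([[1.0]])
    for y in (0.2, 0.1, 0.35):
        x, P = kalman_step(x, P, np.array([y]), A, H, Q, R)
    print("state estimate:", x, "error covariance:", P)

The covariance recursion computed in P_pred and P_new is one step of the discrete-time Riccati equation whose alternative algorithms and asymptotic properties are treated in a chapter of this volume.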

Table of Contents


  • Contributors

    Preface

    Contents of Previous Volume

    An Overview of Filtering and Stochastic Control in Dynamic Systems

    I. General Stochastic Control Problem

    A. Definition of the Basic Problem

    B. Some Types of Control Policies

    II. General Solution of the Optimal Stochastic Control Problem

    A. A Recursive Solution of the Control Problem

    B. Solving the Nonlinear Filtering Problem

    C. Practical Considerations in Determining a Stochastic Control Policy

    III. Linear Quadratic Systems and Extensions

    A. The Linear Recursive Filtering Problem

    B. The Optimal, Closed-Loop, Stochastic Control Policy

    IV. Summary of Proposed Algorithms

    A. Nonlinear Filtering Algorithms

    B. Stochastic Control Algorithms

    References

    Linear and Nonlinear Filtering Techniques

    I. Introduction

    II. Linear Filtering

    III. System Modeling

    IV. Suboptimal Filter Design and Sensitivity Analysis

    V. Partitioned Filter Approach

    VI. Data Prefiltering

    VII. Square Root Techniques

    VIII. Divergence of the Filter

    IX. Nonlinear Filtering

    X. Concluding Remarks

    References

    Concepts and Methods in Stochastic Control

    I. Introduction

    II. Classes of Stochastic Control Policies and Some Properties of the Control

    A. Formulation of the Stochastic Control Problem

    B. Classes of Stochastic Control Policies

    C. The Dual Effect of the Control, Probing, and Caution

    III. Optimal Stochastic Control

    IV. The Optimal Control for a Class of Systems

    A. The Certainty Equivalence Result and Its Connection with the Dual Effect

    B. Discussion and Examples

    V. A Stochastic Closed-Loop Control Method for Nonlinear Systems

    A. Formulation of the Problem

    B. The Algorithm

    C. Simulation Results

    VI. A Stochastic Resource Allocation Problem

    A. Formulation of the Problem

    B. The Algorithm

    C. Simulation Results

    VII. Conclusions

    Appendix A. Proof of the Connection Between the Certainty Equivalence and the Dual Effect

    Appendix B. The Closed-Loop Optimization of the Cost-To-Go

    References

    The Innovations Process with Applications to Identification

    I. Introduction

    A. Problem Definition

    B. Solution Overview

    II. Mathematical Specifications and Background

    A. Objectives and Restrictions

    B. Method of Approach

    III. Sensitivity Analysis

    A. The Error Model

    B. Behavior of the Error Mean

    C. Behavior of the Error Covariance

    D. Error Correlation

    E. Behavior of the Innovations

    F. Summary

    IV. System Identification

    A. Introduction and Assumptions

    B. Limiting Behavior

    C. Some Necessary and Sufficient Conditions

    D. Some Steady State Considerations

    E. Variance and Correlation of Residuals

    F. Stochastic Approximation

    G. The Partial Derivatives

    V. Simulation Results

    A. The Boozer Example

    B. The Ohap Example

    C. Summary

    References

    Discrete-Time Optimal Stochastic Observers

    I. Introduction

    II. Definition of Discrete Observer for Stochastic Systems

    III. Construction of a Reduced-Order Observer

    IV. An Alternate Reduced-Order Observer Algorithm

    V. Limiting Cases of the Reduced-Order Observer Solution

    A. Minimal-Order Observer

    B. Kalman Filter

    C. Some Perfect Measurements

    VI. Computational Advantages of Reduced Order Observers

    VII. An Optimal Continuous-Time Observer Solution

    VIII. Concluding Remarks

    References

    Discrete Riccati Equations: Alternative Algorithms, Asymptotic Properties, and System Theory Interpretations

    I. Introduction

    II. Square Root Algorithms and the Riccati Equation

    A. The Time-Invariant, Zero Terminal Cost Problem

    B. The General Time-Variable Problem

    C. Square Root Algorithms

    D. Structure Algorithms

    E. Equivalent Optimization Problems

    III. System Structure

    A. Observability

    B. Invertibility and Detectability

    C. Matrix Characterizations

    IV. Singular Riccati Equations

    A. Asymptotic Properties

    B. Reduced Order Riccati Equations

    References

    Theory of Disturbance-Accommodating Controllers

    I. Introduction

    II. The Waveform-Mode Description of Realistic Disturbances

    III. The Waveform-Mode Characterization Versus the Statistical Characterization

    IV. State Models for Disturbances with Waveform Structure

    A. Some Examples of State Models for Common Disturbances

    B. Waveform Description of Unfamiliar Disturbances

    C. "Unfamiliar Disturbances" Arising from Modeling Errors in System Parameters

    D. Waveform Description of State-Dependent Disturbances

    E. Waveform Description with Linear State Models

    F. Experimental Determination of Linear State Models for Disturbances

    G. Noise Combined with Disturbances Having Waveform Structure

    H. Disturbance Waveform Models Equations (39) and (40) Versus Coloring Filters for White Noise

    V. Design of Disturbance-Accommodating Controllers for Stabilization, Regulation, and Servo Tracking Control Problems

    A. The Class of Systems and Disturbances to Be Considered

    B. Practical Constraints on the Structure of Disturbance-Accommodating Controllers

    C. Description of the Stabilization, Regulation and Servo-Tracking Control Problems

    D. Philosophies of Disturbance Accommodation in Control Problems

    E. The Notion of State Constructors (Observers) for Signals with Waveform Structure

    F. Design of Disturbance-Absorbing Controllers

    G. Design of Disturbance-Minimization Controllers

    H. Design of Disturbance-Utilization Controllers

    I. Design of Multimode Disturbance-Accommodating Controllers

    J. Transfer Function Interpretation of Disturbance-Accommodating Controllers

    VI. Conclusions

    References

    Appendix

    Identification of the Noise Characteristics in a Kalman Filter

    I. Introduction

    A. Background

    B. Outline

    II. System Description

    III. Moment System Formulation

    A. Mean State Model

    B. Mean Measurement Model

    C. Mean System Statistics

    D. Covariance State Model

    E. Covariance Measurement Model

    F. Covariance System Statistics

    IV. Estimates of the Moments

    A. Estimates of the Means

    B. Estimates of the Covariance Parameters

    C. Adaptive Estimates of Both Moments

    D. Comparison of Adaptive Techniques

    V. Correlated Moment System Measurement Noise

    A. Nonwhite Moment System Measurement Noise

    B. Weighted Least Squares Estimates for No System State Noise

    C. Linear Minimum Variance Estimates for No System State Noise

    D. Weighted Least Squares Estimates for the General Case

    VI. Conclusions

    References

    Appendix A

    Appendix B

    Appendix C

    Adaptive Minimum Variance Estimation in Discrete-Time Linear Systems

    I. Introduction

    II. Adaptive Filter Algorithm

    III. Hypothesis Test for Time Correlation of Residuals

    IV. Summary of the Adaptive Algorithm

    V. Convergence of Algorithm

    VI. Example

    A. Description of System

    B. Determination of Adaptive Filter Parameters

    C. Step Size Control

    D. Practical Considerations

    E. Results of Computer Simulation

    VII. Conclusions

    Appendix

    References

    Subject Index

Product details

  • No. of pages: 648
  • Language: English
  • Copyright: © Academic Press 1976
  • Published: March 28, 1976
  • Imprint: Academic Press
  • eBook ISBN: 9781483191249

About the Editor

C. T. Leondes
