As is well known, Pontryagin's maximum principle and Bellman's dynamic programming are the two principal and most commonly used approaches to solving stochastic optimal control problems. An interesting phenomenon one can observe from the literature is that these two approaches have been developed separately and independently. Since both methods are used to investigate the same problems, a natural question one will ask is the following:

Q. What is the relationship between the maximum principle and dynamic programming in stochastic optimal control?

There did exist some research prior to the 1980s on the relationship between these two. Nevertheless, the results were usually stated in heuristic terms and proved under rather restrictive assumptions, which were not satisfied in most cases.

In the statement of a Pontryagin-type maximum principle there is an adjoint equation, which is an ordinary differential equation (ODE) in the finite-dimensional deterministic case and a stochastic differential equation (SDE) in the stochastic case. The system consisting of the adjoint equation, the original state equation, and the maximum condition is referred to as an extended Hamiltonian system. On the other hand, in Bellman's dynamic programming, there is a partial differential equation (PDE), of first order in the finite-dimensional deterministic case and of second order in the stochastic case. This is known as a Hamilton-Jacobi-Bellman (HJB) equation. …
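For readers unfamiliar with these objects, the two formulations described above can be sketched as follows (a sketch under one common sign convention, written for scalar noise for simplicity; exact notation varies by author):

```latex
% Controlled diffusion and cost functional:
%   dX(t) = b(t, X(t), u(t))\,dt + \sigma(t, X(t), u(t))\,dW(t),
%   J(u)  = \mathbb{E}\Big[\int_0^T f(t, X(t), u(t))\,dt + h(X(T))\Big].

% Dynamic programming: the HJB equation for the value function V
% (second order in the stochastic case, as noted above):
-V_t(t,x) = \inf_{u \in U}\Big\{ \tfrac{1}{2}\,\sigma^2(t,x,u)\,V_{xx}(t,x)
            + b(t,x,u)\,V_x(t,x) + f(t,x,u) \Big\},
\qquad V(T,x) = h(x).

% Maximum principle: the adjoint equation is a backward SDE,
-dp(t) = \big( b_x\,p(t) + \sigma_x\,q(t) - f_x \big)\,dt - q(t)\,dW(t),
\qquad p(T) = -h_x(X(T)),

% which, together with the state equation and the maximum condition on
% the Hamiltonian
H(t,x,u,p,q) = p\,b(t,x,u) + q\,\sigma(t,x,u) - f(t,x,u),
% forms the extended Hamiltonian system referred to in the text.
```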
Title: Stochastic Controls (Stochastic Modelling and Applied Probability, Band 43)
Publisher: Springer; Softcover reprint of the original 1st ed. 1999 edition (June 22, 1999)
Number of Pages: 464 pages
Stochastic Controls (Stochastic Modelling and Applied Probability, Band 43) Reviews
This book covers general stochastic control more thoroughly than any other book I could find. This is *not* a book on numerical methods. It is also not on the cases which yield closed-form solutions: there is a chapter on LQG problems, but for the most part, this book focuses on the general theory of stochastic controls -- which are not the easiest things to solve in general, as you may know. The book handles only diffusion processes with perfect knowledge of the past and present (natural filtration). If these sound like what you want, I doubt there's a more thorough treatment.

It starts with a chapter on preliminaries of probability spaces, stochastic processes, and the Ito integral. After that, the book briefly addresses deterministic problems in order to compare solution methods to the stochastic approaches. It approaches the problems using a stochastic maximum principle and a stochastic Hamiltonian system, and also from a dynamic programming point of view using HJB equations. The authors attempt to show the relationship between the two approaches.

This book is technically rigorous. Though it claims to be self-contained, the reader should certainly be familiar with functional analysis and stochastic processes. The authors try to keep the solutions as general as possible, handling non-smooth cases as well as smooth ones. This is fine, except that they don't emphasize well enough (I thought), for instance, that the solutions are much simpler when functions are well behaved on convex bodies (it's mentioned as a note on p. 120), or when diffusions are not dependent on controls, and such. Because of this tendency to present one solution which will handle any case, it could sometimes be difficult to figure out what all the terms are. In the end, it all works out.

Each chapter ends with a few pages of "historical background": who did what piece of the theory when, with an excellent list of references. (I found the originals useful to help explain things, on occasion, especially to see simpler ways to do simpler cases.)

Altogether, a very thorough piece on general solutions to stochastic control! I was quite impressed.
Very nice book!
From every page of the book, it is clear that the two authors know the subject they are writing about! It is assumed that the reader knows something about stochastic calculus and stochastic differential equations, and also about measure-theoretic probability theory. My only exposure to these subjects was the book "Brownian Motion and Stochastic Calculus" by I. Karatzas and S. Shreve, and this was enough. The pace of the book was just right for me (I am an engineer with a lot of interest in mathematics), not too slow and not too fast. It might be advisable to read Chapter 7 right after Chapter 2 unless you have had previous exposure to FBSDEs (forward-backward stochastic differential equations), which are extremely well explained there. The book is not free of typos (I found about 30), but given the complexity of the sub/superscripts, it does not seem bad at all.
"Stochastic Control" by Yong and Zhou is a comprehensive introduction to the modern stochastic optimal control theory. While the stated goal of the book is to establish the equivalence between the Hamilton-Jacobi-Bellman and Pontryagin formulations of the subject, the authors touch upon all of its important facets.The book contains plenty of explicitly worked out examples, including classic applications of the theory to modern finance. Also, among other things, the book contains a detailed exposition of the general LQ problem and a very readable introduction to backward stochastic differential equations. A minor quibble: the generally very lucid presentation is somewhat overburdened with heavy notation.