Chaos and Time-Series Analysis. By Julien Clinton Sprott. Oxford University Press, Oxford, UK, & New York, US, 2003. xx + 507 pp.; 298 figures. ISBN 0 19 850839 5 (hardback), 0 19 850840 9 (paperback). $85/$45 (hardback/paperback, US); £49.95/£25.95 (hardback/paperback, UK).


Review by ©Frederick David Abraham, 2003


(1) Introduction: Intended Audience


I wanted a book to advance my own knowledge of dynamics, and I got Sprott’s book to evaluate for this purpose, anticipating it might serve well, as he is a master of dynamics with a knack for explaining things clearly and succinctly. One question was its appropriateness for my own level of mathematical background, which I consider not very sophisticated but rather representative of many dynamical enthusiasts in our fields of the biological, psychological, and social sciences. So I am reviewing this book for its appropriateness as a teaching/learning instrument in these fields, and for its success in meeting that challenge with clarity and completeness.


A fast skim through the book showed it to be comprehensive, but possibly a bit daunting to those of us whose mathematics is limited to basic algebra, finite mathematics, and introductory calculus, as it includes topics such as Jacobian matrices of partial derivatives, KAM tori, Lipschitz-Hölder exponents, Legendre transforms, and so on (things that can scare you if you haven’t encountered them before). But closer examination makes each of them seem quite tractable, and when you return after that first fast skim and read the preface, it sets your mind at ease: this is a book that will provide a great, complete, and manageable foundation in chaos theory and data analysis.


In his preface, Sprott explains that the book arose from a survey course he taught for upper-level undergraduate students, graduate students, and “other researchers, representing a wide variety of fields in science and engineering.” As he puts it,


“This book is an introduction to the exciting developments in chaos and related topics in nonlinear dynamics, including the detection and quantification of chaos in experimental data, fractals, and complex systems . . . [mentioning] most of the important topics in nonlinear dynamics. Most of the topics are encountered several times with increasing sophistication.” The emphasis is on concepts and applications rather than proofs and derivations, and is for “the student or researcher who wants to learn how to use the ideas in a practical setting, rather than the mathematically inclined reader who wants a deep theoretical understanding.”

“While many books on chaos are purely qualitative and many others are highly mathematical, I have tried to minimize the mathematics while still giving the essential equations in their simplest form. I assume only an elementary knowledge of calculus. Complex numbers, differential equations, matrices, and vector calculus are used in places, but those tools are described as required. The level should thus be suitable for advanced undergraduate students in all fields of science and engineering as well as professional scientists in most disciplines.” (From the preface.)



It delivers on all these promises. Further, it is ‘hands-on’, with practical exercises and a programming project in each chapter. (Any language and computer platform will do; spreadsheets or math packages such as Maple or Mathematica may also be used if one is already capable with them. For those not fluent in a programming language, he suggests PowerBASIC, in its DOS or Windows versions, for its ease of learning.) I found that having a dynamics program such as Berkeley Madonna (Macey, Oster, & Zahnley, 2000), which will solve equations with graphic displays, was most useful for additional explorations. Thus Sprott’s book is most suitable for systematic study, but as with most textbooks, it can also serve as a useful reference work in your library. You may also find the programs by Sprott and Rowlands useful supplements to the text: Chaos Demonstrations (1995) for examples of several programs, and Chaos Data Analyzer (1995) for data analyses.


(2) Contents


The 15 chapters cover the following topics: Introduction, One-dimensional maps, Nonchaotic multi-dimensional flows, Dynamical systems theory, Lyapunov exponents, Strange attractors, Bifurcations, Hamiltonian chaos, Time-series properties, Nonlinear prediction and noise reduction, Fractals, Calculation of the fractal dimension, Fractal measure and multifractals, Nonchaotic fractal sets, and Spatiotemporal chaos and complexity. In addition, there are three fantastic appendices. The first is a catalog of common chaotic systems, 62 in all, in six categories: noninvertible maps (12), dissipative maps (11), conservative maps (6), driven dissipative flows (8), autonomous dissipative flows (20), and conservative flows (5), each with a graph, equations, typical parametric values and initial conditions, Lyapunov exponents, Kaplan-Yorke dimension, correlation dimension, and a major reference. The second appendix gives useful mathematical formulas in ten categories: trigonometric relations, hyperbolic functions, logarithms, complex numbers, derivatives, integrals, approximations, matrices and determinants, roots of polynomials (including the Newton-Raphson method), and vector calculus. And the third appendix is a list of relevant journals. The bibliography of 715 entries covers everything from Abarbanel (1996) to Zipf (1949). Ruelle and Grassberger are the most cited senior authors (9 each), with Bak, Theiler, L. A. Smith, Grebogi, Arnold, Mandelbrot, Lorenz, Sauer, and Schreiber also having 5 or more citations each. The oldest citation award goes to Huygens, 1673. There is an excellent support page with color versions of many of the figures and much supplementary information (including answers to some of the exercises) on Sprott’s website. It is continually updated, and contains many important links to related pages, both his own (such as the pages for the course that spawned the book) and on other websites.


(3) Brief explorations of the chapters


The Introduction (chapter 1) is fairly brief but reveals both some major strategies as well as some subtle ones employed in the book. It mentions several examples of chaotic astronomical, physical, and nonphysical systems, usually quite briefly, many with photographs and diagrams. For a few he gives introductory equations, such as a second-order differential equation for the driven pendulum. These include some parametric values which yield chaos, and for one, its Poincaré sections. There is also a section on electrical circuits, which are of value whether one is intrinsically interested in electronic circuits or not. They have been tools for investigating dynamical systems since van der Pol in the 1920s. Since some can be easily and inexpensively constructed, they provide nice demonstrations for lecturing and for comparing analog and computer results. On the website I found a color enlargement of one of the analog devices, which made the circuit easier to understand, and this led to questions about further details of the circuit, found on another page (the details in the book are enough for someone knowledgeable in basic circuits, but I needed a little more help).


Another example of supplementary information on the website was an expansion of the biographical notes on Poincaré. A nice aspect of the book is the inclusion of brief footnotes on individuals, such as Newton, Ulam, Kepler, Brahe, Duffing, and Landauer. There is also a pleasant sense of humor (“predicting the weather is easy, but predicting it correctly is not”; a torus described as resembling an American, not a British, doughnut). After this very brief, skeletal introduction you are challenged with 14 exercises and a programming project with the logistic equation (I first did that with an ancient dialect of BASIC back in the mid-80s on an Apple II; Paul Rapp had also done it and suggested adding a statement to produce tones, a nice enhancement discussed later in Sprott’s book as sonification, a method for hearing bifurcations and attractor patterns). Programming the logistic equation prepares you for the next chapter on one-dimensional maps (chapter 2), where the logistic equation is the main object of study, and where the programming exercise is taken up with an exploration of its bifurcation diagram. Similarly, the nice thing about the electronic circuits that Sprott shows is that their output can be displayed on an oscilloscope or played through a cheap speaker. But the first exercise. Whew! “Derive a set of four first-order ordinary differential equations whose solution would give the three-body trajectory of Figure 1.” Well, the figure reduces the possibilities to the simplest ones usually considered, but I suspect this problem, over which many have spent much time since Poincaré first stumbled on it, was offered to get one to wrestle with the problem rather than get to a solution. I found myself trying to cheat already. Not having Sprott’s demonstration program that included it (Chaos Demonstrations, Sprott & Rowlands, 1995) currently running on my computer, I searched his website and found a link to Wolfram’s site that told more about the three-body problem.
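To give a flavor of that first programming project, here is a minimal sketch in Python (my own illustration, not code from the book) of iterating the logistic equation:

```python
# Illustrative sketch of the logistic-map programming project:
# iterate x_{n+1} = r * x_n * (1 - x_n).
def logistic_orbit(r, x0, n):
    """Return the first n iterates of the logistic map, starting at x0."""
    orbit = [x0]
    x = x0
    for _ in range(n - 1):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit

# r = 4 is in the chaotic regime; the orbit wanders over (0, 1).
orbit = logistic_orbit(r=4.0, x0=0.1, n=10)
print(orbit)
```

Sweeping r from about 2.8 to 4.0 and plotting the late iterates against r produces the bifurcation diagram taken up in chapter 2.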


Nonchaotic multi-dimensional flows (chapter 3) moves on to the continuous, multivariate case, where time moves continuously rather than in discrete steps. It starts with a simple first-order, explicit, linear differential equation for population growth and decay. For those of us who have hassled over the meaning and history of some terms in dynamics, he gives most synonyms and discusses some of these etymological issues, here and throughout the book. He uses this growth/decay model to draw the distinction between maps and flows, which he summarizes in a table. Continuing this comparison, he next takes up the logistic differential equation (Verhulst, 1845). Then he proceeds to some multi-dimensional models: circular motion, the simple harmonic oscillator (to be taken up again in chapter 8 on Hamiltonian systems), the driven harmonic oscillator (from the introductory chapter), the damped harmonic oscillator, the driven damped harmonic oscillator, and the van der Pol equation. Under the driven harmonic oscillator, he takes up an important topic, namely that of converting a system of nonautonomous differential equations into an autonomous system, making it amenable to solution for representation in state space. Such clear explanations are not easily found elsewhere. The chapter finishes with an explanation of the various numerical methods of solving equations (the Euler, leap-frog, and Runge-Kutta second-order and fourth-order methods) and their advantages and disadvantages. This information is essential for anyone studying geometric properties of trajectories, attractors, and portraits[1] (the set of all possible trajectories that a system of equations can generate, although the portrait is usually shown with just a few representative trajectories and critical features such as limit sets, saddles, and separatrices).
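To illustrate the sort of trade-off Sprott discusses, here is a small Python comparison (my own sketch, not from the book) of an Euler step against a fourth-order Runge-Kutta step on the decay equation dx/dt = -x, whose exact solution exp(-t) is known:

```python
import math

def euler_step(f, x, t, h):
    """One first-order Euler step: error per step is O(h^2)."""
    return x + h * f(x, t)

def rk4_step(f, x, t, h):
    """One classical fourth-order Runge-Kutta step: error per step is O(h^5)."""
    k1 = f(x, t)
    k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(x + h * k3, t + h)
    return x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

f = lambda x, t: -x          # dx/dt = -x, exact solution x(t) = exp(-t)
h, steps = 0.1, 10           # integrate from t = 0 to t = 1
xe = xr = 1.0
for i in range(steps):
    xe = euler_step(f, xe, i * h, h)
    xr = rk4_step(f, xr, i * h, h)
exact = math.exp(-1.0)
print(abs(xe - exact), abs(xr - exact))  # the RK4 error is far smaller
```

With the same step size, RK4 costs four derivative evaluations per step to Euler's one, but buys many orders of magnitude in accuracy, which is the kind of trade-off the chapter weighs.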


Dynamical systems theory (chapter 4) is the heart of the book. Saying that it covers two-dimensional equilibria, stability, the damped harmonic oscillator (again), saddle points, area contraction and expansion, nonchaotic three-dimensional attractors, and chaotic dissipative flows (including several well-known ones such as those of Lorenz, Rössler, van der Pol, and Ueda; some even simpler ones, 19 of which are summarized in a table; and some jerk systems) would hardly reveal the importance of the concepts included. It finishes with ‘shadowing’, on the relationship between a true trajectory (theoretical or experimental) and the computed one. These provide the foundations for chaotic dynamical systems theory. Here you get the no-intersection theorem in two dimensions (the Poincaré-Bendixson theorem; Hirsch & Smale, 1974), which “is a cornerstone of dynamical systems theory”. Here also is where the use of eigenvalues (characteristic exponents and multipliers) and of the determinant of the Jacobian is clearly developed. You need never have heard of these before. Previously I had Googled for such information. If I may be permitted (and even if I am not), I cannot do better to characterize the role of this chapter than to quote its first two paragraphs:


“We have seen examples of dynamical systems for iterated maps and continuous flows. Maps are simpler to analyze numerically and have a rich variety of dynamical behaviors, even in one dimension. By contrast, except in one dimension the solutions cannot do anything more complicated than grow or decay to an equilibrium point. Even in two dimensions, the most complicated behavior is growth or decay to a periodic limit cycle.

“In this chapter we will develop a more general theory of dynamical systems and extend the ideas to three dimensions where the flows can exhibit chaos. Although it is often difficult to calculate the trajectory, much can be gained from identifying the equilibrium points and examining the flow in their vicinity. Since the flow is usually smooth near these equilibria, we can make linear approximation and use these ideas developed in the previous chapter. From this knowledge, we can construct a good qualitative picture of how the flow must behave throughout the entire state space. The material in this chapter is slightly more formal than usual and makes some use of complex numbers and matrix algebra.”



Lyapunov exponents (chapter 5) are important for depicting the converging and diverging properties of a chaotic attractor. They are closely related to the eigenvalues upon which they depend, but important differences are noted and summarized in a table. There are important distinctions and relationships between local and global Lyapunov exponents. Examples for many systems are given, along with numerical methods for their computation. The concepts of a 3-torus, hyperchaos, and non-integer dimension emerge from consideration of these systems.
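As a taste of such a numerical computation, here is a Python sketch (mine, not the book's) estimating the Lyapunov exponent of the logistic map at r = 4 by averaging ln|f′(x)| along an orbit; the known value is ln 2 ≈ 0.693:

```python
import math

def logistic_lyapunov(r, x=0.1, n=100000, discard=1000):
    """Estimate the Lyapunov exponent of the logistic map
    x -> r*x*(1-x) by averaging ln|f'(x)| = ln|r*(1 - 2x)| along an orbit."""
    for _ in range(discard):          # let transients die out first
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))
    return total / n

print(logistic_lyapunov(4.0))  # ≈ ln 2 ≈ 0.693, so the map is chaotic
```

A positive average confirms the exponential divergence of nearby orbits; for a flow one would track the growth of a small perturbation with periodic renormalization instead.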


The Kaplan-Yorke (Lyapunov) dimension is defined by interpolating between the largest number of Lyapunov exponents (taken in decreasing order) whose sum is still positive (the attractor would expand, i.e., not exist, if confined to the dimensions represented by those exponents) and the minimum number for which the sum becomes negative (the attractor would contract, i.e., exist, in the dimensions represented by those exponents). The non-integer result of this interpolation gives a measure of the dimensionality of the attractor. “The dimension of the attractor is the lower limit on the number of variables required to model its dynamics.” While related, the Liapunov[2] exponents measure the behavior of a system over time; the dimension measures the complexity of the attractor. For a chaotic system of differential equations, at least one exponent must be positive (for the divergence necessary for chaos), at least one must be zero, and at least one must be negative to ensure that the sum is negative and that the volume is contracting (Abarbanel, 1996, p. 27).
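The interpolation can be sketched in a few lines of Python (my own illustration; the Lorenz exponents below are approximate values quoted in the literature):

```python
def kaplan_yorke(exponents):
    """Kaplan-Yorke dimension from a Lyapunov spectrum:
    D = j + (sum of the j largest exponents) / |exponent j+1|,
    where j is the largest count for which the partial sum is non-negative."""
    lam = sorted(exponents, reverse=True)
    total, j = 0.0, 0
    while j < len(lam) and total + lam[j] >= 0.0:
        total += lam[j]
        j += 1
    if j == len(lam):                 # sum never turns negative
        return float(j)
    return j + total / abs(lam[j])

# Approximate Lyapunov spectrum of the Lorenz attractor: (+, 0, -)
print(kaplan_yorke([0.906, 0.0, -14.57]))  # ≈ 2.06, a non-integer dimension
```

The result just above 2 matches the familiar picture of the Lorenz attractor as slightly "thicker" than a surface.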


Strange attractors (chapter 6) begins with 12 properties, such as limit set, invariance, stability, sensitivity to (divergence from) nearby initial conditions, and, yes, aesthetics. He develops the idea of the probability of chaos within a dynamical system (the proportion of parameter space yielding chaos) and search techniques for finding values yielding chaos. Sprott is well known for developing this approach (see his earlier book or his website). He is also known for programs that display such attractors in 3D (a plane with color for the third dimension). His development of such programs to display them in various formats and with their statistical properties not only illuminates these properties, but makes them available for use in research, such as his studies of the aesthetics of the attractors (many URLs are given). I remember that around 1998 a friend, Elliot Middleton, informed me of such a program on Sprott’s site, which I immediately downloaded, recognizing its potential for a study of perception as a function of attractor dimension. I was mesmerized for hours by the beauty of the attractors, and I have since used the program in psychophysical research. The chapter also includes a discussion of the routes to chaos, basin boundaries, fractal basin boundaries, and structural stability.


I consider Bifurcations (chapter 7) the most important property of nonlinear systems. Sprott considers them important “because they provide strong evidence of determinism in otherwise seemingly random systems, especially if the parameters can be repeatedly changed back and forth across the” bifurcation point. I consider them important from a slightly different perspective, that of a system that can explain differing patterns of experimental data whose connection may not have been previously noticed and for which different models might have been suggested. Both perspectives represent the same parsimonious point of view. In addition to the usual topics and forms of bifurcation, such as folds, transcritical, pitchfork, flip, Hopf, Neimark, and blue sky (R. H. Abraham & C.D. Shaw, 1992; R. H. Abraham & Stewart, 1986, and on the cover of Thompson & Stewart, 1986), it also includes homoclinic and heteroclinic bifurcations, examination of Lyapunov spectra as a function of control parameters especially with bifurcations that involve transient chaos, and crises.


Conservative systems (ideal systems that do not dissipate energy) include familiar examples like ideal frictionless pendula, for which Hamiltonian equations may be used for analysis (Hamiltonian chaos, chapter 8). The chapter also includes symplectic maps, which are important for systems where “the numerical methods may not precisely conserve the invariants”. I liked this section because it also explains the mathematical relationship between the flow (in n dimensions) and the map (in n−1 dimensions) used to approximate its Poincaré section conservatively.
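As an example of such an area-preserving map, here is a Python sketch (my own; it uses the Chirikov standard map, a stock example of Hamiltonian chaos, not necessarily the one emphasized in this chapter) that also checks the symplectic property numerically:

```python
import math

def standard_map_step(theta, p, K):
    """One iteration of the Chirikov standard map (area-preserving).
    Angles may be reduced mod 2*pi for plotting; omitted here so that
    finite differences stay smooth."""
    p_new = p + K * math.sin(theta)
    theta_new = theta + p_new
    return theta_new, p_new

def jacobian_det(theta, p, K, eps=1e-6):
    """Finite-difference estimate of the Jacobian determinant of one step;
    for an area-preserving map this should be 1."""
    t0, p0 = standard_map_step(theta, p, K)
    t1, p1 = standard_map_step(theta + eps, p, K)
    t2, p2 = standard_map_step(theta, p + eps, K)
    a, b = (t1 - t0) / eps, (t2 - t0) / eps
    c, d = (p1 - p0) / eps, (p2 - p0) / eps
    return a * d - b * c

print(jacobian_det(1.0, 0.5, K=0.97))  # ≈ 1.0: area is conserved
```

Unit Jacobian determinant is exactly the invariant that a good symplectic integrator must preserve, which is why ordinary dissipative integration schemes can be misleading for Hamiltonian systems.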


Since dynamical systems evolve in time, Time-series properties (chapter 9) are the heart of the analysis of data obtained from them. Often, combining traditional linear methods with nonlinear methods helps to illuminate a dynamical process. Also, because some aspects of the analyses are methodologically similar, knowing the linear methods helps one understand the nonlinear ones. Sprott provides a quick review of traditional linear methods (including the topics of stationarity, detrending, noise, autocorrelation, and Fourier analysis). Comparison of a noise signal with one produced by a one-dimensional map illustrates the use of surrogate data (Monte Carlo methods) and return maps to determine whether your data contain deterministic as well as stochastic information. The final part of the chapter introduces the time-delay embeddings used for attractor reconstruction and for “determining the dimension of an attractor”. The computer project for this chapter involves running an autocorrelation function on data generated by the Lorenz equations.
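The autocorrelation project is easily sketched; here is a minimal Python version (mine, not the book's), run on a sine wave rather than Lorenz data for brevity:

```python
import math

def autocorrelation(x, max_lag):
    """Sample autocorrelation of a time series for lags 0..max_lag,
    normalized so that the value at lag 0 is 1."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    acf = []
    for lag in range(max_lag + 1):
        c = sum((x[i] - mean) * (x[i + lag] - mean)
                for i in range(n - lag)) / n
        acf.append(c / var)
    return acf

series = [math.sin(0.1 * i) for i in range(1000)]
acf = autocorrelation(series, 5)
print(acf[0], acf[1])  # lag 0 gives 1.0; nearby lags stay high for a sine
```

For Lorenz data the interesting feature is how quickly the autocorrelation decays, one common (if linear) guide to choosing the lag for the time-delay embeddings introduced at the end of the chapter.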

“One of the most important applications of time-series analysis is prediction (or forecasting)”; that is Nonlinear prediction and noise reduction (chapter 10). I myself am more interested in models than prediction, but these turn out to be fundamentally related, and thus this chapter is exceptionally exciting. “In some cases, prediction entails developing a global dynamical model for the data, which may illuminate the underlying mechanisms.” Therefore, comparison of models with data lies at the heart of both prediction and the evaluation of models. After showing how the linear methods of autoregression cannot help with prediction of chaos, and also showing the practical limitations of nonlinear methods, including both methods that use equations and those that compare nearby trajectories, Sprott then shows how a method called random analog prediction works rather well for four examples. Similarly, linear noise-reduction techniques are of little use with chaotic data, and thus state-space averaging is preferred.


Prediction depends on evaluating the divergence of nearby trajectories, which, for data not modelled by equations, means measuring the rates of divergence and summarizing them in Lyapunov exponents. Sprott explains several methods of doing this and compares their relative advantages and drawbacks.


The subject of embedding from the end of the previous chapter (9) is examined further in this chapter with the discussion of false nearest neighbors, a method for estimating the optimal Cartesian embedding dimension by systematically increasing the embedding dimension until separation of nearby points no longer occurs (Kennel et al., 1992; Abarbanel et al., 1993). While traditionally the main uses have been to get a best view of the attractor, to help determine when other measures of fractal dimensionality have saturated (become asymptotic), and to estimate the likely number of variables involved in the dynamical system under investigation, Sprott points out other uses, such as evaluating whether sufficient points are in a neighborhood to support prediction. Stewart has described the extension of the technique to multivariate data[3] (Stewart, 1996; also presented in Abraham, 1997). The related subject of recurrence plots is also taken up, along with a derivative (actually integrated) space-time plot method developed by Smith (1992) and Provenzale et al. (1992). Sprott evaluates various computational algorithms, suggesting Schreiber’s (1995) as “a good compromise between simplicity and efficiency”.
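A brute-force sketch of the false-nearest-neighbors idea (my own simplification of the Kennel et al. criterion, using only the distance-ratio test) looks like this in Python; for data from a one-dimensional map, one embedding dimension already unfolds the dynamics, so the false-neighbor fraction comes out essentially zero:

```python
import math

def delay_embed(x, dim, tau):
    """Time-delay embedding: vectors (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(x) - (dim - 1) * tau
    return [[x[i + k * tau] for k in range(dim)] for i in range(n)]

def false_neighbor_fraction(x, dim, tau, threshold=10.0):
    """Fraction of nearest neighbors in dimension `dim` that separate
    sharply when the embedding is extended to dim+1 (simplified
    Kennel-style criterion, brute-force neighbor search)."""
    emb = delay_embed(x, dim, tau)
    n = len(x) - dim * tau            # points that survive in dimension dim+1
    n_false = 0
    for i in range(n):
        best, jbest = float("inf"), -1
        for j in range(n):            # nearest neighbor in dimension dim
            if j == i:
                continue
            d = math.dist(emb[i], emb[j])
            if 0.0 < d < best:
                best, jbest = d, j
        if jbest < 0:
            continue
        # extra separation contributed by the (dim+1)-th coordinate
        extra = abs(x[i + dim * tau] - x[jbest + dim * tau])
        if extra / best > threshold:
            n_false += 1
    return n_false / n

r, xv, data = 4.0, 0.3, []
for _ in range(300):                  # logistic-map data: intrinsically 1-D
    xv = r * xv * (1.0 - xv)
    data.append(xv)
print(false_neighbor_fraction(data, dim=1, tau=1))  # → 0.0 for map data
```

For data from a higher-dimensional flow, the fraction would stay high at low embedding dimensions and drop toward zero once the attractor is unfolded, which is how the optimal dimension is read off.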


Principal component analysis (also known by several synonyms, including singular value decomposition and Karhunen-Loève decomposition) is useful for noise reduction, for estimating dimension, and for building model equations with polynomials.


Among artificial neural network predictors, single-layer feed-forward networks are mentioned for their computational and conceptual ease and for the large literature on optimization and training. Two methods are mentioned: multi-dimensional Newton-Raphson (repeat the calculation of the error until it “stops changing or you lose patience”) and a simplified variant, simulated annealing. I might mention a really nice and innovative combination of the methods of artificial neural nets (but using multi-layer feedback) with the dynamical analysis of real neural nets in molluscan brains in the work of Mpitsos (2000).


Chapter 10 thus covers a lot of topics in a rather short space; while providing a great introduction to them, it is sometimes rather brief, and may require additional reading or the use of Sprott’s web page and other links and references. I found that following the indices in the matrix algebra sometimes required a bit of such effort, but that is related to the atrophy of my rudimentary matrix-algebra skills, acquired quite some time ago.


Sprott takes up Fractals (chapter 11) from a broader view than simply that of their being the “geometric manifestation of chaotic dynamics”. “Dynamical systems are only one way to produce fractals.” Some of these include Cantor sets and fractal curves (the devil’s staircase, the Hilbert curve, the Koch snowflake, the basin boundary of a Julia set, and the Weierstrass function). Several examples are given of fractal trees, fractal gaskets, fractal sponges, random fractals, and fractal landscapes (forgeries). A consideration of natural fractals (nature exhibiting fractal properties) completes the chapter, which provides an important transition to the next two chapters on measuring fractal dimension. The computer project involves picking up from the project of chapter 9 and doing attractor reconstructions and return maps from the same Lorenz data set.


A few of the methods are presented for the Calculation of the fractal dimension (chapter 12). The Kaplan-Yorke dimension works when the equations are known. Estimating the Lyapunov spectrum for an empirical time-series is difficult, so other methods, of which there are many, may be used. These include the similarity dimension, the capacity dimension, and the correlation dimension. Next, there is a discussion of “entropy, which is the sum of the positive Lyapunov exponents and measures the rate at which predictability is lost”; it is obviously related to Shannon’s information theory, which borrowed the term entropy from thermodynamics. The BDS statistic measures the amount of determinism in a time-series by evaluating its departure from randomness; it depends on the correlation dimension. Minimum mutual information is a measure used to help estimate lags for a time-delay embedding and thereby to play a role in determining the embedding dimension and the attractor reconstruction. Sprott then evaluates some practical considerations, including the speed of calculations, the required size of data sets, precision, noise, the use of multivariate data, filtering, missing data, sample spacing, and nonstationarity. The advantages of using multivariate data were well stated, but a more extended treatment might help to underline their importance. The final method is that of computing the fractal dimension of graphic images such as a logistic map, a Sierpinski carpet, or trees. This method can be especially important for those trying to use natural images in psychological studies, or comparing other stimuli in psychophysical research to natural objects, especially when conjectures about the evolution of perceptual processes are of interest.
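The correlation dimension can be illustrated with a minimal Grassberger-Procaccia-style correlation sum (my own sketch, run on data uniformly filling an interval, whose correlation dimension is 1):

```python
import math

def correlation_sum(points, r):
    """C(r): fraction of distinct point pairs closer together than r."""
    n = len(points)
    count = sum(1 for i in range(n) for j in range(i + 1, n)
                if abs(points[i] - points[j]) < r)
    return 2.0 * count / (n * (n - 1))

# Data uniformly filling an interval: C(r) scales like r^1,
# so the log-log slope estimates a correlation dimension of 1.
points = [i / 1000.0 for i in range(1000)]
r1, r2 = 0.01, 0.1
slope = (math.log(correlation_sum(points, r2)) -
         math.log(correlation_sum(points, r1))) / (math.log(r2) - math.log(r1))
print(slope)  # ≈ 1
```

For a strange attractor one embeds the time series first and looks for a scaling region in the log-log plot; the practical cautions Sprott lists (data-set size, noise, sample spacing) are exactly what make that scaling region hard to find in real data.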


What do you do if your correlation dimension isn’t converging nicely? Are your various fractal dimensions disparate? Is your fractal not very homogeneous? Is that what is getting you down? You need a Fractal measure for your multifractal (non-homogeneous or compound fractal) (chapter 13). This involves an extension of the fractal dimensions of the preceding chapter into a spectrum of generalized dimensions. Numerical calculations and their limitations are discussed. Alternative characterizations of multifractals can be achieved with the similarity spectrum or a dynamical spectrum of entropies. This is a complex but highly enlightening chapter.


Nonchaotic fractal sets (chapter 14) is a discussion of fractal objects generated by systems other than chaotic dynamical systems. These include iterated function systems (the chaos game and affine transformations), which can be used to create images simulating natural objects and to compress images[4]. They can also be used to create patterns from data that give visual clues to deterministic and stochastic features of the data (the IFS clumpiness test). The fractals include Julia sets, Fatou sets (their complement), and their generalizations; the Mandelbrot set is a map of the Julia sets. These objects are far more complex than their simple equations suggest, so their clear and succinct introduction here is very valuable. The chapter includes escape contours, a list of some interesting variants, the basins of Newton’s method, and a discussion of methods for achieving computational speed in the demanding calculations necessary for exploring fractals.
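The chaos game itself takes only a few lines; this Python sketch (mine, not the book's) generates the Sierpinski gasket and checks that the central removed triangle is never visited:

```python
import random

# The "chaos game": repeatedly jump halfway toward a randomly chosen
# vertex of a triangle; the visited points fill in the Sierpinski gasket.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
random.seed(1)
x, y = 0.2, 0.2
points = []
for i in range(20000):
    vx, vy = random.choice(vertices)
    x, y = (x + vx) / 2.0, (y + vy) / 2.0
    if i > 100:                      # discard the initial transient
        points.append((x, y))

# A box strictly inside the central removed triangle should stay empty:
hole = [p for p in points if 0.3 < p[0] < 0.7 and 0.45 < p[1] < 0.5]
print(len(hole))  # 0: the middle triangle is never visited
```

This is the same random-iteration principle behind IFS image compression and the clumpiness test: the attractor of the IFS, not the particular random sequence, determines the picture.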


Spatiotemporal chaos and complexity is the final chapter (15). Spatiotemporal chaos means that chaos is exhibited over spatial as well as temporal dimensions. Complexity refers to a broad class of subjects not unified by any single theory, though many of them depend on dynamical-systems concepts. The term includes not only chaos, fractals, neural networks, and artificial life, but also complex dynamical systems, which in turn include cellular automata, lattices, and self-organization. Cellular automata, for example, are networks of dynamical systems in which nearby neighbors are coupled (share variables). Some special systems that have seen widespread deployment in many sciences include self-organized criticality, the Ising model, and percolation, the last being of interest as a complex adaptive system. Coupled lattices (also called continuous cellular automata) are a generalization of cellular automata whose cells take continuous values. For infinite-dimensional systems, “with infinitely many lattice points, the discrete models approach the spatially continuous case in the same way maps approach temporally continuous flows.” Several examples are given (Mackey-Glass, Navier-Stokes, and three others). A summary of spatiotemporal models in terms of discrete or continuous spatial, temporal, or state conditions is given, along with consideration of criteria and trade-offs for usage.
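A minimal cellular automaton shows the coupled-neighbors idea in a few lines; this Python sketch is my own (using the much-studied elementary Rule 30, not necessarily an example from the book):

```python
# Elementary cellular automaton, Rule 30: each cell's next state depends
# only on itself and its two nearest neighbors, read as a 3-bit index
# into the binary expansion of the rule number.
RULE = 30

def step(cells):
    """Advance one generation with periodic (wraparound) boundaries."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1 for i in range(n)]

row = [0] * 31
row[15] = 1                          # a single seed cell in the middle
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

From this single seed, Rule 30 generates the famously irregular triangular pattern, a compact illustration of complex spatiotemporal behavior arising from a trivially simple local rule.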


His final concluding remarks are of special interest to me, as they reflect the way I used to finish off many of my own articles, and they raise issues also emphasized by Christine Hardy (1998): issues of free will and responsibility (social, ecological, and, for Sprott, aesthetic) for a more beautiful world.


(4) Conclusion and recommendations


It should be clear by now that I consider this book an indispensable addition to my bookshelf. I intend to use it to get a good foundation in almost all aspects of dynamical systems theory, especially, of course, chaos theory and data analysis. For those with the minimum recommended mathematical background (elementary calculus and some matrix algebra), some of the review might still have sounded a bit forbidding, but as Sprott promises, the explanations and support material will fill in the necessary updating of your mathematics. In most instances the explanations were clear and covered almost all related aspects of each subject. It was very impressive. Occasionally I found myself struggling to keep the meanings of indices straight in the matrix-algebraic expressions, but much less so than in any other similar book I have encountered. The later chapters sometimes compress so many topics that some supplementation from the web sources or other books is demanded. I suspect that if you do the exercises and programming projects, you will find them very challenging and time-consuming, but very rewarding. The extensive and evolving website back-up makes the book unique and even more valuable; it stays up to date as the field evolves. The book is thus perfect for self-instruction, for use as a classroom textbook, and, of course, as a reference work for workers in any field of science.


I’d like to end the review with my recommendations for a basic bookshelf. For learning dynamics, the two books I consider most important are this book of Sprott’s and the book that visualizes most of basic dynamics, that of Ralph Abraham and Christopher Shaw (1992). Between them they give the basic elements of theory and data analysis. For additional details concerning theory, Schroeder (1991) and Thompson & Stewart (1986) are excellent; for data treatment, I like Abarbanel (1996), Abarbanel et al. (1993), Kantz & Schreiber (1997), Rapp (1994), and Ott, Sauer, & Yorke (1994), which has excellent introductory chapters followed by reprints of many classic papers. I cannot say that there may not be others as good or better, but these are what have found their way onto my bookshelf, proven most useful, and helped me struggle toward a better understanding of dynamics. Of the many exceptional books on fractals, all the Mandelbrot and Peitgen books are great, but the best foundation for me has been Peitgen, Jürgens, & Saupe (1992). Give me one more lifetime, and I might get it. Thanks, Dr. Sprott.


(5) References


Abarbanel, H.D.I. (1996). Analysis of observed chaotic data. New York: Springer-Verlag.


Abarbanel, H.D.I., Brown, R., Sidorowich, J.J., & Tsimring, L.Sh. (1993). The analysis of observed chaotic data in physical systems. Reviews of Modern Physics, 65, 1331-1392.


Abraham, F.D. (1997). Nonlinear coherence in multivariate research: Invariants and the reconstruction of attractors. Nonlinear Dynamics, Psychology, and Life Sciences, 1, 7-33.


Abraham, R.H., & Shaw, C.D. (1992). Dynamics: The geometry of behavior (2nd ed.). Redwood City: Addison-Wesley.


Abraham, R.H., & Stewart, H.B. (1986). A chaotic blue-sky catastrophe in forced relaxation oscillations. Physica 21D, 394-400.


Barnsley, M.F., & Hurd, L.P. (1993). Fractal image compression. Wellesley: A. K. Peters.


Hardy, C. (1998). Networks of Meaning. Westport: Greenwood/Praeger.[5]


Hirsch, M.W., & Smale, S. (1974). Differential equations, dynamical systems and linear algebra. New York: Academic.


Kantz, H., & Schreiber, T. (1997). Nonlinear time series analysis. Cambridge: Cambridge.


Kennel, M., Brown, R., & Abarbanel, H. (1992). Determining embedding dimension for phase-space reconstruction using a geometrical construction. Physical Review A 45, 3403-3411.


Macey, R., Oster, G., & Zahnley, T. (2000). Berkeley Madonna user’s guide, v. 8.0. UC Berkeley.


Mpitsos, G.J. (2000). Attractors: Architects of network organization? Brain, Behavior, and Evolution, 55, 256-277.


Ott, E., Sauer, T., & Yorke, J.A. (Eds.). (1994). Coping with chaos: Analysis of chaotic data and the exploitation of chaotic systems.[6]


Peitgen, H.-O., Jürgens, H., & Saupe, D. (1992). Fractals for the classroom, Parts one and Two. New York: Springer-Verlag.


Rapp, P.E. (1994). A guide to dynamical analysis. Integrative Physiological and Behavioral Science, 29, 311-327.


Schreiber, T. (1995). Efficient neighbor searching in nonlinear time series analysis. International Journal of Bifurcation and Chaos, 5, 349-358.


Schroeder, M. (1991). Fractals, chaos, power laws: Minutes from an infinite paradise. New York: Freeman.


Smith, L.A. (1992). Comments on the paper of R. Smith, “Estimating dimension in noisy chaotic time series.” Journal of the Royal Statistical Society, Series B (Methodological), 54, 329-352.


Sprott, J.C. (1993). Strange attractors: creating patterns in chaos. New York: M&T Books.


Sprott, J.C., & Rowlands, G. (1995). Chaos data analyzer: the professional version. Raleigh: Physics Academic Software.


Sprott, J.C., & Rowlands, G. (1995). Chaos demonstrations. Raleigh: Physics Academic Software.


Stewart, H.B. (1996). Chaos, dynamical structure and climate variability. In D. Herbert (Ed.), Chaos and the changing nature of science and medicine, an introduction. Conference proceedings, 376, (80-115). Woodbury: American Institute of Physics.


Thompson, J.M.T., & Stewart, H.B. (1986). Nonlinear dynamics and chaos. New York: Wiley.


Verhulst, P.F. (1845). Recherches mathématiques sur la loi d’accroissement de la population. Nouveaux Mémoires de l’Académie Royale des Sciences et Belles-Lettres de Bruxelles, 18, 1-45.





[1] I omit the term ‘phase’ as a qualifier for ‘space’ and ‘portrait’ except when the position of an object in the state space is a function of velocity (and other higher moments) or momentum, though common usage does not generally recognize this historical lineage as a restriction.

[2] I follow Sprott’s transliteration, but otherwise use Liapunov as I saw it on his old office at Moscow State University. Both transliterations, and other variants, are correct.

[3] Here is where the reviewer (honestly, I tried to avoid it) shamelessly inserts a reference to a paper of his own, which, while not original in that it presents the work of Kennel et al. (1992) and Abarbanel et al. (1993), is reasonably clear (Abraham, 1997).


[4] An interesting footnote in this chapter concerns the use of a collage system (Barnsley & Hurd, 1993), which can be used to create .FIF high-compression image files, first used by Microsoft to enable the whole Encarta encyclopedia to fit on one CD.

[5] Reviewed at\dynamics\hardy-testimonial.htm

[6] Reviewed at\dynamics\osy.htm