Multivariate Approaches to Nonlinear Coherence: Stewart's Methods Based on False Nearest Neighbors: A Brief Postscript on their Significance

Prepared for: The International Conference on Nonlinear Dynamics &
Brain Functioning, February 5-11, 1998, Bangalore, India

Frederick David Abraham

Blueberry Brain Institute & Chaos Cooperative, Waterbury Center VT USA 05677
and
Psychology Department, Silliman University, Dumaguete City, Philippines 6200

Attractor construction usually tries to evaluate dimensionality and other attractor invariants by an iterative procedure in which the computation of the statistic handshakes with the construction of the geometry of the attractor. Ideally, the procedure ends up with a state space using the true number of variables participating in the process producing the time series in the experiment (the 'embedding dimension'), with a trajectory that reveals the nature of the interactions of those participating variables, and with a reasonably accurate estimation of the statistic (Abarbanel, Brown, Sidorowich, & Tsimring, 1993; Abraham, 1997; Ott, Sauer, & Yorke, 1994; Rapp, 1994). In traditional linear frequency analysis of, say, EEG, brain cooperativity is revealed in part by the similarities among power spectra and cospectra, phase spectra, and, most importantly, by coherence spectra (Abraham et al., 1973; Walter, Rhodes, & Adey, 1967). There is a resurgence of interest in coherence due to the recent influence of nonlinear approaches (e.g., Anderson & Mandell, 1996). There is such an explosion of variables in such research that, in our earlier linear work, we used step-wise discriminant analyses, a canonical method of finding the most important variables from among a large number of them (Abraham et al., 1973; Abraham, 1997; Walter et al., 1967), not unlike the purpose and nature of factor analysis. It is, of course, redundant to use the term 'coherence' with nonlinear techniques, since the dimensionality (characteristic exponents and multipliers, the correlation dimension and its variants, as well as the embedding dimension), the Lyapunov spectrum, and the geometric portrait provided by the constructed trajectory or attractor are specifically designed to reveal the strength and nature of the cooperativity among the essential variables. The term 'coherence', therefore, is used merely to emphasize an affinity of the nonlinear objectives with the older linear methods of exploring brain cooperativity.
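
To make the linear point of comparison concrete, here is a minimal sketch of a coherence-spectrum estimate between two channels, of the kind the older EEG work relied on. It uses scipy.signal.coherence; the sampling rate, the synthetic channels, and the spectral parameters are illustrative assumptions, not values from the studies cited above.

```python
# Minimal sketch of a linear coherence spectrum between two channels, the
# older EEG measure referred to above.  The sampling rate, the synthetic
# "channels," and the spectral parameters are illustrative assumptions.
import numpy as np
from scipy.signal import coherence

fs = 256.0                          # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)        # 10 s of synthetic data

shared = np.sin(2 * np.pi * 10 * t)                 # common 10 Hz component
chan_a = shared + 0.5 * np.random.randn(t.size)     # plus independent noise
chan_b = 0.8 * shared + 0.5 * np.random.randn(t.size)

# Magnitude-squared coherence: near 1 where the channels covary, near 0 elsewhere.
freqs, cxy = coherence(chan_a, chan_b, fs=fs, nperseg=512)
print("peak coherence at", freqs[np.argmax(cxy)], "Hz")
```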

In the usual reconstruction of a trajectory (attractor) from a single variable, one alternates between estimating a statistical measure of the attractor and drawing the trajectory in a state space, successively adding one embedding dimension to the state space at each step in the cycle, and proceeding until the statistic becomes asymptotic ('saturates'). Another criterion for knowing when to stop involves evaluating when the attractor achieves its best shape. The false nearest neighbor technique is an algorithmic way to determine when that best shape has been reached. The process proceeds iteratively until there are no nearby points (on an iterative map or Poincaré section) that become significantly separated (as measured by their local divergence rates) as more embedding dimensions are added (Abarbanel et al., 1993; Abraham, 1997; Stewart, 1996).
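
As an illustration of the single-variable procedure just described, the sketch below performs a delay-coordinate embedding of a scalar series and counts false nearest neighbors at successively higher embedding dimensions, stopping when essentially none remain. The delay, tolerance, stopping threshold, and input file are illustrative assumptions rather than prescribed values.

```python
# Sketch of the false-nearest-neighbor test for a single observed variable,
# in the spirit of the procedure described above (Abarbanel et al., 1993).
# Brute-force neighbor search; parameters are illustrative choices.
import numpy as np

def delay_embed(x, dim, tau):
    """Delay-coordinate embedding of a scalar series x into `dim` dimensions."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def false_neighbor_fraction(x, dim, tau, r_tol=15.0):
    """Fraction of nearest neighbors in `dim` that separate sharply in dim + 1."""
    emb = delay_embed(x, dim, tau)
    emb_next = delay_embed(x, dim + 1, tau)
    n = len(emb_next)                    # points usable in both embeddings
    false = 0
    for i in range(n):
        d = np.linalg.norm(emb[:n] - emb[i], axis=1)
        d[i] = np.inf                    # exclude the point itself
        j = int(np.argmin(d))            # nearest neighbor in dimension `dim`
        gap = abs(emb_next[i, -1] - emb_next[j, -1])   # new-coordinate separation
        if gap / max(d[j], 1e-12) > r_tol:
            false += 1
    return false / n

# Increase the embedding dimension until essentially no false neighbors remain.
x = np.loadtxt("timeseries.dat")         # hypothetical scalar recording
for dim in range(1, 11):
    frac = false_neighbor_fraction(x, dim, tau=10)
    print(dim, frac)
    if frac < 0.01:
        break
```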

For all practical purposes, this procedure works best when the dimensionality of the space is not too great, and when the variable being measured is fairly tightly coupled to all relevant underlying processes. In research, to increase the chance of choosing such a variable, we may select a few carefully, or pick a large battery of them, hoping to sort out afterwards the important variables from the trivial ones, the wheat from the chaff. Stewart (1996; reviewed in Abraham, 1997) has extended the false nearest neighbor technique to the multivariate case in the expectation that identifying relevant factors can be more efficient and meaningful when reducing from a larger set of measures than when reconstructing from a single variable. He has shown a small advantage in efficiency, but, more importantly, the identification of relevant factors can be especially aided by some interactive, insightful, free play with the variable selection procedure, rather than just letting it proceed automatically with a fixed set of measures taken from the experiment.

Here are his criteria for how the technique proceeds. First, one browses the variables for the pair with the lowest divergence rate (provided it is lower than the surrogate value). A variable can fail this surrogate test and yet contribute significantly to the dynamics when combined with other variables; this is the first opportunity for playing with the selection choice rather than adhering to strict algorithmic criteria. Second, one selects from the remaining variables the one which, added to the first two, increases the divergence the most, iterating and adding embedding dimensions to the state space until the divergence ratio saturates. Again, judicious playing or reasoning may help select variables that do a better job than blind application of the algorithm, but the basic principles should hold. Third, one compares the efficiency of this multivariate technique with that of reconstruction from a single variable, asking whether it rejects false neighbors at a faster rate over the iterative steps and whether it arrives at a final set sooner (that is, with a lower dimensionality). In the example Stewart gives, the multivariable method shows a slight advantage. Stewart's work also shows how the technique can be used to prune redundant variables from a large theory as well as from an empirical study.
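
These steps can be read as a greedy forward-selection loop. The sketch below is only a schematic of that logic, not Stewart's published algorithm: the divergence statistic is a simple stand-in (the mean log separation rate of nearest neighbors over a short horizon), the surrogate test uses column-shuffled data, and the strict criteria are applied automatically, without the interactive play recommended above.

```python
# Schematic of the selection steps just described.  This is not Stewart's
# (1996) code: `divergence_rate` and `surrogate_value` are simple stand-ins.
# Rows of `data` are time samples, columns are the recorded variables.
from itertools import combinations
import numpy as np

def divergence_rate(data, subset, horizon=5):
    """Stand-in statistic: mean log rate at which nearest neighbors, in the
    space spanned by the chosen columns, separate over `horizon` steps."""
    pts = data[:, subset]
    n = len(pts) - horizon
    rates = []
    for i in range(n):
        d0 = np.linalg.norm(pts[:n] - pts[i], axis=1)
        d0[i] = np.inf                              # exclude the point itself
        j = int(np.argmin(d0))                      # nearest neighbor
        d1 = np.linalg.norm(pts[i + horizon] - pts[j + horizon])
        rates.append(np.log(max(d1, 1e-12) / max(d0[j], 1e-12)) / horizon)
    return float(np.mean(rates))

def surrogate_value(data, subset, rng=np.random.default_rng(0)):
    """Stand-in surrogate test: the same statistic on independently shuffled columns."""
    shuffled = data.copy()
    for c in subset:
        rng.shuffle(shuffled[:, c])
    return divergence_rate(shuffled, subset)

def select_variables(data, tol=0.05):
    n_vars = data.shape[1]
    # Step 1: the pair of variables with the lowest divergence rate, preferring
    # pairs that beat their surrogate values.  (As noted above, a strict
    # surrogate criterion may be overridden; here we simply fall back.)
    pairs = [(divergence_rate(data, list(p)), list(p))
             for p in combinations(range(n_vars), 2)]
    passing = [p for p in pairs if p[0] < surrogate_value(data, p[1])]
    rate, chosen = min(passing) if passing else min(pairs)
    # Step 2: greedily add the variable that increases the divergence the most,
    # stopping when the statistic saturates (relative change below `tol`).
    while True:
        remaining = [v for v in range(n_vars) if v not in chosen]
        if not remaining:
            break
        new_rate, new_var = max((divergence_rate(data, chosen + [v]), v)
                                for v in remaining)
        if abs(new_rate - rate) <= tol * max(abs(rate), 1e-12):
            break
        chosen, rate = chosen + [new_var], new_rate
    return chosen, rate
```

With real data one would substitute whatever divergence measure and surrogate scheme the analysis adopts, and inspect the intermediate rates rather than trusting the saturation test blindly.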

As with most dynamical analysis techniques, much use is made of surrogate/Monte Carlo methods and of calibrating with theoretical dynamical systems. Also, following ideas from the groups of Ott et al. (1994) and Abarbanel et al. (1993), I suggest that, in addition to constructions from multiple variables and reconstructions from single variables, one consider hybrid sets as well. Select some of the variables that empirical or theoretical considerations suggest may be most important, some of those that fare best with single-variable reconstruction, and some from the multivariable technique; then create a new multivariable set from all those selected in these ways and restart the procedure. This could aid the search for the combinations of variables that do the job best and lead to the best insights into underlying processes.
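
As a rough illustration of that hybrid strategy, the sketch below pools candidates from the three sources and reruns the selection on the pooled set, reusing divergence_rate and select_variables from the previous sketch. The input file, the a priori variable indices, and the use of the single-variable divergence statistic as a proxy for single-variable reconstruction quality are all illustrative assumptions.

```python
# Sketch of the hybrid strategy suggested above, reusing `divergence_rate` and
# `select_variables` from the previous sketch.  The file name and the a priori
# variable indices are purely illustrative.
data = np.loadtxt("recordings.dat")        # hypothetical: rows = time, columns = variables

theory_driven = [0, 3]                     # variables favored on theoretical/empirical grounds
single_best = sorted(range(data.shape[1]),
                     key=lambda v: divergence_rate(data, [v]))[:3]
multi_best, _ = select_variables(data)

# Pool the three sources, then restart the multivariate selection on the pooled set.
hybrid = sorted(set(theory_driven) | set(single_best) | set(multi_best))
chosen, rate = select_variables(data[:, hybrid])
print([hybrid[c] for c in chosen], rate)   # map back to the original variable indices
```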

A final plea: while it may seem tempting or convenient to exercise these procedures on a fairly homogeneous set of time series, such as EEG, the best studies of brain function and cooperativity will be those that range over the greatest scale of observations, from the subatomic to the psychological. Such a combination has held, and will continue to hold, the greatest promise of significant progress in integrative neuropsychological science. In short, the multivariate approach is slightly more efficient at estimating the degrees of freedom, but its main advantage is that, while both approaches capture the measurable information of a system, single-variable reconstruction is more limited in its ability to point the way to significant underlying processes than are the variables selected from a wisely chosen multivariate research program.

REFERENCES

Abarbanel, H.D.I., Brown, R., Sidorowich, J.J., & Tsimring, L.S. (1993). The analysis of observed chaotic data in physical systems. Reviews of Modern Physics, 65, 1331-1392.

Abraham, F.D. (1997). Nonlinear coherence in multivariate research: Invariants and the reconstruction of attractors. Nonlinear Dynamics, Psychology, and Life Sciences, 1, 7-33.

Abraham, F.D., Bryant, H., Mettler, M., Bergerson, B., Moore, F., Maderdrut, J., Gardiner, M., Walter, D., & Jennrich, R. (1973). Spectrum and discriminant analyses reveal remote rather than local sources for hypothalamic EEG: Could waves affect unit activity? Brain Research, 49, 349-366.

Anderson, C.M., & Mandell, A.J. (1996). Fractal Time and the foundations of consciousness: Vertical convergence of 1/f phenomena from ion channels to behavioral states. In E. MacCormac & M.I. Stamenov (eds.), Advances in Consciousness Research. Amsterdam: Benjamins.

Ott, E., Sauer, T., & Yorke, J.A. (1994). Coping with Chaos. New York: Wiley.

Rapp, P.E. (1994). A guide to dynamical analysis. Integrative Physiological and Behavioral Science, 29, 311-327.

Stewart, H.B. (1996). Chaos, dynamical structure and climate variability. In D. Herbert (ed.), Chaos and the Changing Nature of Science and Medicine, an Introduction, Conference Proceedings, 376, 80-115. Woodbury: American Institute of Physics.

Walter, D.O., Rhodes, J.M., & Adey, W.R. (1967). Discriminating among states of consciousness by EEG measurements: A study of four subjects. Electroencephalography and Clinical Neurophysiology, 22, 22-29.