A SENSE OF WONDER

by Robin Robertson
Paper presented at the 6th Annual Conference of the Society for Chaos Theory in Psychology and the Life Sciences, June 27, 1996, UC Berkeley (© Robin Robertson)


This is the 6th annual conference of the Society for Chaos Theory in Psychology and the Life Sciences. In the early years, everyone was still learning about what chaos theory was, and what possibilities it offered for their fields. Though our original name was simply the Society for Chaos Theory in Psychology, from the beginning our membership included not only psychologists--both experimental and clinical--but everyone from mathematicians and physicists at one end of the spectrum, to artists at the other, with every species of hard and soft scientist in between. Everyone was filled with a sense of wonder over the possibilities that chaos theory offered them personally, and everyone wanted to communicate their excitement to the others. Because chaos theory was still relatively new to almost everyone, it served as a lingua franca that cut across the bewildering variety of professional languages spoken by our members. Since most of us were less sophisticated then, sometimes even that common language got a little mixed-up in the process. But that just added to the excitement.

By the third conference, in Orillia, Canada, the level of sophistication had grown to the point that members from various schools of thought each felt that they knew quite clearly what chaos theory was and was not. Jeff Goldstein led a largely successful attempt to remind everyone that while there was common ground we all shared, there were also equally compelling alternative positions that we could take and still be considered part of the larger community. I think he chose a singularly apt title for his discussion: "The Tower of Babel in Nonlinear Dynamics: Toward a Clarification of Terms." One thing that saddened me even then, though, was that the wonderfully symbolic term chaos theory had already been transformed into the more accurate, but leaden, nonlinear dynamics.

Increasingly, we found things weren't as simple as we had initially thought. Paul Rapp, who was one of the early proponents of the possibility of deterministic chaos in the human nervous system, pulled back from that position as more data came in (Robertson & Combs, 1995, pp. 89-100). We began to wonder whether processes we had regarded as chaotic were not simply stochastic. Our level of caution increased, and we grew less tolerant of those who made wide-ranging claims. In the broad scientific world, chaos theory is now often seen as a passing fad, or at best as just another new tool to be lumped in with other such tools.

In response, we have grown ever more sophisticated and increasingly more concerned with the technical aspects of nonlinear dynamics. Though our membership still ranges over a wide variety of fields, there seems to be increasingly less room for those concerned with either the metaphorical possibilities of chaos theory or the philosophical challenges with which it confronts us. Perhaps this is as it should be, and is merely a mark of how any field settles into maturity. But perhaps also it is a sign that when things didn't come our way quickly enough, we settled for too little. Youthful hubris always thinks that when it finds an answer, it has found all answers. Did we really think that deterministic chaos theory would be sufficient to deal with all of reality? Were those who pursued the technical applications on the right path, or those who regarded chaos theory as a wonderful metaphor for much that they encountered in reality?

Speaking for the clinical psychologists, I doubt that any of us ever thought that the complexities we encounter in therapy could ever be fully described by the traditional chaos theory path of order breaking down into bifurcations, which lead to chaos, then eventually out to new organization. The actuality has so many more shadings than that physical picture. What was important was that chaos theory offered a better metaphorical model for the process than any other we had to that point. Though we used the shorthand of bifurcations and chaos and attractors so broadly that it made those more concerned with technical accuracy cringe, we didn't intend to be taken literally. Or maybe sometimes, in the excitement of the moment, we did take it literally, but clinical psychologists are necessarily pragmatists, since actual human beings don't fit very well into any theory. For a clinical psychologist, there will always be a client with problems, and that person takes precedence over any model, however exciting.

CABINETS OF WONDER

One of the more fascinating books of 1995 was Lawrence Weschler's Mr. Wilson's Cabinet of Wonders (Weschler, 1995). It tells the story of David Wilson's "Museum of Jurassic Technology," a strange and fascinating little museum in Los Angeles that I've been lucky enough to visit. It's filled with a wonderful miscellany of objects that would seem to defy description, yet each is lovingly presented and described with impeccable scholarship and the latest in technology. Like the strange, often self-referential stories of Jorge Luis Borges, it is very difficult to sort out the true from the fantastic, since fantasy and truth blend inextricably in the Museum of Jurassic Technology.

In order to explain why such a place exists, Weschler records the history of museums, which began as collections of just such varied wonders as those in David Wilson's now atypical museum. Weschler tells us that "by the late sixteenth and early seventeenth centuries, this sort of hoard (the chamber of wonders, in which the word wonder referred both to the objects displayed and the subjective state those objects induced in their respective viewers) was rampant all over Europe" (Weschler, 1995, p. 77).

"The late sixteenth and early seventeenth centuries." This was the same time period when science came into existence. The Renaissance had turned our eyes outward onto the world. The act of observation led inevitably toward the scientific method. But note that first came the sense of wonder. It wasn't yet time to categorize and theorize. The thinkers of the day were more like magpies, picking up every pretty little stone that they saw, not sure which would ultimately be gold.

Probably the most perfect example of the species was the Scottish polyglot James Crichton, immortalized in our language as "the admirable Crichton." When Crichton came to Paris, he posted a challenge to duel intellectually with all challengers "on any science, liberal art, or discipline, whether metaphysical, arithmetical, geometrical, astronomical, musical, optical, cosmographical, trigonometrical, or statistical, whether in Hebrew, Syriac, Arabic, Greek, Latin, Spanish, French, Italian, English, Dutch, Flemish, or Slavonian, in verse or prose at the disputant's choice" (Bishop, 1969, p. 11).

Of course, since this probably apocryphal tale comes from Crichton's hagiographer, Crichton bested all his challengers, and was rewarded with a speech of high praise and a "purse of gold." Unfortunately Crichton died a short while later in 1582, stabbed by a jealous rival for a woman's hand--a royal rival at that, just to finish his tale in a manner befitting his life. He was but 22 at the time. Strange as his story sounds, Crichton was less an anomaly than a representative of the time, a magpie who thought that his pretty little pebbles constituted universal knowledge. That could only occur at a time when, in the words of essayist Adalgisa Lugli, "men of science looked upon wonder or marvel as upon one of the essential components of the study of nature and the unraveling of its secrets...wonder defined as a form of learning--an intermediate, highly particular state akin to a sort of suspension of the mind between ignorance and enlightenment that marks the end of unknowing and the beginning of knowing" (Weschler, 1995, pp. 89-90). Of course, the opposite attitude was already emerging. By the early seventeenth century, René Descartes wrote that "what we commonly call being astonished is an excess of wonder which can never be otherwise than bad" (Weschler, 1995, p. 89).

Perhaps our own time is much like that transitional era. Legendary mathematician John Horton Conway, probably best known for creating the game of Life, is one of an increasing number who no longer agree with Descartes that "an excess of wonder...can never be otherwise than bad." "I like things that shine," Conway says, "and that involves quite often that they're a bit trashy. The magpie just picks up a piece of plastic that's covered in gold. I have taste, but I don't exercise it very frequently. So I'm just as likely to be doing something that isn't really worth doing as something that is" (Shulman, 1995, p. 98; also see Rucker, 1983, pp. 89-91; Gardner, 1978, pp. 16-19). But sometimes "things that shine" are gold: Conway's lighthearted approach led to his discovery of surreal numbers. From only two simple rules, all the integers emerge, both positive and negative, then the rational numbers, then the real numbers that fit between them, then all of Cantor's transfinite numbers. And it just begins there. Infinitesimals, which are the inverses of the transfinite numbers, emerge, then algebraic roots of transfinites and infinitesimals. Of course, the jury is still out on his discovery, as perhaps it still is on chaos theory. After all, how could something really significant emerge from such an unserious attitude?
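For those who want a taste of what Conway's two rules generate, here is the standard opening of the construction (the bracket notation is Conway's; the particular examples are the textbook ones rather than anything drawn from Shulman's article). Every number is written as a pair of sets of earlier-created numbers, a left set and a right set, and the whole hierarchy unfolds from the empty pair:

\[
0 = \{\ \mid\ \}, \quad 1 = \{0 \mid\ \}, \quad -1 = \{\ \mid 0\}, \quad \tfrac{1}{2} = \{0 \mid 1\}, \quad \omega = \{0, 1, 2, \ldots \mid\ \}, \quad \tfrac{1}{\omega} = \{0 \mid 1, \tfrac{1}{2}, \tfrac{1}{4}, \ldots\}
\]

The two rules themselves say only that no member of the left set may be greater than or equal to any member of the right set, together with a single companion test for deciding when one such pair is less than or equal to another.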

Is it too much to ask if perhaps it might be best for a while longer to regard ourselves as magpies assembling our own cabinet of wonder? In these days when all the old verities are being called into question, is it too much to again consider a sense of wonder "as a form of learning--an intermediate, highly particular state akin to a sort of suspension of the mind between ignorance and enlightenment that marks the end of unknowing and the beginning of knowing"?

If anyone is inclined to scoff at such a request, I'd like to quote someone closer to home. In his foreword to the collection of chaos theory papers which Allan Combs and I edited, Walter Freeman says that "at present we are in the joyous phase of children let out of school, who are free to wander in a garden of delights just to see what is there. The hard work of proof will come soon enough, but it should not be required before our imaginations have taken wing. Tolerance for the play of ideas is all the more important, because there is something very deep going on in this decade, and none of us immersed in this period can have the historical perspective to grasp the enormity of it" (Robertson & Combs, 1995, p. xi).

SELF-REFERENCE AND TIME

Previously I spoke from the viewpoint of a clinical psychologist, but the need to hold on to a sense of wonder should hardly be confined to clinical psychologists. Speaking now also as a mathematician, I can remember how thrilled I was when I first read in Gleick about Feigenbaum numbers. The fact that they emerged over such a wide variety of mathematical functions, and even from actual physical data, sent chills up my spine. I can remember immediately connecting it in my mind with Fourier transforms, not because they had anything mathematically in common, but because Fourier had also discovered a commonality that emerged seemingly magically out of an incredible variety of both physical data and mathematical functions. And later we will bring Fourier transforms into our discussion.

Do you remember Feigenbaum's words in the introduction to his first article on these strange new numbers (Feigenbaum, 1978, pp. 25-52)? He said that "the numbers...have been computationally determined" [my emphasis], and later: "at present our treatment is heuristic." Since when was mathematics concerned with computationally determined numbers? How could a major mathematical discovery be heuristic? This was such a new way of thinking about reality that it was almost scary, a strange new territory where pure mathematics and experimental science merged. That feeling evoked in me by Feigenbaum's words was the sense of wonder being awakened, the numinous emerging as it always does, from an unexpected area.
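Those computationally determined numbers really can be pulled out of a few lines of code. The sketch below is mine, not Feigenbaum's own renormalization calculation, and the function names and starting values are simply illustrative: it locates the "superstable" parameters of the logistic map at which the critical point lies on a cycle of period 1, 2, 4, 8, and so on, then forms the ratios of successive gaps, which close in on 4.6692...

    # A minimal sketch, assuming the standard logistic map x -> r*x*(1-x).
    # R[n] is the parameter at which x = 1/2 lies on a cycle of period 2**n;
    # the ratios of successive gaps between the R[n] approach Feigenbaum's delta.

    def F(r, n):
        """Return f^(2**n)(1/2) - 1/2; zero exactly at a superstable parameter."""
        x = 0.5
        for _ in range(2 ** n):
            x = r * x * (1.0 - x)
        return x - 0.5

    def refine(guess, n, steps=50, h=1e-8):
        """Polish a nearby guess with Newton's method and a numerical derivative."""
        r = guess
        for _ in range(steps):
            slope = (F(r + h, n) - F(r - h, n)) / (2 * h)
            r -= F(r, n) / slope
        return r

    R = [2.0, 1 + 5 ** 0.5]        # exact superstable values for periods 1 and 2
    delta = 4.7                    # rough guess, used only to extrapolate the next R
    for n in range(2, 10):
        R.append(refine(R[-1] + (R[-1] - R[-2]) / delta, n))
        delta = (R[-2] - R[-3]) / (R[-1] - R[-2])
        print(n, delta)            # successive estimates approaching 4.6692...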

Like many of us who are interested in chaos and complexity, I'm multi-disciplinary. My undergraduate work was in an oddball combination of pure mathematics and English literature; I have separate bachelor's degrees in both areas. My actual work for many years was in applied mathematics: actuarial work and computer software design. In mid-life, I went back to school and got a doctorate in clinical psychology, with an emphasis on Jung's analytic psychology. For many years now, I've balanced all those areas: doing consulting work in actuarial or software design, writing or editing books and articles largely on Jungian psychology, especially the philosophical and archetypal underpinnings of mathematics and science. I also try to function occasionally as a therapist, though not for pay.

With my background in both of the "two cultures," as C. P. Snow once termed art and science, I tend to think about chaos/complexity from opposite ends of the spectrum. I'm interested both in how the ideas of chaos/complexity fit real-life psychological situations and in the deeper philosophical issues that underlie chaos and complexity. Though I'm also a mathematician, at heart I'm a pure, not an applied, mathematician, so the middle area where the mathematics of chaos/complexity can actually be applied to experimental psychology is territory for others, not for me. At both ends of the spectrum, I respond to a feeling of wonder at the magical possibilities that are revealed.

One area of chaos/complexity that interests me deeply is self-reference. For me, self-reference is the single overriding factor that unites all the areas we deal with. In Feigenbaum's work, it was iteration (which is just self-reference under another name) that smoothed away the differences in the wide variety of mathematical functions he dealt with, and allowed the magical uniformity of Feigenbaum numbers to emerge. It is self-reference that allows autopoietic structures to retain their degree of autonomy despite a widely varying set of circumstances emerging over time. Of course, self-reference also shows us how such self-referential structures can change into other self-referential structures.

My first exposure to self-referential issues wasn't through chaos/complexity, but in studying Georg Cantor's set theory and Kurt Gödel's famous incompleteness theorem. I've been thinking about the issues both raised for over thirty years now. When I became interested in Jungian psychology seventeen years ago, I began to reconcile Cantor and Gödel with Jung's archetypal psychology. For me, the fit was nearly perfect. In comparison, bringing in chaos and complexity is relatively new to me, only occurring within the last five or six years.

THE PARADOXES OF SELF-REFERENCE

Consider the logical paradoxes that appeared after Cantor developed his set theory late in the nineteenth century. Set theory was the first theory, and is still the major one, to encompass all of mathematics. It was for mathematicians what a GUT (Grand Unified Theory) would be for physicists if it could ever be developed. Central to Cantor's theory was his definition of an infinite set as a set which could be put into a one-to-one mapping with a proper subset of itself.

Now this strange concept of mapping an infinite set onto a subset of itself was hardly new: in the early 1600s, Galileo pointed out that there appear to be the same number of whole numbers and squares, since we can map 1 with 1, 2 with 4, 3 with 9, and so on. The problem, he felt, only occurs "when we attempt, with our finite minds, to discuss the infinite, assigning to it those properties which we give to the finite and limited; but this is wrong, for we cannot speak of infinite quantities as being one greater than or less than or equal to another" (Crew & De Salvio, 1914, p. 26).
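Written out, Galileo's pairing is simply the rule that marries each whole number to its square, and neither list ever runs out of partners:

\[
1 \leftrightarrow 1, \quad 2 \leftrightarrow 4, \quad 3 \leftrightarrow 9, \quad 4 \leftrightarrow 16, \quad \ldots, \quad n \leftrightarrow n^2, \quad \ldots
\]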

Cantor's great advance was recognizing that infinity came in more than one size; e.g., the set of real numbers was too large to map onto the set of whole numbers. The turning point was his discovery of the diagonal proof in 1891. Though his first proof was published as early as 1874, it was labored and attacked on a number of grounds. While the diagonal proof didn't silence his critics, it was so clearly presented that it forced mathematicians in general to think more deeply about the concept of self-reference (Dauben, 1979, pp. 165-167).
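The core of the diagonal proof is easy to convey in computational dress (a toy framing of my own, not Cantor's notation): however someone claims to have listed all the infinite decimal sequences, we can manufacture one the list must miss, simply by walking down the diagonal and changing every digit we meet.

    # A toy sketch of the diagonal argument. Each "real" is represented only by
    # the function giving its nth decimal digit; enumeration(n) is whatever
    # nth digit-function someone claims to have listed.

    def diagonal_escapee(enumeration):
        """Return a digit-function that differs from the nth listed one at digit n."""
        def escapee(n):
            d = enumeration(n)(n)          # the nth digit of the nth listed number
            return 5 if d != 5 else 6      # any digit other than d will do
        return escapee

    # A made-up enumeration, just to exercise the idea: the nth listed "real"
    # has digit (n + k) % 10 in position k.
    listed = lambda n: (lambda k: (n + k) % 10)
    x = diagonal_escapee(listed)
    print([x(k) for k in range(8)])        # differs from listed(k) at position k, for every k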

Paradoxes soon began to appear. Cantor showed that no infinite set could be put into a one-to-one mapping with its power set. (A power set is the set of all subsets of a set, i.e., all the possible combinations of its elements.) But what about the set of all sets? By definition, that had to be bigger than any other set. Yet wouldn't its power set be bigger still? Cantor was comfortable with this paradox, because he accepted the set of all sets as a symbol of the godhead (Dauben, 1979, p. 241).

Later, when first Gottlob Frege and then Bertrand Russell and Alfred North Whitehead shifted the ground from set theory to logic, the paradox reappeared in the form of the set of all sets which do not contain themselves. The paradoxical question there was whether that set contained itself. And, of course, there was no answer. Within mathematics, David Hilbert tried to evade the paradoxes of self-reference by developing mathematics as a purely axiomatic system from a finite number of axioms. Finally, Kurt Gödel took them all out of their misery by proving that any logical system which is at least complex enough to include arithmetic contains true statements that cannot be proved within the system. Gödel's proof involved an extremely clever self-referential mapping of meta-statements about arithmetic onto arithmetic statements, and arithmetic statements onto numbers (now called Gödel numbers).

Having conquered mathematics, the self-referential problem then moved into the physical world, with the Church-Turing Thesis (named after the separate but complementary work of mathematicians Alonzo Church and Alan Turing). Turing took the idea of solving a problem one step at a time to its logical conclusion, with the concept of an idealized machine (i.e., a Turing Machine) that carries out any clearly specified procedure one step at a time. The Church-Turing Thesis asserts that anything which can be computed at all can be computed by such a machine. Unfortunately, there is no way to evade Gödel's theorem; though such machines are possible, some computations might take an infinite amount of time! And it is impossible to predict in advance which problems are solvable and which will take an infinite amount of time.
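Turing's argument for that last claim is itself an exercise in self-reference, and it can be sketched in a few lines of present-day code (a hedged sketch in modern dress, not Turing's original machine tables; the names halts and contrary are mine). If a perfect halting-decider existed, the little program below could be written, and feeding it to itself yields a contradiction either way:

    # Suppose someone handed us a perfect decider halts(program, argument).
    # The program "contrary" below would then be possible, and it defeats the
    # decider on itself, so no such total, correct decider can exist.

    def halts(program, argument):
        """Hypothetical oracle: True if program(argument) eventually halts."""
        raise NotImplementedError("no such total, correct function can exist")

    def contrary(program):
        # Do the opposite of whatever the oracle predicts about program run on itself.
        if halts(program, program):
            while True:          # predicted to halt? then loop forever
                pass
        return "halted"          # predicted to loop? then halt at once

    # If contrary(contrary) halts, the oracle said it loops; if it loops,
    # the oracle said it halts. Either way the oracle is wrong.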

It wasn't long before Turing's idealized machine led, through the work of mathematicians John von Neumann and Norbert Wiener, to the development of the computer. And that, of course, eventually led to the discovery of chaos theory. All because of self-reference!

G. SPENCER-BROWN/FRANCISCO VARELA & SELF-REFERENCE

More recently yet, my sense of wonder has been reawakened by G. Spencer-Brown's Laws of Form (Spencer-Brown, 1972), a brilliant binary calculus that goes back to the late 1960s, but which I had dismissed out of my own ignorance until last year. Of interest here is that Spencer-Brown's work, which begins with nothing but the void and a distinction within the void, proceeds along seemingly ineluctable mathematical lines, until it eventually culminates in an explicitly defined self-reference. Francisco Varela has in turn extended Spencer-Brown's work from a two-valued logic to a three-valued one, in which self-reference joins the void and the distinction as the three primary entities that constitute all reality (Varela, 1979, chaps. 11-13, appendix b).
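For readers who want the flavor of Spencer-Brown's primary arithmetic, the whole calculus rests on two equations: a call made again is the same call (condensation), and a crossing recrossed is void (cancellation). Both fall out of a few lines of code in my own toy encoding, which is emphatically not Spencer-Brown's notation: a cross is written as a tuple of whatever it contains, and a space as a list of crosses.

    # A toy evaluator for the primary arithmetic. A space is marked if any cross
    # in it is marked; a cross is marked exactly when the space inside it is
    # unmarked (to cross from a marked space is to arrive at the unmarked one).

    def marked(space):
        return any(not marked(contents) for contents in space)

    MARK = ()                              # an empty cross: the marked state

    print(marked([MARK]))                  # the mark itself                     -> True
    print(marked([MARK, MARK]))            # condensation: mark beside mark      -> True (still just the mark)
    print(marked([(MARK,)]))               # cancellation: a mark within a mark  -> False (the void)
    print(marked([]))                      # the empty space, the void itself    -> False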

Varela was attempting to find the simplest possible way to symbolize a reality which explicitly includes self-reference, since self-reference, in his words, "is the nerve of the kind of dynamics we have been considering in living systems and autopoiesis" (Varela, 1979, pp. 106-107). And, of course, it's also "the nerve of the kind of dynamics" all of us are interested in. A deeper understanding of self-reference is also necessary to escape from logical conundrums of the sort that appeared when self-reference necessarily began to poke its head into science and mathematics in the late nineteenth and early twentieth centuries. Varela comments that:

it is, I suspect, only in a nineteenth-century social science that the abstraction of the dialectics of opposites could have been established. This also applies to the observer's properties.... There is mutual reflection between describer and description. But here again we have been used to taking these terms as opposites: observer/observed, subject/object as Hegelian pairs. From my point of view, these poles are not effectively opposed, but moments of a larger unity that sits on a metalevel with respect to both terms (Varela, 1979, p. 101).

Hegel's version of the "dialectics of opposites" was organic. First there was a thesis, which necessarily called into existence its antithesis. Out of the interplay between thesis and antithesis over time, ineluctably emerged a new synthesis of both. Then the cycle would repeat with the emergent synthesis as a new thesis, which created a new antithesis, and so on ad infinitum. The essentially nineteenth-century slant of the dialectic was its emphasis on organic evolution over time. After Darwin, time could never again be ignored in considering such issues. We'll return to the issue a little later. But note that effectively for Hegel, thesis and antithesis are related in a self-referential loop, from which eventually a new synthesis emerges. This is almost exactly like the iterative equations we are so used to, which produce first bifurcations, then chaos. It was simply a little too early in Hegel's time for the mathematics to emerge.

Let me just give one final extended quote from Varela on this issue of self-reference, in this case under the seemingly less fearful physical term of feedback. He comments that:

When [Norbert] Wiener brought the feedback idea to the foreground, not only did it become immediately recognized as a fundamental concept, but it also raised major philosophical questions as to the validity of the cause-effect doctrine.... The nature of feedback is that it gives a mechanism, which is independent of particular properties of components, for constituting a stable unit. And from this mechanism, the appearance of stability gives a rationale to the observed purposive behavior of systems and a possibility of understanding teleology.... Since Wiener, the analysis of various types of systems has borne this same generalization: Whenever a whole is identified, its interactions turn out to be circularly interconnected, and cannot be taken as linear cause-effect relationships if one is not to lose the system's characteristics (Varela, 1979, pp. 166-167).

There are several important realizations within that statement. "Feedback...gives a mechanism, which is independent of particular properties of components, for constituting a stable unit." Isn't that what Feigenbaum formalized for us? And consider the follow-up statement that "the appearance of stability gives a rationale to the observed purposive behavior of systems and a possibility of understanding teleology." In other words, cause-and-effect is perhaps an overly crude description of any reality that involves feedback. Feedback enables systems to preserve a personal integrity over time, despite a widely varying set of outer circumstances. Once that self-referential definition of a system is in place, the system is necessarily purposeful, and its evolution can be considered from a teleological as well as a causal viewpoint, since the definition of identity is more significant than the causal factors within which it functions.
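A toy model makes the point concrete (my own sketch, not Varela's formalism; the names and constants are simply illustrative). The loop below keeps feeding its own output back into itself, and it settles at the same set point no matter which component constants or outside disturbances we hand it. The stability belongs to the circular organization, not to the parts.

    # A minimal feedback loop: the system keeps comparing its own output with a
    # set point and accumulates a correction. At equilibrium the error must be
    # zero, whatever the particular components or the outside disturbance.

    def settle(leak, coupling, disturbance, setpoint=20.0, steps=2000, ki=0.05):
        x, u = 0.0, 0.0                    # state of the system, accumulated correction
        for _ in range(steps):
            u += ki * (setpoint - x)       # the loop re-enters its own output
            x = (1 - leak) * x + coupling * u + disturbance
        return x

    # three very different sets of components and circumstances, one settled identity
    for leak, coupling, disturbance in [(0.1, 1.0, 3.0), (0.3, 2.0, -5.0), (0.05, 0.5, 0.5)]:
        print(round(settle(leak, coupling, disturbance), 3))    # each prints 20.0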

So we find that whenever we attempt to describe sufficiently complex closed systems, self-reference is necessary in order to explain how those systems remain closed. On the other side of the coin, chaos theory emerges when sufficiently complex, self-referential open systems are considered. Self-reference is the common denominator that underlies both organic closure and change through the stages of chaos.

Therefore, it's easy to understand why first Spencer-Brown, then Varela, wanted to isolate what distinguished self-reference at its most basic. Though Spencer-Brown was dealing with one of the purest (perhaps the single purest) mathematical systems ever developed, its development led him inevitably to self-reference, and that led him to the question of the relationship between form and time. And, as you will recall in our earlier history of self-reference in mathematics, once Cantor had brought the issue of self-reference explicitly into mathematics, paradoxes appeared which Gödel eventually proved were unresolvable. When Church and Turing provided a physical model for Gödel's proof, time entered the picture. In the timeless world of mathematics, the ultimate end-point is a statement whose truth or falsity cannot be determined. In the physical world of the computer, the end point is a statement which cannot be solved within a finite amount of time. Where does that leave us? Is there any way out of this conundrum?

OUR CAPACITY TO ACCESS TIMELESS INFORMATION

Earlier, I said that my own interests lie primarily in attempting to reconcile the issues of self-reference presented by Cantor and Gödel with Jung's analytical psychology. Jung's concept of archetypes is widely misunderstood. Jung's own understanding of archetypes went through three stages. Initially he encountered what he then termed "primordial images" in the dreams and fantasies of his patients. In one famous example, a patient with grandiose religious fantasies told Jung that the sun had a penis hanging from it, whose swinging created the winds. Several years later, in a then recently published book, Jung discovered a description of a Mithraic ritual, which discussed the sun as a divinity with a long tube coming down from it, whose swinging created the winds. The patient's working-class education and background had been far removed from such exotic topics as Mithraic rituals. He had been hospitalized since his early manhood, long before this particular manuscript was ever discovered and translated. As a patient, he had absolutely no way of acquiring this rare, scholarly book. Jung himself, at the time of the original episode with the patient, had no detailed knowledge of mythology. The most likely hypothesis Jung could propose was that the patient somehow tapped a collective memory (Jung, 1967, par. 151).

Gradually, Jung came to separate the concept of an underlying archetype from the image with which it presented itself. The image might vary with the culture and background of the person experiencing the archetype. The underlying archetype was formless, and took form only through its manifestation either in the inner world of our dreams and fantasies, or in the outer world through projection (Jung, 1969, par. 70). Late in his life, he came to a third stage in understanding the nature of archetypes. He speculated that the simple counting numbers were the most basic archetypes of order. Further, archetypes were not only formless, but existed in a world in which the normal limits of time and space did not exist. He termed this the unus mundus, the unitary world which underlay both mind and matter (Jung, 1963, pars. 759-777). Hence archetypes might at base be mathematical abstractions much like those of quantum mechanics, their personifications being due only to the needs of the human mind.

If this sounds odd, isn't that what Einstein meant by a space-time continuum? Relativity leaves no room for the individual existence of either space or time. Quantum theory added its two cents to the argument with the discovery that at the sub-atomic level, particles have only a statistical existence. Like archetypes, particles can be seen as existing only in potentia, until actualized by an act of observation in the physical world. This representation of the world is probably best known through David Bohm's model of an implicate world out of which the explicate world we live in unfolds, and into which it again enfolds.

Closer to non-linear dynamical thought, Karl Pribram has advanced his holographic model of the human brain, which demonstrates how we are able to experience reality either in the causal mode we consider normal, or acausally, through the ability of the brain to spread experience across a wide spectrum in which time and space are collapsed. The brain is able to use Fourier transforms to move back and forth between the separate modes of experience. Pribram argues that underlying both the mental and physical world is structure. The ability of the brain to transform between the causal and acausal modes transcends the usual mind/brain dualism (Pribram, 1985, and elsewhere throughout his writings). In Jung's terms, the archetypes are pure dynamical structure, which can be stored spectrally across all reality, accessed by individual minds, then transformed into forms in which they can be experienced through image or behavior.
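The purely mathematical point behind that claim is easy to demonstrate (the sketch below uses numpy's FFT as a stand-in for whatever transform the brain might employ; it is not Pribram's neural model). A signal sharply localized in one domain is spread across every coefficient of the other, yet nothing is lost: the inverse transform recovers it exactly.

    # A single localized "event" becomes distributed across the whole spectrum,
    # and the inverse transform brings it back intact.

    import numpy as np

    signal = np.zeros(64)
    signal[10] = 1.0                           # one sharply localized event

    spectrum = np.fft.fft(signal)              # the same information, spread everywhere
    print(np.round(np.abs(spectrum[:6]), 3))   # every frequency carries an equal share (all 1.0)

    recovered = np.fft.ifft(spectrum).real
    print(np.allclose(recovered, signal))      # True: the event comes back exactly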

Now I realize that in the above I've laid out more problems than I have provided answers. All I've done is to record some of the history of ideas that for me revolve around each other when I begin thinking about the nature of self-reference. I don't have any final answers about any of the issues presented, but that seems fine to me. These are numinous areas which should first and foremost evoke a sense of awe. What I do hope I've accomplished is to show that there are still mysteries hidden within the areas we are so accustomed to dealing with in chaos theory. In this case, I've taken self-reference because that is dear to my heart, but it is hardly alone. Let's make use of the wonderful tools that non-linear dynamics has provided us with, but let's also keep our minds and hearts open to a sense of wonder.

REFERENCES

Bishop, Morris, The Exotics: Being a collection of unique personalities and remarkable characters, New York: American Heritage Press, 1969.

Crew, Henry and De Salvio, Alfonso (trans.), Galileo Galilei, Dialogues Concerning Two New Sciences, New York: Macmillan, 1914.

Dauben, Joseph Warren, Georg Cantor: His Mathematics and Philosophy of the Infinite, Princeton: Princeton University Press, 1979.

Feigenbaum, Mitchell J., "Quantitative Universality for a Class of Nonlinear Transformations," Journal of Statistical Physics, Vol. 19, No. 1, 1978, pp. 25-52.

Gardner, Martin, Mathematical Magic Show, New York: Vintage Books, 1978.

Jung, C. G., Collected Works, Vol. 5: Symbols of Transformation, 2nd edition, Princeton: Princeton University Press, 1967.

Jung, C. G., Collected Works, Vol. 9i: The Archetypes and the Collective Unconscious, Princeton: Princeton University Press, 1969.

Jung, C. G., Collected Works, Vol. 14: Mysterium Coniunctionis, Princeton: Princeton University Press, 1963.

Pribram, Karl, "The Cognitive Revolution and Mind/Brain Issues," American Psychologist, Summer 1985.

Robertson, Robin and Combs, Allan (Eds.), Chaos Theory in Psychology and the Life Sciences, Mahwah, NJ: Lawrence Erlbaum Associates, 1995.

Rucker, Rudy, Infinity and the Mind, New York: Bantam, 1983.

Shulman, Polly, "Infinity Plus One, and Other Surreal Numbers," Discover Magazine, December 1995.

Spencer-Brown, G., Laws of Form, New York: Julian Press, 1972.

Varela, Francisco J., Principles of Biological Autonomy, New York: North Holland, 1979.

Weschler, Lawrence, Mr. Wilson's Cabinet of Wonders, New York: Pantheon, 1995.
