The Hebrew Bible describes the condition of the earth before creation as תֹהוּ וָבֹהוּ (*tohu va-bohu*), “formless and void”. Both terms, *tohu* and *bohu*, convey formlessness and emptiness. It’s an interesting pair of ideas. And carrying these ideas beyond their original narrative setting, I’m intrigued by the thought that lack of form, or *structure*, could be understood also as a kind of emptiness, or nothingness.

With this episode I’d like to start what I intend to be a series of episodes about structure, looking at a philosophy of structure. Today I just want to introduce some general ideas and then explore particular examples of structure in more detail in later episodes, looking at structure in music, chemistry, biology, and other fields.

To introduce the subject I’d like to pull together ideas from different fields, ranging from highly technical and quantitatively rigorous to conceptual and qualitative. There are tools in physics and in information theory that can give very specific measures of certain kinds of structure. And those tools are also conceptually instructive. But I don’t think those measures exhaust everything we mean by or understand structure to be. So all of this will fall into a diverse toolbox of ways to think about structure and to approach the topic.

Going back to the Hebrew Bible and the primordial condition of *tohu va-bohu*, if we think of this condition as a lack of structure, the kind of emptiness or nothingness I imagine in the lack of structure is not absolute nothingness, whatever that might be. It’s nothingness of the sort where there isn’t anything very interesting. Even if there’s “stuff” there, there’s not really anything going on. Or even if there is stuff going on, like a whirling, chaotic mass, it still amounts to uniformity, with all the pieces just canceling each other out, adding up to not very much.

Part of the lack is an aesthetic lack, an absence of engaging content. There’s a great literary illustration of this idea in Cixin Liu’s novel *The Three-Body Problem*. This is from a scene where one of the characters, who is suffering from a mental illness, is meditating and trying to heal his troubled mind:

“In my mind, the first ‘emptiness’ I created was the infinity of space. There was nothing in it, not even light. But soon I knew that this empty universe could not make me feel peace. Instead, it filled me with a nameless anxiety, like a drowning man wanting to grab on to anything at hand. So I created a sphere in this infinite space for myself: not too big, though possessing mass. My mental state didn’t improve, however. The sphere floated in the middle of ‘emptiness’—in infinite space, anywhere could be the middle. The universe had nothing that could act on it, and it could act on nothing. It hung there, never moving, never changing, like a perfect interpretation for death. I created a second sphere whose mass was equal to the first one’s. Both had perfectly reflective surfaces. They reflected each other’s images, displaying the only existence in the universe other than itself. But the situation didn’t improve much. If the spheres had no initial movement—that is, if I didn’t push them at first—they would be quickly pulled together by their own gravitational attraction. Then the two spheres would stay together and hang there without moving, a symbol for death. If they did have initial movement and didn’t collide, then they would revolve around each other under the influence of gravity. No matter what the initial conditions, the revolutions would eventually stabilize and become unchanging: the dance of death. I then introduced a third sphere, and to my astonishment, the situation changed completely… This third sphere gave ‘emptiness’ life. The three spheres, given initial movements, went through complex, seemingly never-repeating movements. The descriptive equations rained down in a thunderstorm without end. Just like that, I fell asleep. The three spheres continued to dance in my dream, a patternless, never-repeating dance. Yet, in the depths of my mind, the dance did possess a rhythm; it was just that its period of repetition was infinitely long. 
This mesmerized me. I wanted to describe the whole period, or at least a part of it. The next day I kept on thinking about the three spheres dancing in ‘emptiness.’ My attention had never been so completely engaged.”

And that’s a very imaginative description of what’s known in physics and mathematics as the three-body problem, which, unlike the two-body problem, has no general closed-form solution; that is the reason for the unending, non-repeating motion. What I like about this story is the way the character responds to the increasing structure in his mental space. As structure increases he becomes increasingly engaged. I think this subjective response to structure will have to be an indispensable aspect of any philosophy of structure.

Another literary, or rather scriptural, example of this idea is in Latter-day Saint scripture in the Book of Mormon. A prophet named Lehi talks about how existence itself depends on the tension between opposites: “For it must needs be, that there is an opposition in all things. If not so… righteousness could not be brought to pass, neither wickedness, neither holiness nor misery, neither good nor bad. Wherefore, all things must needs be a compound in one; wherefore, if it should be one body it must needs remain as dead, having no life neither death, nor corruption nor incorruption, happiness nor misery, neither sense nor insensibility.” (2 Nephi 2:11)

There’s a similar idea here of “death” as with Cixin Liu’s character who finds only death in his static or repetitive mental structures.

Another idea that comes up in both the aesthetic and technical instances of structure is that of distinction. Lehi talks about opposition, setting one thing against another. We could say that the opposing entities endow each other with definition and identity. The Hebrew Bible also contrasts the formlessness and emptiness, the תֹהוּ וָבֹהוּ (*tohu va-bohu*), with separation. Elohim brings order to the earth by separating things; the Biblical verb for separation is בָּדל (*badal*). וַיַּבְדֵּ֣ל אֱלֹהִ֔ים בֵּ֥ין הָאֹ֖ור וּבֵ֥ין הַחֹֽשֶׁךְ (*va-yavdel elohim ben ha-or u-ben ha-choshek)*; “and God separated the light from the darkness” (Genesis 1:4). God separated the sky from the sea, the day from the night. Through separation what was a formless void came to have *structure*.

What are some examples of structure from a more technical side, scientifically and philosophically? Interestingly enough there are actually some concepts that overlap with these literary and scriptural ones, sharing notions of both *distinction* and *form*.

One way to think about a system is by using a **phase space**. A **phase space** is an abstract space in which all possible states of a system are represented, with each possible state corresponding to one unique point in the phase space. This is also called a **state space**. To get the general concept of an abstract space you can think of a graph with a horizontal axis and a vertical axis, each axis representing some property. The points on the graph represent different combinations of the two properties. That’s a kind of phase space.

To give a very simple example, consider a system of 2 particles in 1-dimensional space, which is just a line. The state space containing all possible arrangements of a system of *n* particles in 1-dimensional space will have *n* dimensions. Such a space dealing strictly with positions is also called a **configuration space**. So for our 2-particle system the configuration space will have 2 dimensions. We can represent this on a graph using a horizontal axis for one particle and a vertical axis for the second particle. Any point on the graph represents a single combination of positions for particles 1 and 2. That example is nice because it’s visualizable. When we expand to more than 3 dimensions we lose that visualizability but the basic principles still apply.

A classic example of a phase space is for a system of gas particles. Say we have a gas with *n* particles. These particles can have several arrangements. That’s putting it mildly. The collection of all possible arrangements makes up a configuration space of *3n* dimensions, 3 dimensions for every particle. Such a space could have billions upon billions of dimensions. This is not even remotely visualizable but the principles are the same as in our 2-dimensional configuration space above. A single point in this configuration space is one possible arrangement of all *n* particles. It’s like a snapshot of the system at a single instant in time. All the points in the configuration space comprise all possible arrangements of these *n* particles.

To get a more complete picture of the system a phase space will have 3 additional dimensions per particle for its momentum in the 3 spatial directions. So the phase space will have *6n* dimensions. A snapshot of the system, with the positions and momenta of every particle at one instant, will occupy a single point in the *6n*-dimensional phase space, and the entire *6n*-dimensional phase space will contain all possible combinations of positions and momenta for the system. The evolution of a system through successive states will trace out a path in its phase space. It’s just a mind-bogglingly enormous space. There’s no way we can actually imagine this in any sort of detail. But just the concept is useful.

The sum total of all possible states that a system can take constitutes a tremendous amount of information, but most states in a phase space aren’t especially interesting and I’d suggest that this is because they aren’t very structured. One useful affordance of phase spaces is that we can collect groups of states and categorize them according to criteria of interest to us. The complete information about a single state is called a **microstate**. A microstate is complete and has all the information about that system’s state. So for example, in the case of a system of gas particles the microstate gives the position and momentum of every particle in the system. But for practical purposes that’s *too much* information. To see if there’s anything interesting going on we need to look at the state of a system at a lower resolution, at its **macrostate**. The procedure of moving from the maximal information of microstates to the lower resolution of macrostates is called **coarse graining**. In coarse graining we divide the phase space up into large regions that contain groups of microstates that are macroscopically indistinguishable. We can represent this pictorially as a surface divided up into regions of different sizes. The states in a shared region are not microscopically identical. In the case of a system of particles, the states have different configurations of positions and momenta for the particles composing them. But the states in the shared region are *macroscopically* indistinguishable, meaning that they share some macroscopic property. Examples of such macroscopic properties for a gas are temperature, pressure, volume, and density.

The size of a macrostate is given by the number of microstates included in it. Macrostates of phase spaces can have very different sizes. Some macrostates are highly specific and occupy tiny regions of the phase space. Other macrostates are generic and occupy enormous regions. A smaller macrostate is one that fewer microstates could produce; it’s more distinctive. Larger macrostates are more generic. An example of an enormous macrostate region is **thermodynamic equilibrium**. Thermodynamic equilibrium is a condition in which the macroscopic properties of a system do not change with time. So, for example, macroscopic properties like temperature, pressure, volume, and density would not change in thermodynamic equilibrium. The reason the region of thermodynamic equilibrium is huge is that it contains a huge number of macroscopically indistinguishable microstates. What this means is that a condition of thermodynamic equilibrium can be realized in an enormous number of ways. In a gas, for instance, the particles can have an enormous number of different configurations of positions and momenta that make no difference to the macroscopic properties and that all manifest as a condition of thermodynamic equilibrium. The system will continue to move through different microstates with time, tracing out a curve in phase space. But because the macrostate of thermodynamic equilibrium is so huge the curve will remain in that region. The system is not going to naturally evolve from thermodynamic equilibrium to some more distinctive state. That is so statistically unlikely as to be, for practical purposes, a non-possibility.
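A toy illustration of how macrostate sizes can differ, sketched in Python (the setup is my own, not from the physics above): treat each sequence of 4 coin flips as a microstate, and coarse grain by counting heads. The “all heads” macrostate contains a single microstate, while the mixed macrostate is the largest.

```python
from itertools import product
from collections import Counter

# Microstates: all 2**4 = 16 possible sequences of heads/tails for 4 coins.
microstates = list(product("HT", repeat=4))

# Coarse graining: the macrostate is just the total number of heads.
macrostate_sizes = Counter(s.count("H") for s in microstates)

# The extreme macrostates (0 or 4 heads) are tiny; the mixed one is biggest.
print(sorted(macrostate_sizes.items()))  # [(0, 1), (1, 4), (2, 6), (3, 4), (4, 1)]
```

With more coins the disparity becomes astronomical, which is the statistical heart of why systems drift toward the big, generic macrostates.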

I think of the Biblical תֹהוּ וָבֹהוּ (*tohu va-bohu*) as a kind of thermodynamic equilibrium. It’s not necessarily that there’s nothing there. But in a sense, nothing is happening. Sure, individual particles may be moving around, but not in any concerted way that will produce anything macroscopically interesting.

This thermodynamic equilibrium is the state toward which systems naturally tend. The number of indistinguishable microstates in a macrostate, the size of a macrostate, is quantified as the property called **entropy**. Sometimes we talk about entropy informally as a measure of disorder. And that’s well enough. It also corresponds nicely, albeit inversely, to the notion of structure. More formally, the entropy of a macrostate is correlated (logarithmically) to the number of microstates corresponding to that macrostate. Using the intuition of the informal notion of disorder you might see how a highly structured macrostate would have fewer microstates corresponding to it. There aren’t as many ways to put the pieces together into a highly structured state as there are to put them into a disordered state.

Some notions related to structure, like meaning or function, are fairly informal. But in the case of entropy it’s actually perfectly quantifiable. And there are equations for it in physics. If the number of microstates for a given macrostate is *W*, then the entropy of that macrostate is proportional to the logarithm of *W*, the logarithm of the number of microstates. This is **Boltzmann’s equation for entropy**:

*S = k log W*

in which the constant *k* is the **Boltzmann constant**, 1.381 × 10^{−23} J/K and entropy has units of J/K. This equation for entropy holds when all microstates of a given macrostate are equally probable. But if this is not the case then we need another equation to account for the different probabilities. That equation is:

*S = −k ∑ p_{i} log p_{i}*

or equivalently,

*S = k ∑ p_{i} log (1/p_{i})*

where *p_{i}* is the probability of each microstate. This reduces to the first equation when the probabilities of all the microstates are equal and *p_{i} = 1/W*.
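As a quick numerical sketch (the microstate count here is invented for illustration), both formulas can be written in a few lines of Python, and the general form does reduce to *S = k log W* when all microstates are equally probable:

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_entropy(W):
    """S = k ln W, for W equally probable microstates."""
    return k * math.log(W)

def gibbs_entropy(probs):
    """S = -k sum(p_i ln p_i), for microstates with probabilities p_i."""
    return -k * sum(p * math.log(p) for p in probs if p > 0)

W = 1000                 # illustrative number of microstates
uniform = [1 / W] * W    # all equally probable: p_i = 1/W

# The two formulas agree in the uniform case.
print(boltzmann_entropy(W))   # ~9.54e-23 J/K
print(gibbs_entropy(uniform)) # same value
```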

How might *W* differ between macrostates? It’s fairly easy to imagine that a state of thermodynamic equilibrium would have a huge number of indistinguishable microstates. But what if the system has some very unusual macrostate? For example, say all the gas particles in a container were compressed into a tiny region of the available volume. This could still be configured in multiple ways, with many microstates, but far fewer than if the particles were distributed evenly throughout the entire volume. Under such constraints the particles have far fewer **degrees of freedom**, and the entropy of that unusual configuration would be much lower.
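A toy count, with made-up numbers, shows why confinement shrinks *W*: put *n* distinguishable particles into discrete position cells; each particle can sit in any cell, so the number of position configurations is (number of cells)^*n*.

```python
# Toy model (illustrative numbers, not real gas physics):
# n distinguishable particles, each in one of m discrete position cells.
n = 10          # particles
m_full = 1000   # cells available in the whole container
m_tiny = 100    # cells in one small corner of the container

W_full = m_full ** n   # microstates when particles roam the full volume
W_tiny = m_tiny ** n   # microstates when confined to the corner

# Confinement cuts the microstate count by a factor of 10**10 here,
# and so lowers the entropy S = k ln W.
print(W_full // W_tiny)  # 10000000000
```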

Let’s think about the different sizes of macrostates and the significance of those different sizes in another way, using an informal, less technical, literary example. One of my favorite short stories is *La biblioteca de Babel*, “The Library of Babel”, by Argentine author Jorge Luis Borges. Borges was a literary genius and philosophers love his stories. This is probably the story referred to most, and for good reason. In *La biblioteca de Babel* Borges portrays a universe composed of “an indefinite and perhaps infinite number of hexagonal galleries”. This universe is one vast library of cosmic extension. And the library contains *all possible books*. “Each book is of four hundred and ten pages; each page, of forty lines, each line, of some eighty letters… All the books, no matter how diverse they might be, are made up of the same elements: the space, the period, the comma, the twenty-two letters of the alphabet.” So there are bounds set to the states this library or any of its books can take. But this still permits tremendous variability. As an analogy with statistical mechanics we can think of the Library of Babel as a phase space and of each book as a microstate.
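A quick back-of-the-envelope count, using only Borges’s stated constraints (25 symbols per position: twenty-two letters, space, period, comma; 410 pages × 40 lines × 80 characters), gives a feel for how vast this “phase space” is:

```python
import math

symbols = 25                     # 22 letters plus space, period, comma
chars_per_book = 410 * 40 * 80   # pages x lines x characters per line

# Total distinct books: 25 ** 1_312_000. Far too large to print,
# so count its decimal digits instead via log10.
digits = math.floor(chars_per_book * math.log10(symbols)) + 1
print(digits)  # roughly 1.83 million digits
```

So the number of books is a number with around 1.83 million digits, which makes the analogy with the enormity of a thermodynamic phase space quite apt.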

Daniel Dennett has referred to the Library of Babel in his philosophy and proposed some of the books that, under the conditions set by Borges, *must* be understood to exist in this library: “Somewhere in the Library of Babel is a volume consisting entirely of blank pages, and another volume is all question marks, but the vast majority consist of typographical gibberish; no rules of spelling or grammar, to say nothing of sense, prohibit the inclusion of a volume… It is amusing to think about some of the volumes that must be in the Library of Babel somewhere. One of them is the best, most accurate 500-page biography of you, from the moment of your birth until the moment of your death. Locating it, however, would be all but impossible (that slippery word), since the Library also contains kazillions of volumes that are magnificently accurate biographies of you up till your tenth, twentieth, thirtieth, fortieth… birthday, and completely false about subsequent events… *Moby Dick* is in the Library of Babel, of course, but so are 100,000,000 mutant impostors that differ from the canonical *Moby Dick* by a single typographical error. That’s not yet a Vast number, but the total rises swiftly when we add the variants that differ by 2 or 10 or 1,000 typos.” (*Darwin’s Dangerous Idea*)

A key takeaway from this fantastical story is that only an infinitesimal portion of its volumes are even remotely meaningful to readers. The vast majority of the books are complete nonsense. The Library of Babel is a little easier for me to think about in certain ways than phase space. For many things I’m not sure how to generate a phase space by picking out specific properties and assigning them axes onto which individual states would project with numerical coordinates. A lot of things don’t easily lend themselves to that kind of technical breakdown. But thinking of microstates and macrostates more informally, let’s just take the macrostate of all the books in the Library of Babel that are completely meaningless. This would be a huge macrostate comprising the vast majority of the library, the vast majority of its books, i.e. microstates. As with thermodynamic equilibrium this is the most likely macrostate to be in. And the evolution of the system, moving from one book to the next, will more than likely never leave it, i.e. will never find a book with any meaningful text.

But the Library of Babel does contain meaningful texts. And we could coarse grain in such a way to assign books to different macrostates based on the amount of meaningful text they contain. After the macrostate containing books of complete nonsense the next largest macrostate will contain books with a few meaningful words. The macrostates for books with more and more meaningful words get successively smaller. And even smaller macrostates when those words are put into meaningful sentences and then paragraphs. The smallest macrostates will have entire books of completely meaningful text. But as any book browser knows, books vary in quality. Even among books of completely meaningful text some will be about as interesting as an online political flame war. The macrostates of literary classics and of interesting nonfiction will be comparatively miniscule indeed.

We can think of a book in the Library of Babel as a kind of message. And this starts us thinking in terms of another technical field that I think is relevant to a philosophy of structure. And that is **information theory**. Information theory has some interesting parallels to statistical mechanics. And it even makes use of a concept of entropy that is very similar to the thermodynamic concept of entropy. In information theory this is sometimes called **Shannon entropy**, named after the great mathematician Claude Shannon. The Shannon entropy of a random variable is the *average* level of “information”, “surprise”, or “uncertainty” inherent in the variable’s possible outcomes. It’s calculated in a very similar way to thermodynamic entropy. If Shannon entropy is *H* and a discrete random variable *X* has possible outcomes *x_{1},…,x_{n}*, with probabilities *P(x_{1}),…,P(x_{n})*, then the entropy is calculated by the equation:

*H(X) = −∑ P(x_{i}) log_{b} P(x_{i})*

That equation should look very familiar because it’s identical in form to the equation for thermodynamic entropy, in the case where the microstates have different probabilities. The base of the logarithm is *b*, often base 2, with the resulting entropy being given in units of **bits**. A bit is a basic unit of data that represents a logical state with one of two possible values. So for example, whether a value is 0 or 1 is a single bit of information. Another example is whether a coin toss result is heads or tails.
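A minimal sketch of the formula in Python (the coin distributions are my own illustrative choices): a fair coin yields exactly 1 bit of entropy per toss, while a biased coin, being more predictable, yields less.

```python
import math

def shannon_entropy(probs, base=2):
    """H(X) = -sum(P(x_i) * log_b P(x_i)); in bits when base=2."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

fair_coin = [0.5, 0.5]
biased_coin = [0.9, 0.1]   # heads 90% of the time

print(shannon_entropy(fair_coin))    # 1.0 bit: maximal uncertainty
print(shannon_entropy(biased_coin))  # ~0.469 bits: less surprise on average
```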

An equation of this form gives an *average* quantity. The value *−log p_{i}*, or equivalently *log (1/p_{i})*, is the **surprise** for a single outcome, and so has a higher value when its probability is lower, which makes sense: more improbable outcomes should be more surprising. When the surprise values for all the outcomes, multiplied by their respective probabilities, are summed together, i.e. the average surprise, the total is the Shannon entropy. This average quantity can also be used to calculate the total information a message contains: the entropy per symbol, multiplied by the length of the message, gives the total information content.

This is an extremely useful way to quantify information and this is just a taste of the power of information theory. Even so I don’t think it exhausts our notions of what information is, or can be. Daniel Dennett makes a distinction between **Shannon information** and **semantic information** (*From Bacteria to Bach and Back*). Shannon information is the kind of information studied in information theory. To explore this distinction let’s return to Borges’s library.

One thing I like about *La biblioteca de Babel* is the way it conveys the intense human reaction to semantic meaning, or the incredible lack of it in the case of the library’s inhabitants. The poor souls of the Library of Babel are starving for meaning and tortured by the utter senselessness of their universe, the meaningless strings of letters and symbols they find in book after book, shelf after shelf, floor after floor. There’s a brilliant literary expansion of Borges’s library in the novella *A Short Stay in Hell* by Steven L. Peck, in which one version of Hell itself actually is the Library of Babel, with the single horrific difference that its inhabitants can never die.

In terms of Shannon information most books on the shelves of the Library of Babel contain a lot of information. But almost all the books contain no semantic information whatsoever. This is an evaluation we are only able to make as meaning-seeking creatures. Information theory doesn’t need to make distinctions about semantic meaning; it is able to accomplish its scope of work without them. But when we’re thinking about structures, with meaning and functions, in the way I’m trying to, we need that extra level of evaluation that is, at least for now, only available to humans.

That’s not to say there’s an absolute, rigid split between the objective and subjective. Information theory makes use of the subjective phenomena of human perceptions like sight and sound. This is critical to perceptual coding. We’re all beneficiaries of perceptual coding when we use jpeg images, mp3 audio files, and mp4 video files. These are all file types that compress data, with loss, by disposing of information that is determined to be imperceptible to human senses.

That’s getting a little closer to semantic information. Semantic information doesn’t have to be transmitted word for word. Someone can get the “gist” of a message and convey it with quite high fidelity, in a certain sense anyway. The game of telephone notwithstanding, we can share stories with each other without recording and replaying precise scripts of characters or sound wave patterns. We can recreate the stories in our own words at the moment of the telling.

That’s not to say that structure has to be about perception. Something like a musical composition or narrative has a lot to do with perception and aesthetic receptivity. But even compositions and narratives can contain structure that few people or even no people pick up on. And there are also structures in nature and in mathematics that remain hidden from human knowledge until they are discovered.

I think there are some affinities between what I will informally call the *degree of structure* in the hidden and discovered structures in nature and mathematics and the degree to which the outputs of those structures can be compressed. **Data compression** is the process of encoding information using fewer bits than the original representation. How is that possible? Data compression programs exploit *regularities* and *patterns* in the data and produce code to represent the data more efficiently. Such a program creates a new file using a new, shorter code, along with a dictionary for the code so that the original data can be restored. It’s these regularities and patterns that I see being characteristic of structure. This can be quantified in terms of Kolmogorov complexity.

The **Kolmogorov complexity** of an object is the length of the shortest computer program that produces the object as output. For an object with no structure that is completely random the only way for a computer program to produce the object as an output is just to reproduce it in its entirety. Because there are no regularities or patterns to exploit. But for a highly structured object the computer program to produce it can be much shorter. This is especially true if the output is the product of some equation or algorithm.
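Kolmogorov complexity itself is not computable in general, but an off-the-shelf compressor gives a rough practical proxy for the idea. A sketch using Python’s standard `zlib` module: highly patterned data compresses to a tiny fraction of its size, while random data barely compresses at all.

```python
import os
import zlib

size = 100_000
structured = b"ABAB" * (size // 4)   # highly regular: one repeating pattern
random_data = os.urandom(size)       # no regularities to exploit

c_structured = zlib.compress(structured)
c_random = zlib.compress(random_data)

# The patterned data shrinks dramatically; the random data stays
# close to (or even slightly above) its original size.
print(len(c_structured))  # a few hundred bytes
print(len(c_random))      # close to 100,000 bytes
```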

For example an image of part of the Mandelbrot set fractal might take 1.61 million bytes to store the 24-bit color of each pixel. But the Mandelbrot set is also the output of a simple function that is actually fairly easy to program. It’s not necessary to reproduce the 24-bit color of each pixel. Instead you can just encode the function and the program will produce the exact output. The Mandelbrot set is a good example for illustration because the fractal it produces is very elegant. But the same kind of process will work with any kind of function. Usually the program for a function will be much shorter than the data set of its output.
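To show how short that program can be, here is a minimal membership test for the Mandelbrot set (the iteration cap is an arbitrary choice of mine; points that never escape within the cap are treated as members). A few lines of code stand in for megabytes of pixel data.

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0; c is in the set if |z| stays bounded."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:       # escaped: c is definitely outside the set
            return False
    return True              # never escaped within the iteration bound

print(in_mandelbrot(0 + 0j))  # True: the origin is in the set
print(in_mandelbrot(1 + 1j))  # False: escapes after a couple of iterations
```

Rendering an image is then just a matter of running this test over a grid of points, which is why the function, not the pixels, is the economical thing to store.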

Often scientific discovery is a matter of finding natural structures by working backward from the outputs to infer the functions that produce them. This is the project of trying to discover the *laws of nature*. Laws are the regularities and patterns at work in nature. The process can be tricky because there are often many confounding factors and variables are rarely isolated. But sorting through all that is part of the scientific process. As a historical example, Johannes Kepler had in his possession a huge collection of astronomical data that had been compiled over decades. Much of it he had inherited from his mentor Tycho Brahe. What Kepler was ultimately able to do was figure out that the paths traced out by the recorded positions of the planets in space were *ellipses*. The equation for an ellipse is fairly simple. Knowing that underlying regularity makes it possible not only to reproduce Brahe and Kepler’s original data sets, but also to retrodict and predict the positions of planets outside those data sets, because we have the governing equations.
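A sketch of the Kepler point (the axis lengths here are invented for illustration, not real orbital data): once you know the path is an ellipse, two parameters plus one short formula regenerate any number of “observations”, so the program is vastly smaller than the data set it reproduces.

```python
import math

# Hypothetical semi-major and semi-minor axes (illustrative only).
a, b = 5.0, 3.0

# Regenerate a "data set" of observed positions from the parametric form
# x = a cos(t), y = b sin(t).
points = [(a * math.cos(t / 100), b * math.sin(t / 100)) for t in range(628)]

# Every generated point satisfies the ellipse equation x^2/a^2 + y^2/b^2 = 1,
# so the short formula compresses the entire table of positions.
assert all(abs(x**2 / a**2 + y**2 / b**2 - 1) < 1e-9 for x, y in points)
print(len(points))  # 628
```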

That kind of pattern-finding often works well in discerning natural structures. It’s less relevant to human structures where creativity, novelty, and unpredictability can actually be features of greater aesthetic structure. It’s for reasons like this that my approach to a philosophy of structure is highly varied and somewhat unsystematic, pulling pieces together from several places.

Structure seems especially important in the arts and a philosophy of structure in the arts will necessarily overlap with the study of aesthetics. It’s really creative, artistic structures that I find most interesting of all.

Dieter F. Uchtdorf talked about human creativity in a way that I think touches on the key aesthetic features of structure. He said: “The desire to create is one of the deepest yearnings of the human soul… We each have an inherent wish to create something *that did not exist before*… Creation brings deep satisfaction and fulfillment. We develop ourselves and others when we *take unorganized matter* into our hands and mold it into something of beauty.” (italics added) A number of important ideas here. I’ll focus on two: (1) that creation is bringing into existence something that did not exist before and (2) that creation is a process of taking unorganized matter and molding it into something of beauty. This coheres with the idea I proposed earlier of the Hebrew creation story, that the lack of form, or structure, in the primordial chaos could be understood also as a kind of emptiness, or nothingness. By imposing a new structure onto raw, unorganized materials it’s possible to bring into existence something that did not exist before.

This is similar to Aristotle’s idea of a **formal cause**. In Aristotle’s metaphysics he identified four kinds of causes: material, formal, efficient, and final. We’ll just look at the first two here. The material cause is the raw material that composes whatever is being brought about. If we want to understand how a wooden table is created the material cause is the wood used to make it. That’s the unorganized matter. The formal cause is the form, arrangement, shape, or *structure*, into which this material is fashioned. Clearly the formal cause is just as important to bringing the object about.

The way we evaluate structure and its aesthetic virtues, its beauty, is a complex subject. Are aesthetic criteria objective or subjective? The aesthetic response is certainly a subjective process. But is the subjective response a consistent and law-like process that correlates to objective features? It’s difficult to say.

David Bentley Hart said of aesthetics: “The very nature of aesthetic enjoyment resists conversion into any kind of calculable economy of personal or special benefits. We cannot even isolate beauty as an object among other objects, or even as a clearly definable property; it transcends every finite description. There have, admittedly, been attempts in various times and places to establish the ‘rules’ that determine whether something is beautiful, but never with very respectable results… Yes, we take pleasure in color, integrity, harmony, radiance, and so on; and yet, as anyone who troubles to consult his or her experience of the world knows, we also frequently find ourselves stirred and moved and delighted by objects whose visible appearances or tones or other qualities violate all of these canons of aesthetic value, and that somehow ‘shine’ with a fuller beauty as a result. Conversely, many objects that possess all these ideal features often bore us, or even appall us, with their banality. At times, the obscure enchants us and the lucid leaves us untouched; plangent dissonances can awaken our imaginations far more delightfully than simple harmonies that quickly become insipid; a face almost wholly devoid of conventionally pleasing features can seem unutterably beautiful to us in its very disproportion, while the most exquisite profile can do no more than charm us… Whatever the beautiful is, it is not simply harmony or symmetry, or consonance or ordonnance or brightness, all of which can become anodyne or vacuous of themselves; the beautiful can be encountered—sometimes shatteringly—precisely where all of these things are deficient or largely absent. Beauty is something other than the visible or audible or conceptual agreement of parts, and the experience of beauty can never be wholly reduced to any set of material constituents. It is something mysterious, prodigal, often unanticipated, even capricious.” (*The Experience of God*)

These are good points. Aesthetic judgment is difficult to systematize. And I can’t say I know of any theory that successfully defines precise evaluative procedures from objective criteria. But neither is that to say that aesthetic judgment is arbitrary. There are easy cases where there is near universal agreement that artistic creations are of high or low quality. And there are also harder cases where appreciation for high quality art requires refined tastes, refined through training and initiation into an artistic practice. Even the best critics are not able to fully articulate their reasons for making the judgments they do. And they may have imprecise vocabulary that is incomprehensible to those outside the practice. Sommeliers and wine tasters, for example, have a vocabulary for their craft that goes completely over my head (and taste buds). But I don’t doubt that the vocabulary is meaningful to them. I believe all these artforms have structures to which we can refer, if only imprecisely.

Having looked briefly in this episode at some general ideas pertaining to structure, what I want to do in following episodes for the series is look closely at examples of structure in more detail, focusing on individual fields, one at a time. Like music, chemistry, biology, language, social and political organizations, and mathematics. I expect that the characteristics of structure in these different cases will be varied. But I hope that as the coverage gets more comprehensive it will give more opportunity for insight into the general nature of structure. I hope through some inductive and abductive reasoning to infer general patterns of structure across these various domains, to understand a general *structure of structure*.