Philosophy of Structure, Part 3: Chemistry

Part 3 in this series on the philosophy of structure looks at examples and general principles of structure in chemistry. Subjects covered include quantum mechanics, the Schrödinger equation, wave functions, orbitals, molecules, functional groups, and the multiple levels of structure in proteins. General principles discussed include the nature of functions, the embedding of lower-level structures into higher-level structures, and the repeated use of a limited number of lower-level structures in the formation of higher-level structures.

In this third episode on the philosophy of structure I’d like to look at examples of structure in the field of chemistry. I’d like to see how some of the general principles of structure discussed in previous episodes apply to chemistry and to see what new general principles we can pick up from the examples of chemical structures. I’ll proceed along different scales, from the smallest and conceptually most fundamental components of chemical structure up to the larger, multicomponent chemical structures. For basic principles I’ll start with quantum mechanics, the Schrödinger equation, and its wave function solutions, which constitute atomic and molecular orbitals. From there I’ll look at different functional groups that occur repeatedly in molecules. And lastly I’ll look at the multiple levels of structure of proteins, the embedding of chemical structures, and the use of repeatable units in the formation of multicomponent chemical structures.

One aspect from previous discussions that won’t really show up in chemistry is an aesthetic dimension of structure. That’s not to say that chemical structures lack beauty. I find them quite beautiful and the study and application of chemical structures has actually been the primary subject of my academic and professional work. In other words, I probably find chemistry more aesthetically satisfying than most people commonly would. But what I’m coming to think of as the philosophical problem of systematizing the aesthetic dimension of structure, in fields like music, art, and literature, isn’t so directly applicable here. I’ll get back to that problem in future episodes. The aesthetic dimension is not so intrinsic to the nature of the subject in the case of chemistry.

So let’s start getting into chemical structures by looking at the smallest and most conceptually fundamental scale.

Matter Waves

There is, interestingly enough, an intriguing point of commonality between music and chemistry at the most fundamental level; and that is in the importance of waveforms. Recall that the fundamental building block of a musical composition is a sound wave, a propagation of variations in local pressure in which parts of the air are compressed and parts of the air are rarefied. Sound waves are governed by the wave equation, a second order partial differential equation, and its solutions, in which multiple terms are added together in a superposition, with each individual term in that summation representing a particular harmonic or overtone. There are going to be a lot of similarities to this in the basic building up of chemical structures.

One of the key insights and discoveries of twentieth century science was that matter also takes the form of waves. This is foundational to quantum mechanics and it is known as the de Broglie hypothesis. This was a surprising and strange realization but it goes a long way in explaining much of what we see in chemistry. Because a particle is a wave it also has a wavelength. 

Recall that in acoustics, with mechanical waves propagating through a medium, wavelength is related to the frequency and speed of the wave’s propagation. That relation is:

λ = v/f

Where λ is the wavelength, f is frequency, and v is the wave propagation velocity. With this kind of mechanical wave the wave is not a material “thing” but a process, a disturbance, occurring in a material medium.

But with a matter wave the wave is the matter itself. And the wavelength of the matter wave is related to the particle’s momentum, a decidedly material property. A particle’s wavelength is inversely proportional to its momentum. This relation is stated in the de Broglie equation:

λ = h/p

In which λ is the wavelength, h is a constant called Planck’s constant (6.626×10⁻³⁴ J·s), and p is momentum. Momentum is the product of mass and velocity:

p = mv

Because the wavelength of a matter wave is inversely proportional to momentum the wavelength for the matter waves of macroscopic particles, the kinds of objects we see and interact with in our normal experience, is going to be very, very short, so as to be completely negligible. But for subatomic particles their wavelengths are going to be comparable to the scale of the atom itself, which will make their wave nature very significant to their behavior.
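
To make this difference in scale concrete, here’s a quick numerical sketch in Python (the baseball figures are just illustrative assumptions):

```python
# de Broglie wavelength: lambda = h / (m * v)
h = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, velocity_m_s):
    """Return the de Broglie wavelength in meters."""
    return h / (mass_kg * velocity_m_s)

# An electron moving at about 1% of the speed of light
electron_mass = 9.109e-31  # kg
print(de_broglie_wavelength(electron_mass, 3.0e6))  # ~2.4e-10 m, atomic scale

# A 0.145 kg baseball thrown at 40 m/s (illustrative macroscopic object)
print(de_broglie_wavelength(0.145, 40.0))  # ~1.1e-34 m, utterly negligible
```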

One interesting consequence of the wave nature of matter is that the precision of simultaneous values for momentum and position of a matter wave is limited. This is known as the Uncertainty Principle. There’s actually a similar limit to the precise specification of both wavelength and position for waves in general, i.e. for any and all waves. But because wavelength is related to momentum in matter waves this limitation gets applied to momentum as well.

Recall that with sound waves a musical pitch can be a superposition of multiple frequencies or wavelengths. This superposition is expressed by the multiple terms in a Fourier Series. Any function can be approximated using a Fourier Series, expressed in terms of added sinusoidal (oscillating) waves. A function that is already sinusoidal can be matched quite easily. The Fourier Series can converge on more complicated functions as well but they will require more terms (that’s important). In the case of musical pitches the resulting functions were periodic waves that repeated endlessly. But a Fourier Series can also describe pulses that are localized to specific regions. The catch is that more localized pulses, confined to tighter regions, require progressively more terms in the series, which means a higher number of wavelengths.

Bringing this back to matter waves, these same principles apply. Under the de Broglie formula wavelength is related to momentum. A pure sine wave that repeats endlessly has only one wavelength. But it also covers an infinite region. As a matter wave this would be a perfect specification of momentum with no specification of position. A highly localized pulse is confined to a small region but requires multiple terms and wavelengths in its Fourier Series. So its position is highly precise but its momentum is much less precise.
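
Here’s a rough numerical sketch of this trade-off (arbitrary units, purely illustrative): superposing sinusoids over a wider band of wavelengths produces a pulse confined to a tighter region.

```python
import numpy as np

# Superpose many sinusoids around a central wavenumber k0. Widening the
# band of wavenumbers (i.e. drawing on more distinct wavelengths) yields
# a pulse confined to a tighter region of space.
x = np.linspace(-100, 100, 4001)
k0 = 1.0

for spread in (0.05, 0.2, 0.8):
    ks = np.linspace(k0 - spread, k0 + spread, 201)
    pulse = np.sum([np.cos(k * x) for k in ks], axis=0) / len(ks)
    envelope = np.abs(pulse)
    width = np.ptp(x[envelope > 0.5])  # extent of the region above half max
    print(f"wavenumber spread {spread:.2f} -> pulse width ~ {width:.1f}")
```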

The limit of the simultaneous specification of momentum and position for matter waves is given by the equation:

σₓσₚ ≥ h/(4π)

Where σₓ is the standard deviation of position, σₚ is the standard deviation of momentum, and h is Planck’s constant. The product of these two standard deviations has a lower limit. At this lower limit it’s only possible to decrease the standard deviation of one by increasing the standard deviation of the other. And this is a consequence of the wave nature of matter.
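
We can check this limit numerically (a sketch in units where ħ = 1, using the fact that a Gaussian wave packet is the minimum-uncertainty case): compute the position spread from |ψ|² directly and the momentum spread from the Fourier transform, and the product comes out to ħ/2, i.e. h/(4π).

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

# A Gaussian wave packet: the minimum-uncertainty case
sigma = 2.0
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalize on the grid

# Position spread from the probability density |psi|^2
prob_x = np.abs(psi) ** 2 * dx
sigma_x = np.sqrt(np.sum(prob_x * x**2) - np.sum(prob_x * x) ** 2)

# Momentum spread from the Fourier transform (p = hbar * k)
psi_k = np.fft.fftshift(np.fft.fft(psi))
k = np.fft.fftshift(np.fft.fftfreq(N, dx)) * 2 * np.pi
prob_k = np.abs(psi_k) ** 2
prob_k /= prob_k.sum()
sigma_p = hbar * np.sqrt(np.sum(prob_k * k**2) - np.sum(prob_k * k) ** 2)

print(sigma_x * sigma_p)  # ~0.5 = hbar/2, the lower limit h/(4*pi)
```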

The most important application of these wave properties and quantum mechanical principles in chemistry is with the electron. Protons and neutrons are also important particles in chemistry, and significantly more massive than electrons. But it’s with the electrons that most of the action happens. Changes to protons and neutrons are the subject of nuclear chemistry, which is interesting but not something we’ll get into this time around. In non-nuclear chemical reactions it’s the electrons that are being arranged into the various formations that make up chemical structures. The behavior of an electron is described by a wave function, and the wave function is governed by the Schrödinger equation.

The Schrödinger equation is quite similar to the classical wave equation that governs sound waves. Recall that the classical wave equation is:

d²u/dx² = (1/v²) * d²u/dt²

Where u is the wave displacement from the mean value, x is distance, t is time, and v is velocity. A solution to this equation can be found using a method of separation of variables. The solution u(x,t) can be written as the product of a function of x and a sinusoidal function of time. We can write this solution as:

u(x,t) = ψ(x) * cos (2πft)

Where f is the frequency of the wave in cycles per unit time and ψ(x) is the spatial factor of the amplitude of u(x,t), the spatial amplitude of the wave. Substituting ψ(x) * cos (2πft) into the differential wave equation gives the following equation for the spatial amplitude ψ(x).

d²ψ/dx² + (4π²f²/v²) * ψ(x) = 0

And since frequency multiplied by wavelength is equal to velocity (fλ = v) we can rewrite this in terms of wavelength, λ:

d²ψ/dx² + (4π²/λ²) * ψ(x) = 0
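
We can sanity-check this spatial equation symbolically (a small sketch using the sympy library): a sine wave with wavelength λ satisfies it exactly.

```python
import sympy as sp

x, lam = sp.symbols('x lambda', positive=True)
psi = sp.sin(2 * sp.pi * x / lam)  # spatial amplitude with wavelength lambda

# d²(psi)/dx² + (4π²/λ²) * psi should reduce to zero
residual = sp.diff(psi, x, 2) + (4 * sp.pi**2 / lam**2) * psi
print(sp.simplify(residual))  # 0
```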

So far this is just applicable to waves generally. But where things get especially interesting is the application to matter waves, particularly to electrons. Recall from the de Broglie formula that:

λ = h/p

In which h is a constant called Planck’s constant (6.626×10⁻³⁴ J·s) and p is momentum. We can express the total energy of a particle in terms of momentum by the equation:

E = p²/2m + V(x)

Where E is total energy, m is mass, and V(x) is potential energy as a function of distance. Using this equation we can also express momentum in these terms:

p = {2m[E – V(x)]}^(1/2)

And since,

λ = h/p

The differential equation becomes

d²ψ/dx² + (2m/ħ²) * [E – V(x)] ψ(x) = 0

Where

ħ = h/(2π)

This can also be written as

-ħ²/2m * d²ψ/dx² + V(x) ψ(x) = E ψ(x)

This is the Schrödinger equation. Specifically, it’s the time-independent Schrödinger equation. So what do we have here? The relationship here is similar to that between the classical wave equation (a differential equation) and its solution u(x,t), which characterizes a mechanical wave. The Schrödinger equation is also a differential equation, and its solution, ψ(x), is a wave function that characterizes a matter wave. It describes a particle of mass m moving in a potential field described by V(x). Of special interest to chemistry is the description of an electron moving in the potential field around an atomic nucleus.

Let’s rewrite the Schrödinger equation using a special expression called an operator. An operator is a symbol that tells you to do something to whatever follows the symbol. The operator we’ll use here is called a Hamiltonian operator, which has the form:

H = -ħ²/2m * d²/dx² + V(x)

Where H is the Hamiltonian operator. It corresponds to the total energy of a system, including terms for both the kinetic and potential energy. We can express the Schrödinger equation much more concisely in terms of the Hamiltonian operator, in the following form:

H ψ(x) = E ψ(x)

There are some special advantages to expressing the Schrödinger equation in this form. One is that this takes the form of what is called an eigenvalue problem. An eigenvalue problem is one in which an operator is applied to an eigenfunction and the result returns the same eigenfunction, multiplied by some constant called the eigenvalue. In this case the operator is the Hamiltonian, H. The eigenfunction is the wave function, ψ(x). And the eigenvalue is the observable energy, E. These are all useful pieces of information to have that relate to each other very nicely, when expressed in this form.
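
To make the eigenvalue-problem framing concrete, here’s a minimal numerical sketch (a particle in a box rather than an atom, with ħ = m = 1, using a finite-difference grid): diagonalizing a discretized Hamiltonian returns exactly the kind of eigenfunction and eigenvalue pairs that H ψ(x) = E ψ(x) describes.

```python
import numpy as np

# Particle in a box of length L, in units where hbar = m = 1.
hbar, m, L, N = 1.0, 1.0, 1.0, 800
x = np.linspace(0, L, N + 2)[1:-1]  # interior grid points
dx = x[1] - x[0]

# Discretized Hamiltonian: H = -(hbar^2 / 2m) d^2/dx^2 + V(x), with V = 0
# inside the box. The second derivative becomes a tridiagonal matrix.
d2 = (np.diag(np.full(N, -2.0)) +
      np.diag(np.ones(N - 1), 1) +
      np.diag(np.ones(N - 1), -1)) / dx**2
H = -(hbar**2 / (2 * m)) * d2

# Solving H psi = E psi is an eigenvalue problem.
energies, states = np.linalg.eigh(H)

# Compare with the exact energies E_n = n^2 pi^2 hbar^2 / (2 m L^2)
for n in (1, 2, 3):
    exact = n**2 * np.pi**2 * hbar**2 / (2 * m * L**2)
    print(f"n={n}: numeric {energies[n - 1]:.4f}, exact {exact:.4f}")
```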

Orbitals

In chemistry the wave functions of electrons in atoms and molecules are called atomic or molecular orbitals. And these are also found using the Schrödinger equation; they are solutions to the Schrödinger equation. The inputs to these wave functions are coordinates for points in space. The output from these wave functions, ψ, is some value, whose meaning is a matter of interpretation. The prevailing interpretation is the Born Rule, which gives a probabilistic interpretation. Under the Born Rule the value of ψ is a probability amplitude and the square modulus of the probability amplitude, |ψ|², is called a probability density. The probability density defines for each point in space the probability of finding an electron at that point, if measured. So it has a kind of conditional, operational definition. More particularly, we could say, reducing the space to a single dimension x, that |ψ(x)|² dx gives the probability of finding the electron between x and x + dx. Going back to 3 dimensions, the wave function assigns a probability amplitude value, ψ, and a probability density value, |ψ|², to each point in space. Informally, we might think of the regions of an orbital with the highest probability density as the regions where an electron “spends most of its time”.

The Schrödinger equation can be solved exactly for the hydrogen atom, yielding exact electron wave functions. For other atoms and molecules it cannot be solved analytically, but solutions can be approximated to high precision using methods like the variational method and perturbation theory. And again, we call these wave functions orbitals. I won’t get into the specifics of the methods for finding the exact solutions for the hydrogen atom but I’ll make some general comments. For an atom the Cartesian (x,y,z) coordinates for the three dimensions of space aren’t so convenient so we convert everything to spherical coordinates (r,θ,φ) in which r is a radial distance and θ and φ are angles with respect to Cartesian axes. The potential term, V(r), in the Hamiltonian operator is defined by the Coulomb attraction between the proton and the electron. And the mass of the electron also gets plugged into the Hamiltonian. Solving for the wave function makes use of various mathematical tools like spherical harmonics and radial wave functions. Radial wave functions in turn make use of Laguerre polynomials. The solutions for the hydrogen atom are then expressed in terms of spherical harmonic functions and radial wave functions, with the overall wave function being a function of the variables (r,θ,φ).
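
As a small illustration of what these exact hydrogen solutions look like (a sketch in units of the Bohr radius a₀): the 1s wave function is proportional to e^(–r/a₀), and its radial probability density peaks exactly at r = a₀.

```python
import numpy as np

a0 = 1.0  # Bohr radius, in atomic units
r = np.linspace(1e-6, 10 * a0, 100_000)

# Hydrogen 1s wave function: psi = exp(-r / a0) / sqrt(pi * a0^3)
psi_1s = np.exp(-r / a0) / np.sqrt(np.pi * a0**3)

# Radial probability density: the probability of finding the electron in
# a thin spherical shell at radius r, regardless of direction.
radial_density = 4 * np.pi * r**2 * np.abs(psi_1s) ** 2

print(r[np.argmax(radial_density)])  # ~1.0: the peak sits at r = a0
```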

Because the orbitals are functions of r, θ, and φ they can be difficult to visualize and represent. But partial representations can still give an idea of their structure. An orbital is often represented as a kind of cloud taking some shape in space; a probability density cloud. The intensity of the cloud’s shading or color represents varying degrees of probability density.

The shapes of these clouds vary by the type of orbital. Classes of orbitals include s-orbitals, p-orbitals, d-orbitals, and f-orbitals. These different kinds of orbitals are grouped by their orbital angular momentum. s-orbitals are sphere-shaped, nested shells. p-orbitals have a kind of “dumbbell” shape with lobes running along the x, y, and z axes. d-orbitals are even more unusual, most of them having four lobes, and one even having a kind of “donut” torus around its two lobes. Although we sometimes imagine atoms as microscopic solar systems with electrons orbiting in circles around the nucleus their structure is much more unusual, with these oddly shaped probability clouds all superimposed over each other. The structure of atoms into these orbitals has important implications for the nature of the elements and their arrangements into molecules. But before getting into that let’s pause a moment to reflect on the nature of the structure discussed up to this point.

Reflection on the Structure of the Wave Function

As with a sound wave, the function for an electron wave function is a solution to a differential equation, in this case the Schrödinger equation. This wave function ψ is a function of position. In spherical coordinates of r, θ, and φ this function is ψ(r,θ,φ). In the most basic terms a function is a rule that assigns each element in a set, or each combination of elements from multiple sets, to a single element in another set. This rule imposes additional structure on relations between these sets. So in our case we have a set for all r values, a set for all θ values, a set for all φ values, and a set for all ψ values. Prior to the imposition of structure by any function we could combine elements from these sets in any way we like. In a four-dimensional (abstract) phase space or state space with axes r, θ, φ, and ψ all points are available; any ordered quadruple (r,θ,φ,ψ) is an option. That’s because an ordered triplet (r,θ,φ) can be associated with any value of ψ. There’s no rule in place limiting which values of ψ the ordered triplet (r,θ,φ) can associate with. The entire phase space is available; all states are available. But with the imposition of the function ψ(r,θ,φ) the region of permissible states conforming to this rule is significantly smaller. An ordered triplet (r,θ,φ) can be assigned to one and only one value of ψ.

It’s useful here to distinguish between logical possibility and physical possibility. In what sense are all ordered quadruples (r,θ,φ,ψ) in the state space “possible”? Most of them are not really physically possible for the electron in an atom because they would violate the laws of physics, the laws of quantum mechanics. That’s because a function, the wave function, is imposed. But in the hypothetical case that it were not imposed, any ordered quadruple (r,θ,φ,ψ) would be logically possible; there’s no contradiction in such a combination. At least, not until we start to develop the assumptions that lead to the Schrödinger equation and its solutions. But since the actual, physical world follows physical laws only the states satisfying the function ψ(r,θ,φ) are physically possible.

This distinction between logical possibility and physical possibility highlights one very basic source of structure: structure that arises from physical laws. Atomic orbitals are not man-made structures. There certainly are such things as man-made structures as well. But atomic orbitals are not an example of that. I say all this to justify including atomic orbitals as examples of structure in the first place, since in a physical sense they seem “already there” anyway, something that couldn’t be otherwise. But in light of the much more vast state space of logically possible states I think it makes sense to think of even these physically given states as highly structured when compared to the logically limitless states from which they stand apart.

I’d like to make one point of clarification here, especially considering the reputation quantum mechanics has for being something especially inexact or even anti-realist. What is it that the wave function specifies at each point in space, for each ordered triplet (r,θ,φ)? It’s certainly not the position of the electron. That indeed isn’t specified. But what is specified is the amplitude, ψ. And the square modulus of the amplitude, |ψ|², is the probability density for finding the electron at that position, (r,θ,φ). The wave function doesn’t specify the electron’s exact position. Does this mean that chaos reigns for the electron? The electron could, after all, be anywhere in the universe (with the exception of certain nodes). But that infinite extension of possible positions doesn’t mean that chaos reigns or that the electron isn’t bound by structure. The probability density of the electron’s position in space is very precisely defined and governs the way the electron behaves. It’s not the case that just anything goes. Certain regions of space are highly probable and most regions of space are highly improbable.

This is something of a matter of perspective and it’s a philosophical rather than scientific matter. But still just as interesting, for me at least. It pertains to the kinds of properties we should expect to see in different kinds of systems. What kinds of properties should we expect quantum systems to have? What are quantum properties? Do quantum systems have definite properties? I’ve addressed this in another episode on the podcast, drawing on the thought of Sunny Auyang. In her view there’s an important distinction to be made between classical properties and quantum properties. Even if quantum systems don’t have definite classical properties that’s not to say they don’t have properties at all. They just have properties of a different kind, properties that are more removed from the kinds of classical properties we interact with on a daily basis. We’re used to interacting with definite positions and definite momenta at our macroscopic scale of experience. At the quantum level such definite predicates are not found for position and momentum, but they are found for the position representation and momentum representation of a system’s wave function. Quoting Auyang:

“Are there predicates such that we can definitely say of a quantum system, it is such and so? Yes, the wavefunction is one. The wavefunction of a system is a definite predicate for it in the position representation. It is not the unique predicate; a predicate in the momentum representation does equally well. Quantum properties are none other than what the wavefunctions and predicates in other representations describe.” (How Is Quantum Field Theory Possible?)

I think of this as moving our perspective “up a level”, looking not at position itself but at the wave function that gives the probability amplitude, ψ, and probability density, |ψ|², of position. That is where we find definite values governed by the laws of physics. It’s appropriate to look at this level for these kinds of quantum systems, because of the kind of things that they are. To expect something else from them would be to expect something that it is not appropriate to expect from the kind of thing that it is.

Molecular Orbitals

Let’s move now to molecules. Molecules are groups of atoms held together by chemical bonds. This makes use of a concept discussed in the last episode that is pertinent to structure generally: that of embedding. Lower-level structures get embedded, as kinds of modules, into higher-level structures. The lower-level structures remain but their combinations make possible a huge proliferation of new kinds of structures. As we move from the level of atoms to molecules the number of possible entities expands dramatically. There are many more kinds of molecules than there are kinds of atoms. As of 2021 there are 118 known kinds of atoms, called elements. That’s impressive. But this is minuscule compared to the number of molecules that can be made from combinations and arrangements of these elements. To give an idea, the Chemical Abstracts Service, which assigns a unique CAS registry number to different chemicals, currently has a database of 177 million different chemical substances. These are just molecules that we’ve found or made. There are many more that will be made and could be made.

Electrons are again the key players in the formation of molecules. The behavior of electrons, their location probability densities, and their wave-like behavior continue to be defined by mathematical wave functions that abide by the Schrödinger equation. A wave function, ψ, gives a probability amplitude and its square modulus, |ψ|², gives the probability of finding an electron in a given region. So many of the same principles apply. But the nature of these functions at the molecular level is more complex. In molecules the orbitals take two new important forms: hybridized orbitals and molecular orbitals.

Hybridized orbitals are combinations of regular atomic orbitals that combine to form hybrids. So where before we had regular s-type and p-type orbitals, these can combine to form hybrids such as sp³, sp², and sp orbitals. With a carbon atom for instance, in the formation of various organic molecules, the orbitals of the valence electrons will hybridize.

Molecular orbitals are the wave functions for electrons in the chemical bonds between the atoms that make up a molecule. Molecular orbitals are formed by combining atomic orbitals or hybrid atomic orbitals from the atoms in the molecule. The wave functions for molecular orbitals don’t have analytic solutions to the Schrödinger equation so they are calculated approximately.

A methane molecule is a good example to look at. A methane molecule consists of 5 atoms: 1 carbon atom and 4 hydrogen atoms. Its chemical formula is CH₄. A carbon atom has 6 electrons, with 4 valence electrons that are available to participate in chemical bonds. In the case of a methane molecule these 4 valence electrons will participate in 4 bonds with 4 hydrogen atoms. In its ground state the 4 valence electrons occupy one 2s orbital and two 2p orbitals. In order to form 4 bonds there need to be 4 identical orbitals available. So the one 2s orbital and three 2p orbitals hybridize to form 4 sp³ hybrid orbitals. An sp³ orbital, as a hybrid, is a kind of mixture of an s-type and p-type orbital. The dumbbell shape of a p-orbital combines with the spherical shape of an s-orbital to form a kind of lopsided dumbbell. It’s these hybrid sp³ orbitals that then combine with the 1s orbitals of the hydrogen atoms to form molecular orbitals. In this case the type of molecular orbitals that form are called σ-bonds.

The 2s and 2p orbitals in the carbon atom can also hybridize in other ways to form two or three bonds. For example, a carbon atom can bond with 2 hydrogen atoms and 1 other carbon atom. When it does this the 2s orbital hybridizes with just 2 of the 2p orbitals to form 3 sp² orbitals, which bond with the 2 hydrogens and the other carbon. The remaining 2p orbital overlaps with the corresponding 2p orbital on the other carbon atom. This makes two sets of orbitals combining into two molecular bonds, a σ-bond and what is called a π-bond. When a σ-bond and a π-bond form between atoms it is called a double bond. Carbon atoms can also form triple bonds, in which two sp orbitals are formed from the 2s orbital and one 2p orbital. This leaves two 2p orbitals to combine with their counterparts in another carbon atom to form a triple bond, composed of 1 σ-bond and 2 π-bonds. Single bonds, double bonds, and triple bonds all have their own geometrical properties like bond angles and freedom of rotation. This has effects on the properties of the resulting molecule.
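
If you want to see these hybridizations assigned mechanically, here’s a short sketch using the open-source RDKit cheminformatics library (assuming it’s installed; the molecules are given as SMILES strings):

```python
from rdkit import Chem

# Carbon hybridization in ethane (C-C), ethene (C=C), and ethyne (C#C)
for name, smiles in [("ethane", "CC"), ("ethene", "C=C"), ("ethyne", "C#C")]:
    mol = Chem.MolFromSmiles(smiles)
    carbon = mol.GetAtomWithIdx(0)
    print(name, carbon.GetHybridization())  # SP3, SP2, SP respectively
```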

Functional Groups

σ-bonds, π-bonds, single bonds, double bonds, and triple bonds make possible several basic molecular structures called functional groups. Functional groups are specific groupings of atoms within molecules that have their own characteristic properties. What’s useful about functional groups is that they occur in larger molecules and contribute to the overall properties of the parent molecule to which they belong. There are functional groups containing just carbon, but also functional groups containing halogens, oxygen, nitrogen, sulfur, phosphorus, boron, and various metals. Some of the most common functional groups include: alkyls, alkenyls, alkynyls, and phenyls (which contain just carbon); fluoros, chloros, and bromos (which contain halogens); hydroxyls, carbonyls, carboxyls, and ethers (which contain oxygen); carboxamides and amines (which contain nitrogen); sulfhydryls and sulfides (which contain sulfur); phosphates (which contain phosphorus); and so forth.
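
Functional groups are regular enough that they can be matched by pattern. Here’s a sketch using RDKit substructure searches (the SMARTS patterns are common illustrative ones, not an exhaustive or authoritative set):

```python
from rdkit import Chem

# A few illustrative functional-group patterns written in SMARTS
groups = {
    "hydroxyl": "[OX2H]",                # -OH
    "carbonyl": "[CX3]=[OX1]",           # C=O
    "amine": "[NX3;H2,H1;!$(NC=O)]",     # primary/secondary amine, not amide
}

# Ethanol, acetone, and ethylamine as small test molecules
for name, smiles in [("ethanol", "CCO"), ("acetone", "CC(=O)C"),
                     ("ethylamine", "CCN")]:
    mol = Chem.MolFromSmiles(smiles)
    found = [g for g, patt in groups.items()
             if mol.HasSubstructMatch(Chem.MolFromSmarts(patt))]
    print(name, "->", found)
```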

Repeatable Units

The last subject I’d like to address with all this is the role of repeatable units in the formation of complex chemical structures. Let’s come at this from a different direction, starting at the scale of a complex molecule and working our way down. One of the most complex, sophisticated kinds of molecules is a protein. Proteins are huge by molecular standards. Cytochrome c, for example, has a molecular weight of about 12,000 daltons. (For comparison, methane, discussed previously, has a molecular weight of 16 daltons). What we find with such molecules is that they are highly reducible to a limited number of repeatable units. But we could imagine it being otherwise: a macromolecule irreducible to anything below its overall macrostructure, without any discernible repeating components. Let’s imagine a hypothetical, counterfactual case in which a macromolecule of that size is just a chaotic lump. Imagine going to a landfill and gathering a bunch of trash from a heap with all sorts of stuff in it, gathering it all together, rolling it into a ball, and binding it with hundreds of types of unmixed adhesives. Any spatial region or voxel of that lump would have different things in it. You might find some cans and wrappers in one part, computer components in another, shredded office papers in another, etc. We could imagine a macromolecule looking like that; a completely heterogeneous assembly. We could imagine further such a heterogeneous macromolecule being able to perform the kinds of functions that proteins perform. Proteins can in fact be functionally redundant; there’s more than one way to make a protein that performs a given function. So we might imagine a maximally heterogeneous macromolecule that is able to perform all the functions that actually existing proteins perform. But this kind of maximal heterogeneity is not what we see in proteins.

Instead, proteins are composed of just 20 repeatable units, a kind of protein-forming alphabet. These are amino acids. All the diversity we see in protein structure and function comes from different arrangements of these 20 amino acids. Why would proteins be limited to such a small number of basic components? The main reason is that proteins have to be put together and before that they have to be encoded. And it’s much more tractable to build from and encode a smaller number of basic units, as long as it gives you the structural functionality that you’ll need in the final macrostructure. It might be possible in principle to build a macromolecule without such a limited number of repeatable units. But it would never happen. The process to build such a macromolecule would be intractable.

This is an example of a general principle I’d like to highlight that we find in chemistry and in structure generally. And it’s related to embedding. But it’s a slightly different aspect of it. Complex, high-level structures are composed by the embedding of lower-level structures. And the higher-level structures make use of a limited number of lower-level structures that get embedded repeatedly.

In the case of a protein, the protein is the higher-level structure. Amino acids are the lower-level structures. The structures of the amino acids are embedded into the structure of the protein. And the higher-level structure of the protein uses only a limited number of lower-level amino acid structures.

A comparison to writing systems comes to mind here. It’s possible to represent spoken words in written form in various ways. For example, we can give each word its own character. That would take a lot of characters, several hundred and into the thousands. And such a writing system takes several years to be able to use with any competence. But it’s also possible to limit the number of characters used in a writing system by using the same characters for sound units common to all words, like syllables or phonemes. Many alphabets, for example, only have between 20 and 30 characters. And it’s possible to learn to use an alphabet fairly quickly. And here’s the key. There’s no functional representational loss by using such a limited number of characters. The representational “space” is the same. It’s just represented using a much smaller set of basic components.

Biochemists mark out four orders of protein structure: primary, secondary, tertiary, and quaternary. And this is a perfect illustration of structural embedding.

The primary structure of a protein is its amino acid sequence. The primary structure is conceptually linear since there’s no branching. So you can “spell” out a protein’s primary structure using an amino acid alphabet, one amino acid after another. Like, MGDVEK: methionine, glycine, aspartic acid, valine, glutamic acid, and lysine. Those are the first 6 amino acids in the sequence for human Cytochrome c. What’s interesting about amino acids is that they have different functional groups that give them properties that will contribute to the functionality of the protein. We might think of this as a zeroth-level protein structure (though I don’t know of anyone calling it that). Every amino acid has a carboxyl group and an amino group. That’s the same in all of them. But they each have their own side chain or R-group in addition to that. And these can be classified by properties like polarity, charge, and other functional groups they contain. For example, methionine is sulfur-containing, nonpolar, and neutral; asparagine is an amide, polar, and neutral; phenylalanine is aromatic, nonpolar, and neutral; lysine is basic, polar, and positively charged. These are important properties that contribute to a protein’s higher-level structure.
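
To make the “alphabet” idea concrete, here’s a toy lookup covering just the six residues of that Cytochrome c fragment (the property labels are deliberately coarse simplifications):

```python
# Coarse side-chain properties for the residues in "MGDVEK"
# (one-letter amino acid codes; classifications are simplified)
properties = {
    "M": ("methionine", "nonpolar, sulfur-containing, neutral"),
    "G": ("glycine", "nonpolar, smallest side chain, neutral"),
    "D": ("aspartic acid", "acidic, polar, negatively charged"),
    "V": ("valine", "nonpolar, aliphatic, neutral"),
    "E": ("glutamic acid", "acidic, polar, negatively charged"),
    "K": ("lysine", "basic, polar, positively charged"),
}

for code in "MGDVEK":
    name, props = properties[code]
    print(f"{code}: {name} ({props})")
```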

The secondary structure of a protein consists of three-dimensional, local structural elements. The interesting thing about secondary structures in the context of embedding and repeatable units is that these local structures take common shapes that occur all the time in protein formation. The two most important structural elements are alpha helices and beta sheets. True chemical bonds occur only between the amino acid units of the primary structure, but in the higher-level structures the electrostatic forces arising from differences in charge distribution throughout the primary structure make certain regions of the primary structure attracted to each other. These kinds of attractions are called hydrogen bonds, in which a hydrogen atom bound to a more electronegative atom or group is attracted to another electronegative atom bearing a lone pair of electrons. In the case of amino acids such hydrogen bonding occurs between the amino hydrogen and carboxyl oxygen atoms in the peptide backbone.

In an alpha helix these hydrogen bonds form in a way that makes the amino acids wrap around in a helical shape. In a beta sheet strands of amino acids will extend linearly for some length and then turn back onto themselves, with the new strand segment extending backward and forming hydrogen bonds with the previous strand. These hydrogen-bonded strands of amino acids then form planar sheet-like structures. What’s interesting is that these kinds of secondary structures are very common and get used repeatedly, much like amino acids get used repeatedly in primary structures. Secondary structures, like alpha helices and beta sheets (among others), then get embedded in even higher-level structures.

The tertiary structure of a protein is its full three-dimensional structure, incorporating all the lower-level structures. Tertiary structures are often represented using coils for the alpha helix components and thick arrows for the beta sheet components. The way a protein is oriented in three-dimensional space is determined by the properties of its lower-level structures, all the way down to the functional groups of the amino acids. Recall that the different amino acids can be polar or nonpolar. This is really important because proteins reside in aqueous environments with highly polar water molecules. Nonpolar groups are said to be hydrophobic because conditions that minimize the surface area of contact between nonpolar groups and polar water molecules are entropically favored. Because of this, polar and nonpolar molecules will appear to repel each other, a hydrophobic effect. Think of the separation of oil and water as an example. Water is polar and oil is nonpolar. This is the same effect occurring at the scale of individual functional group units in the protein. Proteins can fold in such a way as to minimize the surface area of nonpolar functional groups exposed to water molecules. One way this can happen is that nonpolar amino acid sections fold over onto each other so that they interact with each other rather than with water molecules, and so that water molecules can interact with each other rather than with the nonpolar amino acid sections. These same kinds of effects, driven by the properties of functional groups, were also the ones bringing about the secondary structures of alpha helices and beta sheets.

Some proteins also have a quaternary structure in which multiple folded protein subunits come together to form a multi-subunit complex. Hemoglobin is an example of this. Hemoglobin is made up of four subunits; two alpha subunits and two beta subunits.

There’s a pattern here of structure building upon structure. But it does so with a limited set of repeatable structures. I’d like to address this again. Why should proteins be built out of only 20 amino acid building blocks? Certainly there could be (at least in theory) a macromolecule that has similar functionality and global structure, using the same functional group properties to get it to fold and form in the needed way, without the use of repeatable lower-level structural units. But that’s not what we see. Why? One important reason is that proteins need to be encoded.

Proteins are made from genes. Genes are sections of DNA that get transcribed into RNA, which then gets translated into proteins. That’s a gene’s primary function: to encode proteins. DNA and RNA have even simpler sets of components: only four types of nucleotides in each: guanine, adenine, cytosine, and thymine in DNA and guanine, adenine, cytosine, and uracil in RNA. These nucleotides have to match up with the proteins that they encode and it’s going to be very difficult to do that without dividing up the protein into units that can be encoded in a systematic way. There’s a complex biochemical process bringing about the translation of an RNA nucleotide sequence into a protein. But since these are, at bottom, automatic chemical processes they have to proceed in systematic, repeatable ways. An entire intracellular biochemical system can’t be dedicated to just one macromolecule alone. For one thing, there are too many proteins for that. The same biochemical machinery for translation has to be able to make any protein. So all proteins have to be made up of the same basic units.

The way this works in translation is that molecules called transfer RNA (tRNA) are dedicated to specific combinations of the 4 basic RNA nucleotides. These combinations are called codons. A codon is some combination of 3 nucleotides. Since there are 4 kinds of nucleotides and each codon has 3 of them there are 4³, or 64, possible combinations. Different codons correspond to different amino acids. Since they only code for 20 amino acids there is obviously some redundancy, also called degeneracy (which isn’t meant to be insulting, by the way). The way that codons get translated into amino acids is that the tRNA molecules that match the nucleotide sequences of the various codons in the RNA also convey their encoded amino acids. These tRNA molecules come together at the sites of translation, called ribosomes, and link the amino acids together into the chains that form the primary structure of the protein. This is just a part of the biochemical machinery of the process. What’s important to note here is that although there are a number of tRNA types it’s not unmanageable. There are at most 64 possible codon sequences. So there doesn’t have to be a unique set of translation machinery dedicated to each and every kind of protein, which would be insane. The components only have to be dedicated to codon sequences and amino acids, which are much more manageable.
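
A short sketch makes the combinatorics and the lookup tangible (the codon table below is just the fragment needed for this example, not the full 64-entry genetic code):

```python
from itertools import product

# 4 nucleotides, 3 per codon: 4**3 = 64 possible codons
codons = ["".join(c) for c in product("GACU", repeat=3)]
print(len(codons))  # 64

# A fragment of the standard genetic code, enough for this example
codon_table = {
    "AUG": "M", "GGU": "G", "GAU": "D",
    "GUU": "V", "GAA": "E", "AAA": "K",
}

# Translate an RNA sequence codon by codon, as the ribosome does
rna = "AUGGGUGAUGUUGAAAAA"
peptide = "".join(codon_table[rna[i:i + 3]] for i in range(0, len(rna), 3))
print(peptide)  # MGDVEK, the Cytochrome c fragment from earlier
```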

Key Takeaways

I’d like to summarize the foregoing with 4 key takeaways from this analysis of structure in chemistry that I think apply to a general philosophy of structure.

1. Structure can be modeled using functions

Recall that a function is a relation between sets that associates each element in one set, or each combination of elements from multiple sets, to exactly one element in another set. The source sets are called domains and the target sets they map onto are called codomains. One example of a function we’ve looked at in both the previous episode on music and in this episode on chemistry is the waveform function. In chemistry, mathematical functions called orbitals assign to each point in space (the domain) an amplitude value (the codomain).

2. Functions occupy only a small portion of a phase space

Functions, by nature, impose limitations. A relation that associates an element in a domain to more than one element in a codomain would not be a function. A function associates the domain element to only one codomain element. In this way functions are very orderly. To give an example, in an orbital a given point in space (a domain element) can have only one amplitude value (the codomain element). This is highly limited. To illustrate this, imagine a phase space of all possible combinations of domain and codomain values. Or to give a simpler comparison, imagine a linear function on an x-y plane; for example, the function y = x. This is a straight line at a 45 degree angle to the x and y axes. The straight line is the function. But the phase space is the entire plane. The plane contains all possible combinations of x and y values. But the function is restricted to only those points where y = x. A similar principle applies to orbitals. The corresponding phase space would be, not a plane, but a 4-dimensional hyperspace with axes r, θ, φ, and ψ. The phase space is the entire hyperspace. But the wave function, or orbital, is restricted to a 3-dimensional hypersurface within this 4-dimensional hyperspace. This kind of restriction of functions to small portions of phase spaces is a characteristic feature of structure generally.

3. Structural embedding

Embedding is a feature of structure that came up in music and has come up again in even more obvious form in chemistry. Just looking at proteins, the different orders of structure are quite obvious and well known to biochemists, with their conceptual division of proteins into primary, secondary, tertiary, and quaternary protein structures, each level of structure incorporating the lower-level structures embedded into it. Using proteins as an example, even primary structures have embedded into them several layers of additional structure such as functional groups, molecular orbitals, atomic orbitals, and the general structure of the wave function itself. One key feature of such embedding is that properties and functionality of the lower-level structures are taken up and integrated into the higher-level structures into which they are embedded. We saw, for example, how the three-dimensional tertiary structure of a protein takes the form that it does because of the properties of functional groups in the side chains of individual amino acids, in particular polarity and nonpolarity.

4. Repeatable units

A final key takeaway is the use of repeatable units in the process of structural embedding. In retrospect this is certainly something that is applicable to music as well. We see repeatable units in the form of pitches and notes. In chemistry we see repeatable units in macromolecules like polymers and proteins. Polymers, like polyethylene, PVC, ABS, polyester, etc. certainly use repeatable units; in some cases a single repeating unit, or sometimes two or three. Proteins make use of more repeatable units but even there they make use of a limited number: 20 amino acids. We see here an important general principle of structure: that high-level structures tend to be composed through the repeated use of a limited number of lower-level structures rather than by forming as a single, bulk, irreducible macrostructure. The use of lower-level repeatable units in the higher-level structure facilitates the encoding and construction of high-level structures.

And that wraps up this study of structure in chemistry. Thank you for listening.

Quantum Properties

Should we understand quantum systems to have definite properties? In quantum interpretations, values are usually taken to be the eigenvalues directly revealed in experiments, and quantum systems generally have no definite eigenvalues. However, Sunny Auyang argues that this does not mean that they don’t have definite properties. The conclusion that they don’t arises from a restricted sense of what counts as a property. The conceptual structure of quantum mechanics is much richer, and an expanded notion of properties facilitates an understanding of quantum properties that are more descriptive and structurally sophisticated.

One of the philosophical problems prompted by quantum mechanics is the nature of quantum properties and whether quantum systems can even be said to have properties. This is an issue addressed by Sunny Auyang in her book How is Quantum Field Theory Possible? And I will be following her treatment of the subject here.

One of the major contributors to the development of quantum mechanics, physicist Niels Bohr, whose grave I happened to visit when I was in Copenhagen, said: “Atomic systems should not even be thought of as possessing definite properties in the absence of a specific experimental setup designed to measure these properties.” Why is that? A lot of this hinges on what counts as a property, which is a matter of convention. For the kinds of things Bohr had in mind he was certainly right. But Auyang argues that it’s useful to retain the notion and instead locate quantum properties in different kinds of things, in a way Bohr very easily could have agreed with.

Why are the kinds of things Bohr had in mind not good candidates as definite quantum properties? The upshot, before getting into the more technical description, is that in quantum systems properties like position don’t seem to have definite values prior to observation. As an example, in chemistry the electrons bound in atoms and molecules are understood to occupy orbitals, which are regions of space with probability densities. Rather than saying that a bound electron is at some position we say it has some probability to be at some position. If we think of a definite property as being something like position you can see why Bohr would say an atomic system doesn’t have definite properties in the absence of some experiment to measure it. Atomic and molecular orbitals don’t give us a definite property like position.

Auyang takes these kinds of failed candidates for definite properties to be what in quantum mechanics are called eigenvalues. And this will require some background. But to give an idea of where we’re going, Auyang wants to say that if we insist that properties are what are represented by eigenvalues then it is true that quantum systems do not have properties. However, she is going to argue that quantum systems do have properties; they are just not their eigenvalues. We have to look elsewhere for such properties.

In quantum mechanics the characteristics of a quantum system are summarized by a quantum state. This is represented by a state vector or wave function, usually with the letter φ. A vector is a quantity that has both magnitude and direction. Vectors can be represented by arrows on a graph. So in a two dimensional graph the arrow would go from the center origin out into what is called the vector space. In two dimensions you could express the vector in terms of the horizontal and vertical axes; and the vector space would just be the plane these sweep out or span. It’s common to represent this in two, maybe three dimensions, but it’s actually not limited to that number; a vector space can have any number of dimensions. Whatever number of dimensions it has it will have a corresponding number of axes, which are more technically referred to as basis vectors. Quantum mechanics makes use of a special kind of vector space called a Hilbert space. This is also the state space of a quantum system. So recall that the description of the quantum system is its state, and this is represented by a vector. The state space then covers all permissible states that this quantum system can have.

Let’s limit this to two dimensions for the sake of visualization. And we can refer here to the featured image for this episode, which is a figure from Auyang’s book. We have a vector |φ> in a Hilbert space with the basis vectors {|α₁>, |α₂>}. So for this Hilbert space |α₁> and |α₂> are basis vectors that serve as a coordinate system for this vector space. This is the system but it’s not what we interact with. For us to get at this system in some way we need to run experiments. And this also has a mathematical representation. What we get out of the system are observables like energy, position, and momentum, to name a few. Mathematically, observables are associated with operators. An operator is a kind of linear transformation. Basically an operator transforms the state vector in some way. As a transformation, an operator usually maps one state into another state. But for certain states an operator will only result in the same state multiplied by some scaling factor. So let’s take some operator, upper case A, and have it operate on state |φ>. The result is a factor, lower case α, multiplied by the original state |φ>. We can write this as:

A|φ> = α|φ>

In this kind of equation the vector |φ> is called an eigenvector and the factor α is called an eigenvalue. The prefix eigen- is adopted from the German word eigen for “proper”, “characteristic”, “own”, in reference to the fact that the original state or eigenvector is the same on both sides of the equation. In quantum mechanics this eigenvector is also called an eigenstate.

Now, getting back to quantum properties, I mentioned before that Auyang takes the kind of definite properties that quantum systems are understood not to have prior to observation to be eigenvalues. Eigenstates are certainly observed and corresponding eigenvalues measured in experiments. But the issue is of properties of the quantum system itself. Any given eigenvalue has only a certain probability of being measured, among the probabilities of other eigenvalues. So any single eigenvalue can’t be said to be characteristic of the whole quantum system.

Let’s go back to the two-dimensional Hilbert space with state vector |φ> and basis vectors |α₁> and |α₂>. The key feature of basis vectors is that every vector in the vector space can be written as a linear combination of those vectors. That’s how they act as a coordinate system. So if we take our vector |φ> we can break it down into two orthogonal (right angle) components, in this case the horizontal and vertical components, and then the values for the coefficients of those components will be some factor, cᵢ, of the basis vectors. So for vector |φ> the components will be c₁|α₁> and c₂|α₂>. In the more generalized form with an unspecified number of dimensions we can say that the vector |φ> is equal to the sum of cᵢ|αᵢ> for all i.

|φ> = ∑ cᵢ|αᵢ>

The complex numbers cᵢ are amplitudes, or probability amplitudes, though we should note that it’s actually the square of the absolute value of cᵢ that is a probability. Specifically, the quantity |cᵢ|² is the probability that the eigenvalue aᵢ is observed in a measurement of the operator A on the state vector |φ>. This is known as the Born rule. Another way of describing this summation equation is to say that the state of the system is a linear combination, or superposition, of all the eigenstates that compose it and that these eigenstates are “weighted” by their respective probability amplitudes. Eigenstates with higher probability amplitudes are more likely to be observed. And this touches again on the idea that observations of certain eigenstates are probabilistic and that’s the reason that the eigenvalues for these eigenstates are not considered definite properties. Because they’re not definite; they’re probabilistic.

If we apply operator A to state |φ> we have a new vector A|φ>. In our Hilbert space this new vector’s components are expressible in terms of the coordinates, or basis vectors. If the basis vectors are eigenvectors of A then these components are expressible in terms of the probability amplitudes cᵢ. We could say that the application of this operator A to vector |φ> extracts each cᵢ and multiplies it by the eigenvalue aᵢ. And this is good because remember eigenvalues are what we actually observe in experiments. So now we can express the state of the system in terms of things we can observe.

This transformed vector A|φ> is equal to the sum of the products of eigenvalue aᵢ, amplitude cᵢ, and eigenvector |αᵢ>, for all i.

A|φ> = ∑ aᵢcᵢ|αᵢ>
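
Here’s a numerical sketch of this machinery with a toy 2×2 Hermitian operator (illustrative numbers only): expanding a state in the operator’s eigenbasis gives the amplitudes cᵢ, the |cᵢ|² give Born-rule probabilities, and applying A multiplies each component by its eigenvalue.

```python
import numpy as np

# A toy observable: a 2x2 Hermitian (here real symmetric) operator A
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Its eigenvalues a_i and orthonormal eigenvectors |alpha_i> (as columns)
eigenvalues, eigenvectors = np.linalg.eigh(A)

# An arbitrary normalized state |phi>
phi = np.array([0.6, 0.8])

# Expansion coefficients c_i = <alpha_i|phi>: the probability amplitudes
c = eigenvectors.T @ phi

# Born rule: |c_i|^2 is the probability of observing eigenvalue a_i
probabilities = np.abs(c) ** 2
print(probabilities, probabilities.sum())  # the probabilities sum to 1

# Applying A scales each component by its eigenvalue:
# A|phi> = sum_i a_i c_i |alpha_i>
reconstructed = eigenvectors @ (eigenvalues * c)
print(np.allclose(A @ phi, reconstructed))  # True
```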

Now we’re ready to get into what Auyang considers to be the properly definite properties of quantum systems. For some observable A and its operator, the sequence of complex numbers {aᵢcᵢ} can be called an A-amplitude and is, using the eigenvalues, expressed in terms of the probability amplitudes cᵢ. And this is where Auyang locates the properties of quantum systems. She interprets the probability amplitude cᵢ or the A-amplitude as the definite property or the value of a certain quantum system in a certain state for the property type A. And she makes the point that we shouldn’t try to imagine what the amplitudes and A-amplitudes describe because they are nothing like classical features; “they are literally unimaginable”. But they are calculable. And that’s their crucial, property-like feature.

We might ask why we should locate definite properties in something that we can’t imagine. Classical properties like classical energy, position, and momentum are more easily envisioned, so these prospective, unimaginable quantum properties might seem unsatisfying. But this touches on Auyang’s general Kantian perspective on the sciences, which is that our understanding of scientific concepts relies on a complex underlying conceptual structure. And in this case that underlying conceptual structure includes things like vectors, Hilbert spaces, bases, eigenvectors, eigenvalues, and amplitudes. If that structure is required to comprehend the system it’s not unreasonable that the system’s definite properties would be expressed in terms of that structure.

With that mathematical overview let’s look at the concept of properties more closely and at our expectations of them. And here I’d like to just quote an extended passage directly from Auyang’s book because this is actually my favorite passage:

“In quantum interpretations, the ‘values’ are usually taken to be eigenvalues or spectral values, which can be directly revealed in experiments, although the revelation may involve some distortion so that the veracity postulate does not hold. It is beyond a reasonable doubt that quantum systems generally have no definite eigenvalues. However, this does not imply that they have no definite properties. The conclusion that they have none arises from the fallacious restriction of properties to classical properties, of which eigenvalues are instances. Sure, quantum systems have no classical properties. But why can’t they have quantum properties? Is it more reasonable to think that quantum mechanics is necessary because the world has properties that are not classical?”

“The no-property fallacy also stems from overlooking the fact that the conceptual structure of quantum mechanics is much richer than that of classical mechanics. In classical mechanics, the properties of a system are represented by the numerical values of functions, which assign real numbers to various states of the system. In quantum mechanics, functions are replaced by operators, which are structurally richer. A function is like a fish with only one swaying tail, its numerical value; an operator is like an octopus with many legs. Quantum mechanics employs the octopus with good reason, and we miss something important if we look only at the one leg that reminds us of the fishy tail. Quantum systems generally do not have definite eigenvalues, but they have other definite values. The stipulation that the values must be directly revealable in measurements confuses the empirical and physical meanings of properties.”

“I argue that we cannot give up the notion of objective properties. If we did, the quantum world would become a phantom and the application of quantum mechanics to practical situations sorcery. Are there predicates such that we can definitely say of a quantum system, it is such and so? Yes, the wavefunction is one. The wavefunction of a system is a definite predicate for it in the position representation. It is not the unique predicate; a predicate in the momentum representation does equally well. Quantum properties are none other than what the wavefunctions and predicates in other representations describe.”

And recall here that a wave function is another way of referring to the state of a quantum system. I think of this as moving things up a level. Or down a level, depending on how you want to think of it. Regardless, at one level we have the eigenvalues that pop out with the application of an operator on a state vector. These are not definite properties of the system as a whole. In other words, the definite properties of the quantum system do not reside at this level. Rather they reside at the level prior to this, on which these outcomes depend. In the case of an atomically bound electron we could say that it is the orbital, the probability distribution of the electron’s location, that is a property of the quantum system, rather than any particular position. These sorts of properties have a lot more to them. As Auyang says, they are “structurally richer”. They’re not just values. They are amplitudes, from which we derive probabilities for various values. And what Auyang is saying is that there’s no reason not to consider that the definite property of the quantum system.

Still, it is different from our classical notion of properties. So what is it that is common to both classical and quantum properties? Auyang borrows a term from Alfred Landé, proposing that a characteristic has empirical ramification if it is observable or “kickable”:

“Something is kickable if it can be kicked and kicks back, or it can be somehow physically manipulated and the manipulation produces observable effects. Presumably the property is remote and obscure if we must resort to the indirect kickability criterion alone. Thus kickability can only work in a well-developed theory in which the property is clearly defined, for we must be able to say specifically what we are kicking and how it is supposed to kick back.”

In the case of quantum properties we are indeed in a situation where the property is “remote and obscure”. But we also have recourse to “a well-developed theory in which the property is clearly defined”. So that puts us in a good position. Because of this it doesn’t matter if properties are easily visualizable. “Quantum properties are not visualizable, but this will no longer prevent them from being physical”. The physical surpasses what we are able to visualize.

So there is a well-developed conceptual structure that connects observables to the definite properties of the quantum system prior to these observables. To review a little how this structure and cascade of connections works:

We start with the most immediate aspect: what we actually observe, which enters into the conceptual structure as eigenvalues. Eigenvalues of an observable can be regarded as labels of the eigenstates. Eigenstates serve as axes of a coordinate system in the state space. This is an important point, so I’ll repeat it again in another way. As Auyang puts it: “An observable coordinates the quantum world in a particular way with its eigenstates, and formally correlates the quantum coordinate axes to classical indicators, the eigenvalues. An observable introduces a representation of the quantum state space by coordinatizing it.” So we have observations to eigenvalues, to eigenstates, to axes in a state space.

The coordinate system in the state space enables us to determine definite amplitudes. The state space is a vector space and any particular state or quantum system in this state space is a vector in this space. We can break this vector down into components which are expressed in terms of the coordinate system or basis, i.e. the eigenstates. This gives the coefficients cᵢ, which are probability amplitudes. This is why we’re able to determine definite amplitudes using the coordinate system. A quantum system has no definite eigenvalues but it does have definite amplitudes. When it’s broken down into its basis components a quantum state is a series of terms in an eigenstate expansion, multiple terms that are added up to define the vector. Each of these terms has an amplitude associated with an eigenstate that is also associated with some observable. Practically, an indicator in the form of an eigenvalue is somehow triggered in measurements and experiments. And the probability of observing any particular eigenvalue will be defined by its amplitude. Specifically, the quantity |cᵢ|² is the probability that the eigenvalue aᵢ is observed in a measurement of A on the state |φ>. But it is the probability amplitude cᵢ that is the definite property of the quantum system rather than any particular eigenvalue that happens to be observed. What’s more, this is an objective property of the quantum system even in the absence of any experiment. As Auyang puts it: “Unperformed experiments have no results, but this does not imply that the quantum system on which the experiment might be performed has no properties.” Now to show the more complete cascade of kickability: we have physical observations, to eigenvalues, to eigenstates, to axes in a state space, to a state vector, to vector components, to component coefficients, to probability amplitudes. And it’s the probability amplitudes that are the definite properties of the quantum system.

The question of whether or not quantum systems have definite properties is a philosophical question rather than a question of physics, to the extent that those can be separated. It’s not necessary to engage in the philosophy in order to engage in the physics. One can measure eigenvalues and calculate probability amplitudes without worrying about whether any of them count as properties. But it’s arguably part of the scientific experience to step back on occasion to reflect on the big picture. To ask things like, “What is the nature of the work that we’re doing here?”, “What does all this tell us about the nature of reality?”, “Or about the way we conceptualize scientific theories?” For me one of the most fascinating insights prompted by quantum mechanics is of the necessity of the elaborate conceptual structures that support our understanding of the theory. To put it in Kantian terms, these conceptual structures are “transcendental” in the sense that they constitute the conditions that are presupposed and necessary for us to be able to understand the theory in the first place. And to me that seems quite significant.