How to Use Entropy

Entropy is an important property in science but it can be a challenging concept to grasp. It is commonly understood as “disorder”, which is fine as an analogy but there are better ways to think about it. As with many concepts, especially complex ones, better understanding comes with repeated use and application. Here we look at how to use and quantify entropy in applications with steam and chemical reactions.

Entropy is rather intimidating. It’s important to the sciences of physics and chemistry but it’s also highly abstract. There are, no doubt, more than a couple of students who graduate with college degrees in the physical sciences or in engineering who don’t have much of an understanding of what it is or what to do with it. We know it’s there and that it’s a thing but we’re glad not to have to think about it any more after we’ve crammed for that final exam in thermodynamics. I think one reason for that is because entropy isn’t something that we often use. And using things is how we come to understand them, or at least get used to them.

Ludwig Wittgenstein argued in his later philosophy that the way we learn words is not with definitions or representations but by using them, over and over again. We start to learn “language games” as we play them, whether as babies or as graduate students. I was telling my daughters the other day that we never really learn all the words in a language. There are lots of words we’ll never learn and that, if we happen to hear them, mean nothing to us. To use a metaphor from Wittgenstein again, when we hear these words they’re like wheels that turn without anything else turning with them. I think entropy is sometimes like this. We know it’s a thing but nothing else turns with it. I want to plug it into the mechanism. I think we can understand entropy better by using it to solve physical problems, to see how it interacts (and “turns”) with things like heat, temperature, pressure, and chemical reactions. My theory is that using entropy in this way will help us get used to it and be more comfortable with it. So that maybe it’s a little less intimidating. That’s the object of this episode.

I’ll proceed in three parts.

1. Define what entropy is

2. Apply it to problems using steam

3. Apply it to problems with chemical reactions

What is Entropy?

I’ll start with a technical definition that might be a little jarring but I promise I’ll explain it.

Entropy is a measure of the number of accessible microstates in a system that are macroscopically indistinguishable. The equation for it is:

S = k ln W

Here S is entropy, k is the Boltzmann constant, and W is the number of accessible microstates in a system that are macroscopically indistinguishable.

Most people, if they’ve heard of entropy at all, haven’t heard it described in this way, which is understandable because it’s not especially intuitive. Entropy is often described informally as “disorder”. Like how your bedroom will get progressively messier if you don’t actively keep it clean. That’s probably fine as an analogy but it is only an analogy. I prefer to dispense with the idea of disorder altogether as it relates to entropy. I think it’s generally more confusing than helpful.

But the technical, quantifiable definition of entropy is a measure of the number of accessible microstates in a system that are macroscopically indistinguishable.

S = k ln W

Entropy S has units of energy divided by temperature; I’ll use units of J/K. The Boltzmann constant k is 1.38 x 10^-23 J/K. The Boltzmann constant has the same units as entropy so those will cancel, leaving W as just a number with no dimensions.

W is the number of accessible microstates in a system that are macroscopically indistinguishable. So we need to talk about macrostates and microstates. An example of a macrostate is the temperature and pressure of a system. The macrostate is something we can measure with our instruments: temperature with a thermometer and pressure with a pressure gauge. But at the microscopic or molecular level the system is composed of trillions of molecules and it’s the motion of these molecules that produce what we see as temperature and pressure at a macroscopic level. The thermal energy of the system is distributed between its trillions of molecules and every possible, particular distribution of thermal energy between each of these molecules is an individual microstate. The number of ways that thermal energy of a system can be distributed among its molecules is an unfathomably huge number. But the vast majority of them make absolutely no difference at a macroscopic level. The vast majority of the different possible microstates correspond to the same macrostate and are macroscopically indistinguishable.

To dig a little further into what this looks like at the molecular level, the motion of a molecule can take the form of translation, rotation, and vibration. Actually, in monatomic molecules it only takes the form of translation, which is just its movement from one position to another. Polyatomic molecules can also undergo rotation and vibration, with the number of vibrational patterns increasing as the number of atoms increases and shape of the molecule becomes more complicated. All these possibilities for all the molecules in a system are potential microstates. And there’s a huge number of them. Huge, but also finite. A fundamental postulate of quantum mechanics is that energy is quantized. Energy levels are not continuous but actually come in discrete levels. So there is a finite number of accessible microstates, even if it’s a very huge finite number.

For a system like a piston we can set its entropy by setting its energy (U), volume (V), and number of atoms (N); its U-V-N conditions. If we know these conditions we can predict what the entropy of the system is going to be. The reason for this is that these conditions set the number of accessible microstates. The reason that the number of accessible microstates would correlate with the number of atoms and with energy should be clear enough. Obviously having more atoms in a system will make it possible for that system to be in more states. The molecules these atoms make up can undergo translation, rotation, and vibration and more energy makes more of that motion happen. The effect of volume is a little less obvious but it has to do with the amount of energy separating each energy level. When a set number of molecules expand into a larger volume the energy difference between the energy levels decreases. So there are more energy levels accessible for the same amount of energy. So the number of accessible microstates increases.

The entropies for many different substances have been calculated at various temperatures and pressures. There’s an especially abundant amount of data for steam, since industry has had the most practical need for it. Let’s look at some examples with water at standard pressure and temperature conditions. The entropy of each phase is:

Solid Water (Ice): 41 J/mol-K

Liquid Water: 69.95 J/mol-K

Gas Water (Steam): 188.84 J/mol-K

One mole of water is 18 grams. So how many microstates does 18 grams of water have in each of these cases?

First, solid water (ice):

S = k ln W

41 J/K = 1.38 x 10^-23 J/K * ln W

Divide 41 J/K by 1.38 x 10^-23 J/K and the units cancel

ln W = 2.97 x 10^24

That’s already a big number but we’re not done yet.

Raise e (about 2.718) to the power of both sides

W = 10^(1.29 x 10^24) microstates

W = 10^1,290,000,000,000,000,000,000,000 microstates

That is an insanely huge number.

Using the same method, the value for liquid water is:

W = 10^(2.2 x 10^24) microstates

W = 10^2,200,000,000,000,000,000,000,000 microstates

And the value for steam is:

W = 10^(5.94 x 10^24) microstates

W = 10^5,940,000,000,000,000,000,000,000 microstates

In each case the increased thermal energy makes additional microstates accessible. The fact that these are all really big numbers makes it a little difficult to see that, since these are differences in exponents, each number is astronomically larger than the previous one. Liquid water has 10^(9.1 x 10^23) times as many accessible microstates as ice. And steam has 10^(3.74 x 10^24) times as many accessible microstates as liquid water.
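If you want to check these numbers yourself, here’s a minimal Python sketch of the calculation; the function name and layout are just my own, nothing standard. It converts a molar entropy into the base-10 logarithm of the number of microstates using S = k ln W, since the numbers themselves are far too large to compute directly.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def log10_microstates(molar_entropy):
    """Return log10(W) for one mole of substance with the given
    standard molar entropy in J/mol-K, using S = k ln W."""
    ln_w = molar_entropy / K_B        # ln W = S / k
    return ln_w / math.log(10)        # convert natural log to log10

for phase, s in [("ice", 41.0), ("liquid water", 69.95), ("steam", 188.84)]:
    print(f"{phase}: W = 10^({log10_microstates(s):.3g})")
```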

With these numbers in hand let’s stop a moment to think about the connection between entropy and probability. Let’s say we set the U-V-N conditions for a system of water such that it would be in the gas phase. So we have a container of steam. We saw that 18 grams of steam has 10^(5.94 x 10^24) microstates. The overwhelming majority of these microstates are macroscopically indistinguishable. In most of the microstates the distribution of the velocities of the molecules is Gaussian; they’re not all at identical velocity but they are distributed around a mean along each spatial axis. That being said, there are possible microstates with different distributions. For example, there are 10^(1.29 x 10^24) microstates in which that amount of water would be solid ice. That’s a lot! And they’re still accessible. There’s plenty of energy there to access them. And a single microstate for ice is just as probable as a single microstate for steam. But there are 10^(4.65 x 10^24) times as many microstates for steam than there are for ice. It’s not that any one microstate for steam is more probable than any one microstate for ice. It’s just that there are a lot, lot more microstates for steam. The percentage of microstates that take the form of steam is not 99% or 99.99%. It’s much, much closer than that to 100%. Under the U-V-N conditions that make those steam microstates accessible they will absolutely dominate at equilibrium.

What if we start away from equilibrium? Say we start our container with half ice and half steam by mass. But with the same U-V-N conditions for steam. So it has the same amount of energy. What will happen? The initial conditions won’t last. The ice will melt and boil until the system just flips among the vast number of microstates for steam. If the energy of the system remains constant it will never return to ice. Why? It’s not actually absolutely impossible in principle. But it’s just unimaginably improbable.

That’s what’s going on at the molecular level. Macroscopically entropy is a few levels removed from tangible, measured properties. What we see macroscopically are relations between heat flow, temperature, pressure, and volume. But we can calculate the change in entropy between states using various equations expressed in terms of these macroscopic properties that we can measure with our instruments.

For example, we can calculate the change in entropy of an ideal gas using the following equation:

Δs = cp ln(T2/T1) - R ln(P2/P1)

Here s is entropy, cp is heat capacity at constant pressure, T is temperature, R is the ideal gas constant, and P is pressure. We can see from this equation that, all other things being equal, entropy increases with temperature and decreases with pressure. And this matches what we saw earlier. Recall that if the volume of a system of gas increases with a set quantity of material the energy difference between the energy levels decreases and there are more energy levels accessible for the same amount of energy. Under these circumstances pressure decreases while entropy increases, which is to say that entropy decreases with pressure.

For solids and liquids we can assume that they are incompressible and leave off the pressure terms. So the change in entropy for a solid or liquid is given by the equation:

Δs = c ln(T2/T1)

Where c is the heat capacity of the solid or liquid.

Let’s do an example with liquid water. What’s the change in entropy, and the increase in the number of accessible microstates, that comes from increasing the temperature of liquid water one degree Celsius? Let’s say we’re increasing 1 mole (18 grams) of water from 25 to 26 degrees Celsius. At this temperature the heat capacity of water is 75.3 J/mol-K.

Δs = 75.3 J/mol-K x ln(299.15 K / 298.15 K) = 0.252 J/mol-K

Now that we have the increase in entropy we can find the increase in the number of microstates using the equation

ΔS = k ln(W2/W1)

Setting this equal to 0.252 J/mol-K (0.252 J/K for our 1 mole of water) and solving:

ln(W2/W1) = 0.252 J/K / (1.38 x 10^-23 J/K) = 1.83 x 10^22

W2/W1 = 10^(7.9 x 10^21)

The increase is not as high as it was with phase changes, but it’s still a very big change.
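Here’s the same kind of quick check in Python for the one-degree temperature increase, a sketch under the assumptions above (incompressible liquid water with a constant heat capacity of 75.3 J/mol-K):

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
cp = 75.3                 # heat capacity of liquid water, J/mol-K
T1, T2 = 298.15, 299.15   # 25 C to 26 C, in kelvin

# Incompressible substance: delta_s = cp * ln(T2/T1)
delta_s = cp * math.log(T2 / T1)             # ~0.252 J/mol-K

# delta_S = k ln(W2/W1)  ->  log10(W2/W1) = delta_s / (k * ln 10)
log10_ratio = delta_s / (K_B * math.log(10))

print(f"delta_s = {delta_s:.3f} J/mol-K")
print(f"W2/W1 = 10^({log10_ratio:.2g})")     # ~10^(7.9 x 10^21)
```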

We’ll wrap up the definition section here but conclude with some general intuitions we can gather from these equations and calculations:

1. All other things being equal, entropy increases with temperature.

2. All other things being equal, entropy decreases with pressure.

3. Entropy increases with phase changes from solid to liquid to gas.

Keeping these intuitions in mind will help as we move to applications with steam.

Applications with Steam

The first two examples in this section are thermodynamic cycles. All thermodynamic cycles have 4 processes.

1. Compression

2. Heat addition

3. Expansion

4. Heat rejection

These processes circle back on each other so that the cycle can be repeated. Think, for example, of pistons in a car engine. Each cycle of the piston is going through each of these processes over and over again, several times per second.

There are many kinds of thermodynamic cycles. The idealized cycle is the Carnot cycle, which gives the upper limit on the efficiency of conversion from heat to work. Otto cycles and diesel cycles are the cycles used in gasoline and diesel engines. Our steam examples will be from the Rankine cycle. In a Rankine cycle the 4 processes take the following form:

1. Isentropic compression

2. Isobaric heat addition

3. Isentropic expansion

4. Isobaric heat rejection

An isobaric process is one that occurs at constant pressure. An isentropic process is one that occurs at constant entropy.

An example of a Rankine cycle is a steam turbine or steam engine. Liquid water is boiled to steam in a boiler; the steam expands through a turbine, turning it; the fluid then passes through a condenser and is pumped back to the boiler, where the cycle repeats. In such problems the fact that entropy is the same before and after expansion through the turbine reduces the number of unknown variables in our equations.

Let’s look at an example problem. Superheated steam at 6 MPa and 600 degrees Celsius expands through a turbine at a rate of 2 kg/s and drops in pressure to 10 kPa. What’s the power output from the turbine?

We can take advantage of the fact that the entropy of the fluid is the same before and after expansion. We just have to look up the entropy of superheated steam in a steam table. The entropy of steam at 6 MPa and 600 degrees Celsius is:

The entropy of the fluid before and after expansion is the same but some of it condenses. This isn’t good for the turbines but it happens nonetheless. Ideally most of the fluid is still vapor; the ratio of the mass that is saturated vapor to the total fluid mass is called the “quality”. The entropies of saturated liquid, sf, and of evaporation, sfg, are very different. So we can use algebra to calculate the quality, x2, of the fluid. The total entropy of the expanded fluid is given by the equation:

s2 = sf + x2 sfg

s2 we already know because the entropy of the fluid exiting the turbine is the same as that of the fluid entering the turbine. And we can look up the other values in steam tables.

Solving for quality:

x2 = (s2 - sf)/sfg

Now that we know the quality we can find the work output from the turbine. The equation for the work output of the turbine is:

W = m (h1 - h2)

where m is the mass flow rate (2 kg/s in this problem).

h1 and h2 are the enthalpies before and after expansion. If you’re not familiar with enthalpy don’t worry about it (we’re getting into enough for now). It roughly corresponds to the substance’s energy. We can look up the enthalpy of the superheated steam in a steam table.

For the fluid leaving the turbine we need to calculate the enthalpy using the quality, since it’s part liquid, part vapor. We need the enthalpy of saturated liquid, hf, and of evaporation, hfg. The total enthalpy of the fluid leaving the turbine is given by the formula

h2 = hf + x2 hfg

From the steam tables

So

And now we can plug this in to get the work output of the turbine.

So here’s an example where we used the value of entropy to calculate other observable quantities in a physical system. Since the entropy was the same before and after expansion we could use that fact to calculate the quality of the fluid leaving the turbine, use quality to calculate the enthalpy of the fluid, and use the enthalpy to calculate the work output of the turbine.
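For anyone who wants to follow the arithmetic, here’s a rough Python sketch of this turbine problem. The steam-table numbers are approximate values taken from standard tables (properties of superheated steam at 6 MPa and 600 degrees Celsius, and saturation properties at 10 kPa), so treat them as illustrative rather than authoritative.

```python
# Approximate steam-table values (standard references):
h1 = 3658.4                   # kJ/kg, superheated steam at 6 MPa, 600 C
s1 = 7.1677                   # kJ/kg-K
sf, sfg = 0.6493, 7.5009      # kJ/kg-K, saturation properties at 10 kPa
hf, hfg = 191.83, 2392.8      # kJ/kg

m_dot = 2.0                   # mass flow rate, kg/s

# Isentropic expansion: s2 = s1 = sf + x2*sfg, so solve for the quality x2
x2 = (s1 - sf) / sfg          # ~0.869

# Enthalpy of the part-liquid, part-vapor mixture leaving the turbine
h2 = hf + x2 * hfg            # ~2271 kJ/kg

# Turbine power output
power = m_dot * (h1 - h2)     # ~2770 kW
print(f"x2 = {x2:.3f}, h2 = {h2:.1f} kJ/kg, power = {power:.0f} kW")
```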

A second example. Superheated steam at 2 MPa and 400 degrees Celsius expands through a turbine to 10 kPa. What’s the maximum possible efficiency from the cycle? Efficiency is work output divided by heat input. We have to input work as well to compress the fluid with the pump so that will subtract from the work output from the turbine. Let’s calculate the work used by the pump first. Pump work is:

wpump = v (P2 - P1)

Where v is the specific volume of water, 0.001 m^3/kg. Plugging in our pressures in kPa:

wpump = 0.001 m^3/kg x (2000 - 10) kPa = 1.99 kJ/kg

So there’s our pump work input.

The enthalpy of saturated liquid is:

Plus the pump work input is:

Now we need heat input. The enthalpy of superheated steam at 2 MPa and 400 degrees Celsius is:

So the heat input required is:

The entropy before and after expansion through the turbine is the entropy of superheated steam at 2 MPa and 400 degrees Celsius:

As in the last example, we can use this to calculate the quality of the steam with the equation:

Looking up these values in a steam table:

Plugging these in we get:

And

Now we can calculate the enthalpy of the expanded fluid.

And the work output of the turbine.

So we have the work input of the pump, the heat input of the boiler, and the work output of the turbine. The maximum possible efficiency is:

So efficiency is 32.32%.

Again, we used entropy to get quality, quality to get enthalpy, enthalpy to get work, and work to get efficiency. In this example we didn’t even need the mass flux of the system. Everything was on a per kilogram basis. But that was sufficient to calculate efficiency.
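Here’s a similar sketch for the efficiency calculation, with the same caveat that the steam-table values are approximate numbers from standard tables:

```python
# Approximate steam-table values (standard references):
h3 = 3247.6                   # kJ/kg, superheated steam at 2 MPa, 400 C
s3 = 7.1271                   # kJ/kg-K
hf, hfg = 191.83, 2392.8      # kJ/kg, saturation properties at 10 kPa
sf, sfg = 0.6493, 7.5009      # kJ/kg-K
v = 0.001                     # m^3/kg, specific volume of liquid water

# Pump work per kg: w_pump = v * (P_high - P_low), pressures in kPa
w_pump = v * (2000 - 10)      # ~1.99 kJ/kg
h_boiler_in = hf + w_pump     # enthalpy entering the boiler

# Boiler heat input per kg
q_in = h3 - h_boiler_in       # ~3054 kJ/kg

# Isentropic expansion to 10 kPa: quality, exit enthalpy, turbine work
x4 = (s3 - sf) / sfg          # ~0.864
h4 = hf + x4 * hfg            # ~2258 kJ/kg
w_turbine = h3 - h4           # ~989 kJ/kg

efficiency = (w_turbine - w_pump) / q_in
print(f"efficiency = {efficiency:.2%}")      # ~32.3%
```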

One last example with steam. The second law of thermodynamics has various forms. One form is that the entropy of the universe can never decrease. It is certainly not the case that entropy can never decrease at all. Entropy decreases all the time within certain systems. In fact, all the remaining examples in this episode will be cases in which entropy decreases within certain systems. But the total entropy of the universe cannot decrease. Any decrease in entropy must have a corresponding increase in entropy somewhere else. It’s easier to see this in terms of an entropy balance.

The entropy change in a system can be negative but the balance of the change in system entropy, entropy in, entropy out, and entropy of the surroundings will never be negative. We can look at the change of entropy of the universe as a function of the entropy change of a system and the entropy change of the system’s surroundings:

ΔSuniverse = ΔSsystem + ΔSsurroundings ≥ 0

So let’s look at an example. Take 2 kg of superheated steam at 400 degrees Celsius and 600 kPa and condense it by pulling heat out of the system. The surroundings have a constant temperature of 25 degrees Celsius. From steam tables the entropies of the superheated steam and of the saturated liquid it condenses to are:

With these values we can calculate the change in entropy inside the system using the following equation:

ΔSsystem = m (s2 - s1)

The entropy decreases inside the system. Nothing wrong with this. Entropy can definitely decrease locally. But what happens in the surroundings? We condensed the steam by pulling heat out of the system and into the surroundings. So there is positive heat flow, Q, out into the surroundings. We can find the change in entropy in the surroundings using the equation:

ΔSsurroundings = Q/T

We know the surroundings have a constant temperature, so we know T. We just need the heat flow Q. We can calculate the heat flow into the surroundings by calculating the heat flow out of the system using the equation

Q = mΔh

So we need the enthalpies of the superheated steam and of the saturated liquid.

And plugging these in

Q = mΔh = (2 kg)(3270.2 - 670.6) kJ/kg = 5199.2 kJ

Now that we have Q we can find the change in entropy in the surroundings:

The entropy of the surroundings increases. And the total entropy change of the universe is:

So even though entropy decreases in the system the total entropy change in the universe is positive.
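Here’s a short sketch of that entropy balance. The enthalpies are the ones quoted above; the entropies are approximate values from standard tables, and I’m assuming the final state is saturated liquid at 600 kPa, which is consistent with the enthalpy used in the heat calculation.

```python
h1, s1 = 3270.2, 7.7086   # superheated steam, 600 kPa, 400 C (kJ/kg, kJ/kg-K)
h2, s2 = 670.6, 1.9312    # saturated liquid at 600 kPa (assumed final state)

m = 2.0                   # kg of steam
T_surr = 298.15           # surroundings at 25 C, in kelvin

dS_system = m * (s2 - s1)            # ~ -11.55 kJ/K, entropy falls in the system
Q_out = m * (h1 - h2)                # ~ 5199 kJ rejected to the surroundings
dS_surroundings = Q_out / T_surr     # ~ +17.44 kJ/K
dS_universe = dS_system + dS_surroundings

print(f"dS_system = {dS_system:.2f} kJ/K")
print(f"dS_surroundings = {dS_surroundings:.2f} kJ/K")
print(f"dS_universe = {dS_universe:.2f} kJ/K")   # positive, as the second law requires
```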

I like these examples with steam because they’re very readily calculable. The thermodynamics of steam engines have been extensively studied for over 200 years, with scientists and engineers gathering empirical data. So we have abundant data on entropy values for steam in steam tables. I actually think just flipping through steam tables and looking at the patterns is a good way to get a grasp on the way entropy works. Maybe it’s not something you’d do for light reading on the beach but if you’re ever unable to fall asleep you might give it a try.

With these examples we’ve looked at entropy for a single substance, water, at different temperatures, pressures, and phases, and observed the differences of the value of entropy at these different states. 

To review some general observations:

1. All other things being equal, entropy increases with temperature.

2. All other things being equal, entropy decreases with pressure.

3. Entropy increases with phase changes from solid to liquid to gas.

In the next section we’ll look at entropies for changing substances in chemical reactions.

Applications with Chemical Reactions

The most important equation for the thermodynamics of chemical reactions is the Gibbs Free Energy equation:

ΔG=ΔH-TΔS

Where H, T, S are enthalpy, temperature, and entropy. ΔG is the change in Gibbs free energy. Gibbs free energy is a thermodynamic potential. It is minimized when a system reaches chemical equilibrium. For a reaction to be spontaneous the value for ΔG has to be negative, meaning that during the reaction the Gibbs free energy is decreasing and moving closer to equilibrium.

We can see from the Gibbs free energy equation

ΔG=ΔH-TΔS

That the value of the change in Gibbs free energy is influenced by both enthalpy and entropy. The change in enthalpy tells us whether a reaction is exothermic (negative ΔH) or endothermic (positive ΔH). Exothermic reactions release heat while endothermic reactions absorb heat. This has to do with the total change in the chemical bond energies in all the reactants against all the products. In exothermic reactions the energy released in forming the new chemical bonds of the products is greater than the energy required to break the chemical bonds of the reactants. This extra energy is converted to heat. We can see from the Gibbs free energy equation that exothermic reactions are more thermodynamically favored. Nevertheless, entropy can override enthalpy.

The minus sign in front of the TΔS term tells us that an increase in entropy where ΔS is positive will be more thermodynamically favored. This makes sense with what we know about entropy from the second law of thermodynamics and from statistical mechanics. The effect is proportional to temperature. At low temperatures entropy won’t have much influence and enthalpy will dominate. But at higher temperatures entropy will start to dominate and override enthalpic effects. This makes it possible for endothermic reactions to proceed spontaneously. If the increase in entropy for a chemical reaction is large enough and the temperature is high enough endothermic reactions can proceed spontaneously, even though the energy required to break the chemical bonds of the reactants is more than the energy released in forming the chemical bonds of the products.

Let’s look at an example. The chemical reaction for the production of water from oxygen and hydrogen is:

H2 + ½ O2 → H2O

We can look up the enthalpies and entropies of the reactants and products in chemical reference literature. What we need are the standard enthalpies of formation and the standard molar entropies of each of the components.

The standard enthalpies of formation of oxygen and hydrogen are both 0 kJ/mol. By definition, all elements in their standard states have a standard enthalpy of formation of zero. The standard enthalpy of formation for water is -241.83 kJ/mol. The total change in enthalpy for this reaction is

ΔH = -241.83 kJ/mol - (0 + 0) kJ/mol = -241.83 kJ/mol

It’s negative which means that the reaction is exothermic and enthalpically favored.

The standard molar entropies for hydrogen, oxygen, and water are, respectively, 130.59 J/mol-K, 205.03 J/mol-K, and 188.84 J/mol-K. The total change in entropy for this reaction is

ΔS = 188.84 - (130.59 + ½ x 205.03) = -44.3 J/mol-K

It’s negative so entropy decreases in this reaction, which means the reaction is entropically disfavored. So enthalpy and entropy oppose each other in this reaction. Which will dominate depends on temperature. At 25 degrees Celsius (298 K) the change in Gibbs free energy is

ΔG = -241.83 kJ/mol - (298 K)(-0.0443 kJ/mol-K) = -228.6 kJ/mol

The reaction is thermodynamically favored. Even though entropy is reduced in this reaction, at this temperature that effect is overwhelmed by the favorable reduction in enthalpy as chemical bond energy of the reactants is released as thermal energy.
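Here’s the same calculation written out as a small Python sketch, using only the standard-state values quoted above:

```python
# Standard-state data quoted above (25 C):
dHf_H2O = -241.83                              # kJ/mol, enthalpy of formation of water vapor
S_H2, S_O2, S_H2O = 130.59, 205.03, 188.84     # J/mol-K, standard molar entropies

# Reaction: H2 + 1/2 O2 -> H2O
dH = dHf_H2O - (0.0 + 0.5 * 0.0)               # elements in standard state contribute zero
dS = S_H2O - (S_H2 + 0.5 * S_O2)               # ~ -44.3 J/mol-K

T = 298.15                                     # K
dG = dH - T * dS / 1000.0                      # convert dS from J to kJ

print(f"dH = {dH:.2f} kJ/mol, dS = {dS:.1f} J/mol-K, dG = {dG:.1f} kJ/mol")
# dG ~ -228.6 kJ/mol: negative, so the reaction is spontaneous at 25 C
```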

Where’s the tradeoff point where entropy overtakes enthalpy? This is a question commonly addressed in polymer chemistry with what’s called the ceiling temperature. Polymers are macromolecules in which smaller molecular constituents called monomers are consolidated into larger molecules. We can see intuitively that this kind of molecular consolidation constitutes a reduction in entropy. It corresponds with the rough analogy of greater order from “disorder” as disparate parts are assembled into a more organized totality. And that analogy isn’t bad. So in polymer production it’s important to run polymerization reactions at temperatures where exothermic, enthalpy effects dominate. The upper end of this temperature range is the ceiling temperature.

The ceiling temperature is easily calculable from the Gibbs free energy equation for polymerization

ΔGp = ΔHp - TΔSp

Set ΔGp to zero.

And solve for Tc

Tc = ΔHp/ΔSp

At this temperature enthalpic and entropic effects are balanced. Below this temperature polymerization can proceed spontaneously. Above this temperature depolymerization can proceed spontaneously.

Here’s an example using polyethylene. The enthalpies and entropies of polymerization for polyethylene are

Using our equation for the ceiling temperature we find Tc ≈ 883 K, or about 610 degrees Celsius.

So for a polyethylene polymerization reaction you want to run the reaction below 610 degrees Celsius so that the exothermic, enthalpic benefit overcomes your decrease in entropy.
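As a sketch, the ceiling temperature calculation looks like this in Python. The enthalpy and entropy of polymerization here are assumed, illustrative values chosen to be consistent with the roughly 610 degree Celsius ceiling temperature quoted above; consult a polymer handbook for precise numbers.

```python
# Assumed, illustrative polymerization values for ethylene (not authoritative):
dHp = -101.5e3    # J/mol, enthalpy of polymerization (exothermic)
dSp = -114.9      # J/mol-K, entropy of polymerization (consolidation lowers entropy)

# At the ceiling temperature dGp = dHp - Tc*dSp = 0, so Tc = dHp / dSp
Tc = dHp / dSp
print(f"Tc = {Tc:.0f} K = {Tc - 273.15:.0f} C")   # ~883 K, ~610 C
```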

Conclusion

A friend and I used to get together on weekends to take turns playing the piano, sight reading music. We were both pretty good at it and could play songs reasonably well on a first pass, even though we’d never played or seen the music before. One time when someone was watching us she asked, “How do you do that?” My friend had a good explanation I think. He explained it as familiarity with the patterns of music and the piano. When you spend years playing songs and practicing scales you just come to know how things work. Another friend of mine said something similar about watching chess games. He could easily memorize entire games of chess because he knew the kinds of moves that players would tend to make. John Von Neumann once said: “In mathematics you don’t understand things. You just get used to them.” I would change that slightly to say that you understand things by getting used to them. Also true for thermodynamics. Entropy is a complex property and one that’s not easy to understand. But I think it’s easiest to get a grasp on it by using it.

Philosophy of Structure, Part 3: Chemistry

Part 3 in this series on the philosophy of structure looks at examples and general principles of structure in chemistry. Subjects covered include quantum mechanics, the Schrödinger equation, wave functions, orbitals, molecules, functional groups, and the multiple levels of structure in proteins. General principles discussed include the nature of functions, the embedding of lower-level structures into higher-level structures, and the repeated use of a limited number of lower-level structures in the formation of higher-level structures.

In this third episode on the philosophy of structure I’d like to look at examples of structure in the field of chemistry. I’d like to see how some of the general principles of structure discussed in previous episodes apply to chemistry and to see what new general principles we can pick up from the examples of chemical structures. I’ll proceed along different scales, from the smallest and conceptually most fundamental components of chemical structure up to the larger, multicomponent chemical structures. For basic principles I’ll start with quantum mechanics, the Schrödinger  equation, and its wave function solutions, which constitute atomic and molecular orbitals. From there I’ll look at different functional groups that occur repeatedly in molecules. And lastly I’ll look at the multiple levels of structure of proteins, the embedding of chemical structures, and the use of repeatable units in the formation of multicomponent chemical structures.

One aspect from previous discussions that won’t really show up in chemistry is an aesthetic dimension of structure. That’s not to say that chemical structures lack beauty. I find them quite beautiful and the study and application of chemical structures has actually been the primary subject of my academic and professional work. In other words, I probably find chemistry more aesthetically satisfying than most people commonly would. But what I’m coming to think of as the philosophical problem of systematizing the aesthetic dimension of structure, in fields like music, art, and literature, isn’t so directly applicable here. I’ll get back to that problem in future episodes. The aesthetic dimension is not so intrinsic to the nature of the subject in the case of chemistry.

So let’s start getting into chemical structures by looking at the smallest and most conceptually fundamental scale.

Matter Waves

There is, interestingly enough, an intriguing point of commonality between music and chemistry at the most fundamental level; and that is in the importance of waveforms. Recall that the fundamental building block of a musical composition is a sound wave, a propagation of variations in the local pressure in which parts of the air are compacted and parts of the air are rarified. Sound waves are governed by the wave equation, a second order partial differential equation, and its solutions, in which a series of multiple terms are added together in a superposition, with each individual term in that summation representing a particular harmonic or overtone. There are going to be a lot of similarities to this in the basic building of chemical structures.

One of the key insights and discoveries of twentieth century science was that matter also takes the form of waves. This is foundational to quantum mechanics and it is known as the de Broglie hypothesis. This was a surprising and strange realization but it goes a long way in explaining much of what we see in chemistry. Because a particle is a wave it also has a wavelength. 

Recall that in acoustics, with mechanical waves propagating through a medium, wavelength is related to the frequency and speed of the wave’s propagation. That relation is:

λ = v/f

Where λ is the wavelength, f is frequency, and v is the wave propagation velocity. With this kind of mechanical wave the wave is not a material “thing” but a process, a disturbance, occurring in a material medium.

But with a matter wave the wave is the matter itself. And the wavelength of the matter wave is related to the particle’s momentum, a decidedly material property. A particle’s wavelength is inversely proportional to its momentum. This relation is stated in the de Broglie equation:

λ = h/p

In which λ is the wavelength, h is a constant called Planck’s constant (6.626×10^-34 J·s), and p is momentum. Momentum is a product of mass and velocity:

p = mv

Because the wavelength of a matter wave is inversely proportional to momentum the wavelength for the matter waves of macroscopic particles, the kinds of objects we see and interact with in our normal experience, is going to be very, very short, so as to be completely negligible. But for subatomic particles their wavelengths are going to be comparable to the scale of the atom itself, which will make their wave nature very significant to their behavior.
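To make that concrete, here’s a small Python sketch comparing the de Broglie wavelength of an electron with that of a thrown baseball; the masses and speeds are just illustrative choices.

```python
H_PLANCK = 6.626e-34          # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, velocity_m_s):
    """lambda = h / p = h / (m * v)"""
    return H_PLANCK / (mass_kg * velocity_m_s)

# An electron moving at about 1% of the speed of light
electron = de_broglie_wavelength(9.109e-31, 3.0e6)   # ~2.4e-10 m, atomic scale

# A baseball, roughly 0.145 kg thrown at 40 m/s
baseball = de_broglie_wavelength(0.145, 40.0)        # ~1.1e-34 m, utterly negligible

print(f"electron: {electron:.2e} m")
print(f"baseball: {baseball:.2e} m")
```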

One interesting consequence of the wave nature of matter is that the precision of simultaneous values for momentum and position of a matter wave is limited. This is known as the Uncertainty Principle. There’s actually a similar limit to the precise specification of both wavelength and position for waves in general, i.e. for any and all waves. But because wavelength is related to momentum in matter waves this limitation gets applied to momentum as well.

Recall that with sound waves a musical pitch can be a superposition of multiple frequencies or wavelengths. This superposition is expressed by the multiple terms in a Fourier Series. Any function can be approximated using a Fourier Series, expressed in terms of added sinusoidal (oscillating) waves. A function that is already sinusoidal can be matched quite easily. The Fourier Series can converge on more complicated functions as well but they will require more terms (that’s important). In the case of musical pitches the resulting functions were periodic waves that repeated endlessly. But a Fourier Series can also describe pulses that are localized to specific regions. The catch is that more localized pulses, confined to tighter regions, require progressively more terms in the series, which means a higher number of wavelengths.

Bringing this back to matter waves, these same principles apply. Under the de Broglie formula wavelength is related to momentum. A pure sine wave that repeats endlessly has only one wavelength. But it also covers an infinite region. As a matter wave this would be a perfect specification of momentum with no specification of position. A highly localized pulse is confined to a small region but requires multiple terms and wavelengths in its Fourier Series. So its position is highly precise but its momentum is much less precise.

The limit of the simultaneous specification of momentum and position for matter waves is given by the equation:

σxσp ≥ h/(4π)

Where σx is the standard deviation of position, σp is the standard deviation of momentum, and h is Planck’s constant. The product of these two standard deviations has a lower limit. At this lower limit it’s only possible to decrease the standard deviation of one by increasing the standard deviation of the other. And this is a consequence of the wave nature of matter.
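Here’s a quick numerical illustration: if we confine an electron to a region about one angstrom wide (an assumed, atom-sized spread in position), the minimum spread in its momentum, and hence velocity, is already enormous.

```python
import math

H_PLANCK = 6.626e-34      # Planck's constant, J*s
M_ELECTRON = 9.109e-31    # electron mass, kg

sigma_x = 1.0e-10         # m, roughly one angstrom

# Heisenberg limit: sigma_x * sigma_p >= h / (4*pi)
sigma_p_min = H_PLANCK / (4 * math.pi * sigma_x)
sigma_v_min = sigma_p_min / M_ELECTRON

print(f"sigma_p >= {sigma_p_min:.2e} kg*m/s")
print(f"sigma_v >= {sigma_v_min:.2e} m/s")   # ~5.8e5 m/s, not negligible at all
```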

The most important application of these wave properties and quantum mechanical principles in chemistry is with the electron. Protons and neutrons are also important particles in chemistry, and significantly more massive than electrons. But it’s with the electrons where most of the action happens. Changes to protons and neutrons are the subject of nuclear chemistry, which is interesting but not something we’ll get into this time around. In non-nuclear chemical reactions it’s the electrons that are being arranged into the various formations that make up chemical structures. The behavior of an electron is described by a wave function, and the wave function is governed by the Schrödinger equation.

The Schrödinger equation is quite similar to the classical wave equation that governs sound waves. Recall that the classical wave equation is:

d^2u/dx^2 = (1/v^2) * d^2u/dt^2

Where u is the wave displacement from the mean value, x is distance, t is time, and v is velocity. A solution to this equation can be found using a method of separation of variables. The solution u(x,t) can be written as the product of a function of x and a sinusoidal function of time. We can write this solution as:

u(x,t) = ψ(x) * cos (2πft)

Where f is the frequency of the wave in cycles per unit time and ψ(x) is the spatial factor of the amplitude of u(x,t), the spatial amplitude of the wave. Substituting ψ(x) * cos (2πft) into the differential wave equation gives the following equation for the spatial amplitude ψ(x).

d^2ψ/dx^2 + (4π^2f^2/v^2) * ψ(x) = 0

And since frequency multiplied by wavelength is equal to velocity (fλ = v) we can rewrite this in terms of wavelength, λ:

d^2ψ/dx^2 + (4π^2/λ^2) * ψ(x) = 0

So far this is just applicable to waves generally. But where things get especially interesting is the application to matter waves, particularly to electrons. Recall from the de Broglie formula that:

λ = h/p

In which h is a constant called Planck’s constant (6.626×10^-34 J·s) and p is momentum. We can express the total energy of a particle in terms of momentum by the equation:

E = p^2/2m + V(x)

Where E is total energy, m is mass, and V(x) is potential energy as a function of distance. Using this equation we can also express momentum in these terms:

p = {2m[E - V(x)]}^(1/2)

And since,

λ = h/p

The differential equation becomes

d^2ψ/dx^2 + (2m/ħ^2) * [E - V(x)] ψ(x) = 0

Where

ħ = h/(2π)

This can also be written as

-ħ^2/2m * d^2ψ/dx^2 + V(x) ψ(x) = E ψ(x)

This is the Schrödinger equation. Specifically, it’s the time-independent Schrödinger equation. So what do we have here? There’s a similar relationship between the classical wave equation (a differential equation) and its solution u(x,t), which characterizes a mechanical wave. The Schrödinger equation is also a differential equation and its solution, ψ(x), is a wave function that characterizes a matter wave. It describes a particle of mass m moving in a potential field described by V(x). Of special interest to chemistry is the description of an electron moving in the potential field around an atomic nucleus.

Let’s rewrite the Schrödinger equation using a special expression called an operator. An operator is a symbol that tells you to do something to whatever follows the symbol. The operator we’ll use here is called a Hamiltonian operator, which has the form:

H = -ħ^2/2m * d^2/dx^2 + V(x)

Where H is the Hamiltonian operator. It corresponds to the total energy of a system, including terms for both the kinetic and potential energy. We can express the Schrödinger equation much more concisely in terms of the Hamiltonian operator, in the following form:

H ψ(x) = E ψ(x)

There are some special advantages to expressing the Schrödinger equation in this form. One is that this takes the form of what is called an eigenvalue problem. An eigenvalue problem is one in which an operator is applied to an eigenfunction and the result returns the same eigenfunction, multiplied by some constant called the eigenvalue. In this case the operator is the Hamiltonian, H. The eigenfunction is the wave function, ψ(x). And the eigenvalue is the observable energy, E. These are all useful pieces of information to have that relate to each other very nicely, when expressed in this form.
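One way to see the eigenvalue structure concretely is to solve the equation numerically. The sketch below is my own illustration, not anything from the chemistry literature: it discretizes the one-dimensional time-independent Schrödinger equation on a grid so that the Hamiltonian operator becomes a matrix, and diagonalizing that matrix returns the eigenvalues E and the eigenfunctions ψ. With a harmonic potential in dimensionless units (ħ = m = 1) the lowest eigenvalues come out near 0.5, 1.5, 2.5, as they should.

```python
import numpy as np

# Grid for the spatial coordinate and a harmonic potential V(x) = x^2 / 2
N = 1000
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]
V = 0.5 * x**2

# Kinetic term -(1/2) d^2/dx^2 via a second-order finite difference,
# which turns the Hamiltonian operator H into a tridiagonal matrix.
main_diag = 1.0 / dx**2 + V
off_diag = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(main_diag) + np.diag(off_diag, 1) + np.diag(off_diag, -1)

# H psi = E psi as a matrix eigenvalue problem
E, psi = np.linalg.eigh(H)
print(E[:4])   # ~[0.5, 1.5, 2.5, 3.5], the harmonic-oscillator energy levels
```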

Orbitals

In chemistry the wave functions of electrons in atoms and molecules are called atomic or molecular orbitals. And these are also found using the Schrödinger equation; they are solutions to the Schrödinger equation. The inputs to these wave functions are coordinates for points in space. The output from these wave functions, ψ, is some value, whose meaning is a matter of interpretation. The prevailing interpretation is the Born Rule, which gives a probabilistic interpretation. Under the Born Rule the value of ψ is a probability amplitude and the square modulus of the probability amplitude, |ψ|^2, is called a probability density. The probability density defines for each point in space the probability of finding an electron at that point, if measured. So it has a kind of conditional, operational definition. More particularly, we could say, reducing the space to a single dimension, x, that |ψ(x)|^2 dx gives the probability of finding the electron between x and x + dx. Going back to 3 dimensions, the wave function assigns a probability amplitude value, ψ, and a probability density value, |ψ|^2, to each point in space. Informally, we might think of the regions of an orbital with the highest probability density as the regions where an electron “spends most of its time”.

The Schrödinger equation can be solved exactly for the hydrogen atom, giving exact electron wave functions. For other atoms and molecules it cannot be solved analytically, but the solutions can be approximated to high precision using methods like the variational method and perturbation theory. And again, we call these wave functions orbitals. I won’t get into the specifics of the methods for finding the exact solutions for the hydrogen atom but I’ll make some general comments. For an atom the Cartesian (x,y,z) coordinates for the three dimensions of space aren’t so convenient so we convert everything to spherical coordinates (r,θ,φ) in which r is a radial distance and θ and φ are angles with respect to Cartesian axes. The term for potential, V(r), in the Hamiltonian operator will be defined by the relation between a proton and an electron. And the mass of the electron also gets plugged into the Hamiltonian. Solving for the wave function makes use of various mathematical tools like spherical harmonics and radial wave functions. Radial wave functions in turn make use of Laguerre polynomials. Then solutions for the hydrogen atom will be expressed in terms of spherical harmonic functions and radial wave functions, with the overall wave function being a function of the variables (r,θ,φ).

Because the orbitals are functions of r, θ,and φ they can be difficult to visualize and represent. But partial representations can still give an idea of their structure. An orbital is often represented as a kind of cloud taking some kind of shape in space; a probability density cloud. The intensity of the cloud’s shading or color represents varying degrees of probability density.

The shapes of these clouds vary by the type of orbital. Classes of orbitals include s-orbitals, p-orbitals, d-orbitals, and f-orbitals. These different kinds of orbitals are grouped by their orbital angular momentum. s-orbitals are sphere shaped, nested shells. p-orbitals have a kind of “dumbbell” shape with lobes running along the x, y, and z axes. d-orbitals are even more unusual, with lobes running along two axes, and one orbital even having a kind of “donut” torus shape. Although we sometimes imagine atoms as microscopic solar systems with electrons orbiting in circles around the nucleus their structure is much more unusual, with these oddly shaped probability clouds all superimposed over each other. The structure of atoms into these orbitals has important implications for the nature of the elements and their arrangements into molecules. But before getting into that let’s pause a moment to reflect on the nature of the structure discussed up to this point.
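Because the full wave function is hard to visualize, even a one-dimensional slice can help. Here’s a small Python sketch using the standard closed-form 1s orbital of hydrogen (a textbook result, quoted here rather than derived): it computes the radial probability density and finds that the most probable radius is about one Bohr radius.

```python
import numpy as np

A0 = 5.29177e-11     # Bohr radius, m

# Hydrogen 1s orbital (standard closed-form result, SI units):
# psi_1s(r) = (1 / sqrt(pi * a0^3)) * exp(-r / a0)
def psi_1s(r):
    return np.exp(-r / A0) / np.sqrt(np.pi * A0**3)

# Radial probability density: P(r) = 4*pi*r^2 * |psi|^2
r = np.linspace(1e-13, 5 * A0, 5000)
P = 4 * np.pi * r**2 * np.abs(psi_1s(r))**2

r_peak = r[np.argmax(P)]
print(f"most probable radius = {r_peak:.3e} m (about one Bohr radius)")
```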

Reflection on the Structure of the Wave Function

As with a sound wave, the function for an electron wave function is a solution to a differential equation, in this case the Schrödinger equation. This wave function ψ, is a function of position. In spherical coordinates of r, θ, and φ this function is ψ(r,θ,φ). In the most basic terms a function is a rule that assigns elements in a set, or a combination of elements from multiple sets, to a single element in another set. This rule imposes additional structure on relations between these sets. So in our case we have a set for all r values, a set for all θ values, a set for all φ values, and a set for all ψ values. Prior to the imposition of structure by any function we could combine elements from these sets in any way we like. In a four-dimensional (abstract) phase space or state space with axes r, θ, φ, and ψ all points are available, any ordered quadruple (r,θ,φ,ψ) is an option. That’s because an ordered triplet (r,θ,φ) can be associated with any value of ψ. There’s no rule in place limiting which values of ψ the ordered triplet (r,θ,φ) can associate with. The entire phase space is available; all states are available. But with the imposition of the function ψ(r,θ,φ) the region of permissible states conforming to this rule is significantly smaller. An ordered triplet (r,θ,φ) can be assigned to one and only one value of ψ.

It’s useful here to distinguish between logical possibility and physical possibility. In what sense are all ordered quadruples (r,θ,φ,ψ) in the state space “possible”? Most of them are not really physically possible for the electron in an atom because they would violate the laws of physics, the laws of quantum mechanics. That’s because the function, the wave function in fact is imposed. But in the theoretical case that it were not imposed, any ordered quadruple (r,θ,φ,ψ) would be logically possible; there’s no contradiction in such a combination. At least, not until we start to develop the assumptions that lead to the Schrödinger equation and its solutions. But since the actual, physical world follows physical laws only the states satisfying the function ψ(r,θ,φ) are physically possible.

This distinction between logical possibility and physical possibility highlights one, very basic source of structure: structure that arises from physical laws. Atomic orbitals are not man-made structures. There certainly are such things as man-made structures as well. But atomic orbitals are not an example of that. I say all this to justify including atomic orbitals as examples of structure in the first place, since in a physical sense they seem “already there” anyway, or as something that couldn’t be otherwise. But in light of the much more vast state space of logically possible states I think it makes sense to think of even these physically given states as highly structured when compared to the logically limitless states from which they stand apart.

I’d like to make one point of clarification here, especially considering the reputation quantum mechanics has for being something especially inexact or even anti-realist. What is it that the wave function specifies at each point in space, for each ordered triplet (r,θ,φ)? It’s certainly not the position of the electron. That indeed isn’t specified. But what is specified is the amplitude, ψ. And the square modulus of the amplitude, |ψ|^2, is the probability density for finding the electron at that position, (r,θ,φ). The wave function doesn’t specify the electron’s exact position. Does this mean that chaos reigns for the electron? The electron could, after all, be anywhere in the universe (with the exception of certain nodes). But that infinite extension of possible positions doesn’t mean that chaos reigns or that the electron isn’t bound by structure. The probability density of the electron’s position in space is very precisely defined and governs the way the electron behaves. It’s not the case that just anything goes. Certain regions of space are highly probable and most regions of space are highly improbable.

This is something of a matter of perspective and it’s a philosophical rather than scientific matter. But still just as interesting, for me at least. It pertains to the kinds of properties we should expect to see in different kinds of systems. What kinds of properties should we expect quantum systems to have? What are quantum properties? Do quantum systems have definite properties? I’ve addressed this in another episode on the podcast, drawing on the thought of Sunny Auyang. In her view there’s an important distinction to be made between classical properties and quantum properties. Even if quantum systems don’t have definite classical properties that’s not to say they don’t have properties at all. They just have properties of a different kind, properties that are more removed from the kinds of classical properties we interact with on a daily basis. We’re used to interacting with definite positions and definite momenta at our macroscopic scale of experience. At the quantum level such definite predicates are not found for position and momentum, but they are found for the position representation and momentum representation of a system’s wave function. Quoting Auyang:

“Are there predicates such that we can definitely say of a quantum system, it is such and so? Yes, the wavefunction is one. The wavefunction of a system is a definite predicate for it in the position representation. It is not the unique predicate; a predicate in the momentum representation does equally well. Quantum properties are none other than what the wavefunctions and predicates in other representations describe.” (How Is Quantum Field Theory Possible?)

I think of this as moving our perspective “up a level”, looking not at position itself but at the wave function that gives the probability amplitude, ψ, and probability density, |ψ|^2, of position. That is where we find definite values governed by the laws of physics. It’s appropriate to look at this level for these kinds of quantum systems, because of the kind of things that they are. Expecting something else from them would be to expect something from a thing that is not appropriate to expect from the kind of thing that it is.

Molecular Orbitals

Let’s move now to molecules. Molecules are groups of atoms held together by chemical bonds. This makes use of a concept discussed in the last episode that is pertinent to structure generally: that of embedding. Lower-level structures get embedded, as kinds of modules, into higher-level structures. The lower-level structures remain but their combinations make possible a huge proliferation of new kinds of structures. As we move from the level of atoms to molecules the number of possible entities will expand dramatically. There are many more kinds of molecules than there are kinds of atoms. As of 2021 there are 118 different kinds of atoms called elements. That’s impressive. But this is miniscule compared to the number of molecules that can be made from combinations and arrangements of these elements. To give an idea, the Chemical Abstracts Service, which assigns a unique CAS registry number to different chemicals, currently has a database of 177 million different chemical substances. These are just molecules that we’ve found or made. There are many more that will be made and could be made.

Electrons are again key players in the formation of molecules. The behavior of electrons, their location probability densities, and wave-like behavior, continue to be defined by mathematical wave functions and abide by the Schrödinger equation. A wave function, ψ, gives a probability amplitude and its square modulus, |ψ|^2, gives the probability of finding an electron in a given region. So many of the same principles apply. But the nature of these functions at the molecular level is more complex. In molecules the wave functions take new orbital forms. Orbitals in molecules take two new important forms: hybridized orbitals and molecular orbitals.

Hybridized orbitals are combinations of regular atomic orbitals that combine to form hybrids. So where before we had regular s-type and p-type orbitals these can combine to form hybrids such as sp3, sp2, and sp orbitals. With a carbon atom for instance, in the formation of various organic molecules, the orbitals of the valence electrons will hybridize.

Molecular orbitals are the wave functions for electrons in the chemical bonds between the atoms that make up a molecule. Molecular orbitals are formed by combining atomic orbitals or hybrid atomic orbitals from the atoms in the molecule. The wave functions for molecular orbitals don’t have analytic solutions to the Schrödinger equation so they are calculated approximately.

A methane molecule is a good example to look at. A methane molecule consists of 5 atoms: 1 carbon atom and 4 hydrogen atoms. Its chemical formula is CH4. A carbon atom has 6 electrons with 4 valence electrons that are available to participate in chemical bonds. In the case of a methane molecule these 4 valence electrons will participate in 4 bonds with 4 hydrogen atoms. In its ground state the 4 valence electrons occupy one 2s orbital and two 2p orbitals. In order to form 4 bonds there need to be 4 identical orbitals available. So the one 2s orbital and three 2p orbitals hybridize to form 4 sp3 hybrid orbitals. An sp3 orbital, as a hybrid, is a kind of mixture of an s-type and p-type orbital. The dumbbell shape of a p-orbital combines with the spherical shape of an s-orbital to form a kind of lopsided dumbbell. It’s these hybrid sp3 orbitals that then combine with the 1s orbitals of the hydrogen atoms to form molecular orbitals. In this case the type of molecular orbitals that form are called σ-bonds.

The 2s and 2p orbitals in the carbon atom can also hybridize in other ways to form two or three bonds. For example, a carbon atom can bond with 2 hydrogen atoms and 1 other carbon atom. When it does this the 2s orbital hybridizes with just 2 of the 2p orbitals to form 3 sp2 orbitals, which bond with the 2 hydrogens and the other carbon. The remaining 2p orbital combines with the other carbon atom again, to its corresponding 2p orbital. This makes two sets of orbitals combining into two molecular bonds, a σ-bond and what is called a π-bond. When a σ-bond and a π-bond form between atoms it is called a double bond. Carbon atoms can also form triple bonds in which two sp orbitals are formed from the 2s orbital and one 2p orbital. This leaves two 2p orbitals to combine with their counterparts in another carbon atom to form a triple bond, composed of 1 σ-bond and 2 π-bonds. Single bonds, double bonds, and triple bonds all have their own geometrical properties like bond angles and freedom of rotation. This has effects on the properties of the resulting molecule.

Functional Groups

σ-bonds, π-bonds, single bonds, double bonds, and triple bonds make possible several basic molecular structures called functional groups. Functional groups are specific groupings of atoms within molecules that have their own characteristic properties. What’s useful about functional groups is that they occur in larger molecules and contribute to the overall properties of the parent molecule to which they belong. There are functional groups containing just carbon, but also functional groups containing halogens, oxygen, nitrogen, sulfur, phosphorus, boron, and various metals. Some of the most common functional groups include: alkyls, alkenyls, alkynyls, and phenyls (which contain just carbon); fluoros, chloros, and bromos (which contain halogens); hydroxyls, carbonyls, carboxyls, and ethers (which contain oxygen); carboxamides and amines (which contain nitrogen); sulfhydryls and sulfides (which contain sulfur); phosphates (which contain phosphorus); and so forth.

Repeatable Units

The last subject I’d like to address with all this is the role of repeatable units in the formation of complex chemical structures. Let’s come at this from a different direction, starting at the scale of a complex molecule and work our way down. One of the most complex, sophisticated kinds of molecules is a protein. Proteins are huge by molecular standards. Cytochrome c, for example, has a molecular weight of about 12,000 daltons. (For comparison, methane, discussed previously, has a molecular weight of 16 daltons). What we find with such molecules is that they are highly reducible to a limited number of repeatable units. But we could imagine it being otherwise; a macromolecule being irreducible from its overall macrostructure and not having any discernible repeating components. Let’s imagine a hypothetical, counterfactual case in which a macromolecule of that size is just a chaotic lump. Imagine going to a landfill and gathering a bunch of trash from a heap with all sorts of stuff in it, gathering it all together, rolling it into a ball, and binding it with hundreds of types of unmixed adhesives. Any spatial region or voxel of that lump would have different things in it. You might find some cans and wrappers in one part, computer components in another, shredded office papers in another, etc. We could imagine a macromolecule looking like that; a completely heterogeneous assembly. We could imagine further such a heterogeneous macromolecule being able to perform the kinds of functions that proteins perform. Proteins can in fact be functionally redundant; there’s more than one way to make a protein that performs a given function. So we might imagine a maximally heterogeneous macromolecule that is able to perform all the functions that actually existing proteins perform. But this kind of maximal heterogeneity is not what we see in proteins.

Instead, proteins are composed of just 20 repeatable units, a kind of protein-forming alphabet. These are amino acids. All the diversity we see in protein structure and function comes from different arrangements of these 20 amino acids. Why would proteins be limited to such a small number of basic components? The main reason is that proteins have to be put together and before that they have to be encoded. And it’s much more tractable to build from and encode a smaller number of basic units, as long as it gives you the structural functionality that you’ll need in the final macrostructure. It might be possible in principle to build a macromolecule without such a limited number of repeatable units. But it would never happen. The process to build such a macromolecule would be intractable.

This is an example of a general principle I’d like to highlight that we find in chemistry and in structure generally. And it’s related to embedding. But it’s a slightly different aspect of it. Complex, high-level structures are composed by the embedding of lower-level structures. And the higher-level structures make use of a limited number of lower-level structures that get embedded repeatedly.

In the case of a protein, the protein is the higher-level structure. Amino acids are the lower-level structures. The structures of the amino acids are embedded into the structure of the protein. And the higher-level structure of the protein uses only a limited number of lower-level amino acid structures.

A comparison to writing systems comes to mind here. It’s possible to represent spoken words in written form in various ways. For example, we can give each word its own character. That would take a lot of characters, several hundred and into the thousands. And such a writing system takes several years to learn to use with any competence. But it’s also possible to limit the number of characters in a writing system by using the same characters for sound units common to all words, like syllables or phonemes. Many alphabets, for example, have only between 20 and 30 characters. And it’s possible to learn to use an alphabet fairly quickly. And here’s the key: there’s no functional representational loss from using such a limited number of characters. The representational “space” is the same. It’s just represented using a much smaller set of basic components.

Biochemists mark out four orders of protein structure: primary, secondary, tertiary, and quaternary. And this is a perfect illustration of structural embedding.

The primary structure of a protein is its amino acid sequence. The primary structure is conceptually linear since there’s no branching. So you can “spell” out a protein’s primary structure using an amino acid alphabet, one amino acid after another. Like, MGDVEK: methionine, glycine, aspartic acid, valine, glutamic acid, and lysine. Those are the first 6 amino acids in the sequence for human cytochrome c. What’s interesting about amino acids is that they have different functional groups that give them properties that will contribute to the functionality of the protein. We might think of this as a zeroth-level protein structure (though I don’t know of anyone calling it that). Every amino acid has a carboxyl group and an amino group. That’s the same in all of them. But they each have their own side chain or R-group in addition to that. And these can be classified by properties like polarity, charge, and other functional groups they contain. For example, methionine is sulfur-containing, nonpolar, and neutral; asparagine is an amide, polar, and neutral; phenylalanine is aromatic, nonpolar, and neutral; lysine is basic, polar, and positively charged. These are important properties that contribute to a protein’s higher-level structure.
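To make the “spelling” idea concrete, here is a minimal Python sketch (my own, not from the source); the dictionary below covers only the six residues just named, whereas the standard one-letter code of course has entries for all 20 amino acids:

```python
# "Spelling" a primary structure with one-letter amino acid codes.
# Only the six residues mentioned above are included in this sketch.
ONE_LETTER = {
    "M": "methionine",
    "G": "glycine",
    "D": "aspartic acid",
    "V": "valine",
    "E": "glutamic acid",
    "K": "lysine",
}

primary_structure = "MGDVEK"  # first six residues of human cytochrome c
for code in primary_structure:
    print(code, "->", ONE_LETTER[code])
```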

The secondary structure of a protein consists of three-dimensional, local structural elements. The interesting thing about secondary structures in the context of embedding and repeatable units is that these local structures take common shapes that occur all the time in protein formation. The two most important structural elements are alpha helices and beta sheets. Covalent bonds occur only between the amino acid units of the primary structure, but in the higher-level structures the electrostatic forces arising from differences in charge distribution throughout the primary structure make certain regions of the primary structure attracted to each other. These kinds of attractions are called hydrogen bonds, in which a hydrogen atom bound to a more electronegative atom or group is attracted to another electronegative atom bearing a lone pair of electrons. In the case of amino acids such hydrogen bonding occurs between the amide hydrogen and carbonyl oxygen atoms in the peptide backbone.

In an alpha helix these hydrogen bonds form in a way that makes the amino acid chain wrap around in a helical shape. In a beta sheet strands of amino acids extend linearly for some length and then turn back on themselves, with the new strand segment extending backward and forming hydrogen bonds with the previous strand. These hydrogen-bonded strands of amino acids then form planar, sheet-like structures. What’s interesting is that these kinds of secondary structures are very common and get used repeatedly, much like amino acids get used repeatedly in primary structures. Secondary structures, like alpha helices and beta sheets (among others), then get embedded in even higher-level structures.

The tertiary structure of a protein is its full three-dimensional structure, incorporating all the lower-level structures. Tertiary structures are often represented using coils for the alpha helix components and thick arrows for the beta sheet components. The way a protein folds in three-dimensional space is determined by the properties of its lower-level structures all the way down to the functional groups of the amino acids. Recall that the different amino acids can be polar or nonpolar. This is really important because proteins reside in aqueous environments with highly polar water molecules. Nonpolar groups are said to be hydrophobic because arrangements that minimize the surface area of contact between nonpolar groups and polar water molecules are entropically favored. Because of this, polar and nonpolar molecules will appear to repel each other, a hydrophobic effect. Think of the separation of oil and water as an example. Water is polar and oil is nonpolar. This is the same effect occurring at the scale of individual functional group units in the protein. Proteins can fold in such a way as to minimize the surface area of nonpolar functional groups exposed to water molecules. One way this can happen is that nonpolar amino acid sections fold over onto each other so that they interact with each other, rather than with water molecules, and so that water molecules can interact with each other rather than with the nonpolar amino acid sections. These same kinds of effects, driven by the properties of functional groups, were also the ones bringing about the secondary structures of alpha helices and beta sheets.
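To connect the phrase “entropically favored” to a formula (my own gloss, not something spelled out here): whether a process is favored is governed by the change in Gibbs free energy, ΔG = ΔH − TΔS, and a process is favored when ΔG is negative. When nonpolar groups cluster together, they release water molecules that would otherwise have to arrange themselves into relatively ordered shells around those groups. Freed water has more available arrangements, so ΔS for the water is positive, the −TΔS term pulls ΔG down, and the folded arrangement that buries the nonpolar groups wins out. The apparent “repulsion” between oil and water is really the water maximizing its own entropy.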

Some proteins also have a quaternary structure in which multiple folded protein subunits come together to form a multi-subunit complex. Hemoglobin is an example of this. Hemoglobin is made up of four subunits: two alpha subunits and two beta subunits.

There’s a pattern here of structure building upon structure. But it does so with a limited set of repeatable structures. I’d like to address this again: why should proteins be built out of only 20 amino acid building blocks? Certainly there could be (at least in theory) a macromolecule that has similar functionality and global structure, using the same functional group properties to get it to fold and form in the needed way, without the use of repeatable lower-level structural units. But that’s not what we see. Why? One important reason is that proteins need to be encoded.

Proteins are made from genes. Genes are sections of DNA that get transcribed into RNA, which then gets translated into proteins. That’s a gene’s primary function: to encode proteins. DNA and RNA have further simplified components, only four types of nucleotides in each: guanine, adenine, cytosine, and thymine in DNA, and guanine, adenine, cytosine, and uracil in RNA. These nucleotides have to match up with the proteins that they encode, and it’s going to be very difficult to do that without dividing up the protein into units that can be encoded in a systematic way. There’s a complex biochemical process for translating an RNA nucleotide sequence into a protein. But since these are, at bottom, automatic chemical processes, they have to proceed in systematic, repeatable ways. A macromolecule can’t have an entire intracellular biochemical system dedicated to that macromolecule alone. For one thing, there are too many proteins for that. The same biochemical machinery for translation has to be able to make any protein. So all proteins have to be made up of the same basic units.

The way this works in translation is that molecules called transfer RNA (tRNA) are dedicated to specific combinations of the 4 basic RNA nucleotides. These combinations are called codons. A codon is a combination of 3 nucleotides. Since there are 4 kinds of nucleotides and each codon has 3, there are 4³, or 64, possible combinations. Different codons correspond to different amino acids. Since they only code for 20 amino acids there is obviously some redundancy, also called degeneracy (which isn’t meant to be insulting, by the way). The way that codons get translated into amino acids is that the tRNA molecules that match the nucleotide sequences of the various codons in the RNA also carry their encoded amino acids. These tRNA molecules come together at the site of translation, the ribosome, and link the amino acids together into the chains that form the primary structure of the protein. This is just a part of the biochemical machinery of the process. What’s important to note here is that although there are a number of tRNA types it’s not unmanageable. There are at most 64 possible codon sequences. So there doesn’t have to be a unique set of translation machinery dedicated to each and every kind of protein, which would be insane. The components only have to be dedicated to codon sequences and amino acids, which are much more manageable.
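The codon arithmetic is simple enough to check directly. Here’s a minimal Python sketch (my own illustration) that enumerates the codon space:

```python
# Enumerate all possible codons: 4 RNA nucleotides taken 3 at a time.
from itertools import product

NUCLEOTIDES = "GACU"  # guanine, adenine, cytosine, uracil

codons = ["".join(triplet) for triplet in product(NUCLEOTIDES, repeat=3)]
print(len(codons))   # 64, i.e. 4**3
print(codons[:5])    # ['GGG', 'GGA', 'GGC', 'GGU', 'GAG']

# With only 20 amino acids (plus stop signals) to encode, 64 codons means
# several codons map to the same amino acid -- the redundancy, or
# "degeneracy", mentioned above. AUG -> methionine is one familiar assignment.
```

Sixty-four codon types is a manageable parts list for the translation machinery, which is exactly the point: the machinery is dedicated to codons and amino acids, not to individual proteins.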

Key Takeaways

I’d like to summarize the foregoing with 4 key takeaways from this analysis of structure in chemistry that I think apply to a general philosophy of structure.

1. Structure can be modeled using functions

Recall that a function is a relation between sets that associates an element in one set or combination of elements from multiple sets to exactly one element in another set. The source sets are called domains and the target sets they map onto are called codomains. One example of a function we’ve looked at in both the previous episode on music and in this episode on chemistry is the waveform function. In chemistry mathematical functions called orbitals assign to each point in space (the domain) an amplitude value (the codomain).

2. Functions occupy only a small portion of a phase space

Functions, by nature, impose limitations. A relation that associates an element in a domain with more than one element in a codomain would not be a function. A function associates each domain element with only one codomain element. In this way functions are very orderly. To give an example, in an orbital a given point in space (a domain element) can have only one amplitude value (the codomain element). This is highly limited. To illustrate this, imagine a phase space of all possible combinations of domain and codomain values. Or to give a simpler comparison, imagine a linear function on an x-y plane; for example, the function y = x. This is a straight line at a 45-degree angle to the x and y axes. The straight line is the function. But the phase space is the entire plane. The plane contains all possible combinations of x and y values. But the function is restricted to only those points where y = x. A similar principle applies to orbitals. The corresponding phase space would be not a plane but a 4-dimensional hyperspace with axes r, θ, φ, and ψ. The phase space is the entire hyperspace. But the wave function, or orbital, is restricted to a 3-dimensional space within this 4-dimensional hyperspace. This kind of restriction of functions to small portions of phase spaces is a characteristic feature of structure generally.
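To make the “small portion” claim concrete, here’s a minimal Python sketch (my own, using an arbitrary 1,000-point grid): discretize the plane and count how many of its points the function y = x actually occupies.

```python
# Discretized version of the y = x example: the phase space is every (x, y)
# pair on an N-by-N grid; the function occupies only the diagonal.
N = 1_000                               # grid points per axis (arbitrary)

phase_space_points = N * N              # all (x, y) combinations: 1,000,000
function_points = N                     # points where y == x:       1,000

print(function_points / phase_space_points)  # 0.001 -- a 0.1% sliver
```

And the sliver only gets thinner as the grid gets finer, which is the sense in which a function occupies a vanishingly small portion of its phase space.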

3. Structural embedding

Embedding is a feature of structure that came up in music and has come up again, in even more obvious form, in chemistry. Just looking at proteins, the different orders of structure are quite obvious and well known to biochemists, with their conceptual division of proteins into primary, secondary, tertiary, and quaternary structures, each level of structure incorporating the lower-level structures embedded into it. Using proteins as an example, even primary structures have embedded within them several layers of additional structure, such as functional groups, molecular orbitals, atomic orbitals, and the general structure of the wave function itself. One key feature of such embedding is that the properties and functionality of the lower-level structures are taken up and integrated into the higher-level structures into which they are embedded. We saw, for example, how the three-dimensional tertiary structure of a protein takes the form that it does because of the properties of functional groups in the side chains of individual amino acids, in particular polarity and nonpolarity.

4. Repeatable units

A final key takeaway is the use of repeatable units in the process of structural embedding. In retrospect this is certainly something that applies to music as well: we see repeatable units in the form of pitches and notes. In chemistry we see repeatable units in macromolecules like polymers and proteins. Polymers like polyethylene, PVC, ABS, and polyester certainly use repeatable units; in some cases a single repeating unit, in others two or three. Proteins make use of more repeatable units, but even there they make use of a limited number: 20 amino acids. We see here an important general principle of structure: high-level structures tend to be composed through the repeated use of a limited number of lower-level structures rather than by forming as a single, bulk, irreducible macrostructure. The use of lower-level repeatable units in the higher-level structure facilitates the encoding and construction of high-level structures.

And that wraps up this study of structure in chemistry. Thank you for listening.