An infinite series is a sum of infinitely many numbers or terms, related in a given way and listed in a given order. They can be used to calculate the values of irrational numbers, like pi, out to trillions of decimal places. Or to calculate values of trigonometric and exponential functions. And of greatest interest, they can be used to see non-obvious relations between different areas of mathematics like integers, fractions, irrational numbers, complex numbers, geometry, trigonometric functions, and exponential functions.
I’d like to start things off with a joke. A math joke.
An infinite number of mathematicians walk into a bar. The first orders a beer, the second orders half a beer, the third orders a quarter of a beer, and so on. After the seventh order, the bartender pours two beers and says, “You fellas ought to know your limits.”
OK, I tried to start things off in a cute way. Anyway, the joke is that if they keep following that pattern to infinity the total approaches the equivalent of two beers. That is the limit of the series. In mathematical terms this is an infinite series: a sum of infinitely many terms. The series in the joke is:
1 + 1/2 + 1/4 + 1/8 + 1/16 + … = 2
Each term in the series is half the previous term. And if you continue this out to infinity (whatever that means) it ends up adding up to 2.
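If you want to watch this happen, here’s a small Python sketch (just my own illustration) that prints the partial sums creeping up toward 2:

total = 0.0
for n in range(20):       # the first 20 "orders of beer"
    total += 1 / 2**n     # each term is half the previous one
    print(n + 1, total)   # the running total approaches 2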
Infinite series can be either convergent or divergent. The series just mentioned is convergent because it adds up to a finite number. But others end up blowing up to infinity. And those are divergent.
Convergent series fascinate me. Their aesthetics remind me of cyclopean masonry, famous among the Inca, where all the stone pieces fit together perfectly, as if part of a single block. These series reveal infinite structure within numbers.
I’d like to share some of my favorite examples of infinite series. The previously mentioned series adding up to 2 is interesting. It shows infinite structure within this simple integer, 2. But I’m especially interested in infinite series for irrational numbers. Irrational numbers, like pi, e, and logarithms, have infinite, non-repeating decimal places. How can we find the values for all these digits? This is where infinite series become extremely useful.
Pi, approximately 3.14159, is the ratio of circumference to diameter. How can we find its value? Maybe make a really big circle and keep on making measurements with more and more accurate tape measures? No, that won’t go very far. Fortunately, there are infinite series that add up to pi or ratios of pi. And what’s fascinating is that these series seem to have no obvious relation to circles, diameters, or circumferences. Here are some of those series:
1/1^2 + 1/2^2 + 1/3^2 + 1/4^2 + … = π^2/6
1/1 – 1/3 + 1/5 – 1/7 + 1/9 – … = π/4
You look at something like that and wonder. What on Earth is pi doing there? Where did that come from? Irrational numbers, by definition, cannot be expressed as fractions. But they can be expressed as infinite sums of fractions. You can get an irrational number like pi just by adding up these simple fractions. At least, by adding them up an infinite number of times. Of course, we can’t actually do that. But we can still add up many, many terms. And with computers we can add up millions of terms to get millions of digits. But some series converge on the accurate value, to a given number of digits, more quickly than others. If we’re actually trying to get as many digits as possible as quickly as possible we’ll want a quickly converging series. A great example of such a modern series for pi is the Ramanujan series:
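1/π = (2√2/9801) * Σ (from k = 0 to ∞) of (4k)!(1103 + 26390k) / ((k!)^4 * 396^(4k))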
Each successive term of this series yields roughly eight further decimal places of pi. Extremely useful.
But back to the earlier question. What is pi doing here? How do these sums, that don’t seem to have anything to do with the geometry of circles, spit out pi?
Let’s just look at that series that adds up to pi/4. This is known as the Leibniz formula. pi/4 is the value of arctan(1), a trigonometric function. In calculus the derivative of arctan(x) is 1/(1+x^2). 1/(1+x^2) can also be represented by a power series of the form:
1/(1+x^2) = 1 – x^2 + x^4 – x^6 + …
Since this is the derivative of arctan(x) we can integrate it and get a series that is equivalent to arctan(x).
arctan(x) = x – x^3/3 + x^5/5 – x^7/7 + …
This is quite useful. Since it’s a function we can plug in different values for x and get the result to whatever accuracy we want by expanding out the series as many terms as we want. To get the Leibniz formula we plug in 1. So arctan(1) is:
1/1 – 1/3 + 1/5 – 1/7 + 1/9 – … = arctan(1)
And we already know that arctan(1) is equal to pi/4. So this gives us a way to calculate the value of pi/4 and, by just multiplying the result by 4, the value of pi. We can calculate pi as accurately as we want by expanding the series out as many terms as we want to. Though in the case of pi, we’ll do this with a faster converging series like the Ramanujan series. But I picked the Leibniz series as an example because it’s easiest to show why it converges on pi, or specifically pi/4. You can see a little bit here how different areas of mathematics overlap: geometry relating to trigonometry, calculus, and infinite series. Steven Strogatz has made the point that infinite series are actually a great way to see the unity of all mathematics.
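To make this concrete, here’s a minimal Python sketch (my own, with arbitrary names) that sums the Leibniz series term by term and multiplies by 4:

import math

total = 0.0
for k in range(1_000_000):
    total += (-1)**k / (2*k + 1)   # 1 - 1/3 + 1/5 - 1/7 + ...
approx_pi = 4 * total
print(approx_pi, abs(approx_pi - math.pi))  # still only ~6 correct digits after a million terms

The slow convergence you see here is exactly why a series like Ramanujan’s is preferred for serious digit hunting.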
So that’s pi. Let’s look at some other irrational numbers that can be calculated by series.
To calculate e, approximately 2.71828, we can use the following series:
1/0! + 1/1! + 1/2! + 1/3! + 1/4! + … = e
Or to calculate ln(2), approximately 0.693, we can use the following series:
1/1 – 1/2 + 1/3 – 1/4 + … = ln(2)
I find these similarly elegant in their simplicity. We’re just adding up fractions of integers and converging to irrational numbers. I think that’s remarkable.
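Both are easy to check numerically; here’s a quick Python sketch (again just my own illustration):

import math

e_approx = sum(1 / math.factorial(n) for n in range(20))        # 1/0! + 1/1! + 1/2! + ...
ln2_approx = sum((-1)**(n + 1) / n for n in range(1, 100_000))  # 1 - 1/2 + 1/3 - ...

print(e_approx, math.e)         # the factorial series matches e almost immediately
print(ln2_approx, math.log(2))  # the ln(2) series converges much more slowly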
We saw earlier with the arctan function that it’s also possible to write a series not only with numbers but with variables so that the series becomes a function where we can plug in different numbers. There are series for sine and cosine functions in trigonometry. That’s good because not all outputs from these functions are “nice” values that we can figure out using other principles of geometry, like the Pythagorean Theorem. What are some of these “nice” values? We’ll use radians, where pi/2 radians equals 90 degrees.
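For example: sin(π/6) = 1/2, sin(π/4) = √2/2, and sin(π/3) = √3/2; and correspondingly cos(π/6) = √3/2, cos(π/4) = √2/2, and cos(π/3) = 1/2.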
These are angles of special triangles with angles of 45, 30, 60 degrees, where the ratios of the different side lengths work out to ratios that we can figure out using the Pythagorean Theorem. But these are special cases. If we want to calculate a trigonometric function for other values we need some other method.
Fortunately, there are infinite series for these functions. The infinite series for the sine and cosine functions are:
sin(x) = x – x^3/3! + x^5/5! – x^7/7! + …
cos(x) = 1 – x^2/2! + x^4/4! – x^6/6! + …
Again, we have this kind of surprising result where we get trigonometric functions just from the sum of fractions of integers and factorials, which don’t seemingly have much to do with each other. Where is this coming from?
These trigonometric functions are infinitely differentiable. You can take the derivative of a sine function over and over again and no matter how many times you do it the result will be either a sine function or a cosine function. Same for the cosine function. They just keep circling back on themselves when we differentiate them. These series come from their Taylor series representations, or specifically their Maclaurin series representations. The Maclaurin series for a function f(x) is:
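f(x) = f(0) + f'(0)x + f''(0)x^2/2! + f'''(0)x^3/3! + …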
What happens if we apply this to sin(x)? Let’s take repeated derivatives of sin(x):
First derivative: cos(x)
Second derivative: -sin(x)
Third derivative: -cos(x)
Fourth derivative: sin(x)
So by the fourth derivative we’re back where we started. And so on. What are the values of these derivatives at 0?
sin(0) = 0
cos(0) = 1
-sin(0) = 0
-cos(0) = -1
So the terms with the second and fourth derivatives will go to 0 and disappear. The remaining terms will alternate between positive and negative. In the case of sin(x) each term in the resulting series will have x raised to odd integers divided by odd factorials, with alternating signs. The result being:
sin(x) = x – x^3/3! + x^5/5! – x^7/7! + …
And for cos(x) it will be similar but with x raised to even integers divided by even factorials, with alternating signs. The result being:
cos(x) = 1 – x^2/2! + x^4/4! – x^6/6! + …
Using these series we can now calculate sine and cosine for any value. Not just the special angles of “nice” triangles.
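Here’s a small Python sketch (my own, for illustration) showing that a handful of terms already agrees closely with the library functions:

import math

def sin_series(x, terms=10):
    # x - x^3/3! + x^5/5! - ...
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

def cos_series(x, terms=10):
    # 1 - x^2/2! + x^4/4! - ...
    return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(terms))

x = 1.0  # one radian, not one of the "nice" angles
print(sin_series(x), math.sin(x))
print(cos_series(x), math.cos(x))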
The Maclaurin series also gives an infinite series for another important infinitely differentiable function: the exponential function e^x. The derivative of e^x is just itself, e^x, forever and ever. So in this case the Maclaurin series is quite simple. No skipped terms or alternating signs.
e^x = 1 + x + x^2/2! + x^3/3! + …
We already saw the value of this series at x = 1, which is simply the number e.
These three series – for sin(x), cos(x), and e^x – allow us to see an interesting relation between exponential functions and trigonometric functions. The series for e^x has all the terms from the series for both sin(x) and cos(x). But in the series for e^x all the terms are positive. Is there a way to combine these three? Yes, there is. And it will connect it all to another area of mathematics: complex numbers. Complex numbers include the imaginary number i, which is defined as the square root of -1. The number i has the following properties:
i^2 = -1
i^3 = -i
i^4 = 1
i^5 = i
And the cycle repeats from there. It turns out that if we plug ix into the series for e^x all the positive and negative signs work out to match those of the series for cos(x) and i*sin(x). With the result that:
e^(ix) = cos(x) + i * sin(x)
To make things really interesting let’s also bring pi into this and substitute pi for x. In that case, cos(π) = -1 and sin(π) = 0. So we get the equation:
e^(iπ) + 1 = 0
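This is easy to check numerically too; a tiny Python sketch (my own) using complex arithmetic:

import cmath, math

x = 0.7  # an arbitrary angle in radians
print(cmath.exp(1j * x))                  # e^(ix)
print(complex(math.cos(x), math.sin(x)))  # cos(x) + i*sin(x), the same number

print(cmath.exp(1j * math.pi) + 1)        # essentially zero, up to floating-point rounding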
Steven Strogatz said of this result:
“It connects a handful of the most celebrated numbers in mathematics: 0, 1, π, i and e. Each symbolizes an entire branch of math, and in that way the equation can be seen as a glorious confluence, a testament to the unity of math. Zero represents nothingness, the void, and yet it is not the absence of number — it is the number that makes our whole system of writing numbers possible. Then there’s 1, the unit, the beginning, the bedrock of counting and numbers and, by extension, all of elementary school math. Next comes π, the symbol of circles and perfection, yet with a mysterious dark side, hinting at infinity in the cryptic pattern of its digits, never-ending, inscrutable. There’s i, the imaginary number, an icon of algebra, embodying the leaps of creative imagination that allowed number to break the shackles of mere magnitude. And finally e, the mascot of calculus, a symbol of motion and change.”
He also said, speaking of infinite series generally:
“The most compelling reason for learning about infinite series (or so I tell my students) is that they’re stunning connectors. They reveal ties between different areas of mathematics, unexpected links between everything that came before. It’s only when you get to this part of calculus that the true structure of math — all of math — finally starts to emerge.”
I think we can see that effect in some of the relations between some of my favorite infinite series that I’ve shared here.
Having looked at all this I’d like to make a couple philosophical observations.
One is on the possibility of objectivity in mathematics, mathematical realism, or mathematical platonism. Infinite series enable us to calculate digits for irrational numbers, which have infinite digits. We find the values millions and billions of decimal places out and we will always be able to keep going. Last I checked, as of 2021 pi had been calculated out to 62.8 trillion digits. What of the next 100 trillion digits? Or the next quadrillion digits? Well, I think that they are already there waiting to be calculated, whether we end up ever calculating them or not. And they always have been. Those 62.8 trillion digits that we’ve calculated so far have been there since the days of the dinosaurs and since the Big Bang. There’s a philosophical question of whether mathematical conclusions are discovered or created. You can tell I believe they’re discovered. And part of the reason for that is because of these kinds of calculations with infinite series. No matter how deep into infinity you go there’s always more there. And you don’t know what’s there until you do the calculations. You can’t decide for yourself what’s there. You have to do the work to find out and get the right answer. Roger Penrose had a similar line of thinking with the infinite structure of the Mandelbrot set.
Now, I do think there’s a certain degree of human activity in the process. Like in deciding what kinds of questions to ask. For example, geometry looks different whether you’re working in Euclidean, hyperbolic, or elliptic geometry. Answers depend on assumptions and conditions. I like a line that I heard from Alex Kontorovich: “The questions that are being asked are an invention. The answers are a discovery.”
The other philosophical question is: What does it actually mean to say that an infinite sum equals a certain value or converges to a certain value? We can never actually add up infinite terms. Nevertheless, we can see and sometimes even prove where a convergent series is headed. And this is where that concept of limits comes up. I don’t know how to answer that question. There are different ways to interpret that. Presently, the way I’m inclined to put it is this: The limits of infinite series are values toward which series tend. They never actually reach them because infinity is not actual. But the tendency of an infinite series is real, such that, as you continue to add up more terms in the series the sum will continue to get closer to the value of convergence.
This episode in the philosophy of structure series looks at the way structure inheres in material substances. Starting with Aristotle’s concept of hylomorphism (matter + form) we look at contemporary studies of this perspective in the thought of Kathrin Koslicki and Edward Feser. Most basic to this perspective is the idea of structures as the sorts of entities which make available positions or “slots” for other objects to occupy.
It’s been a while since I’ve done an episode for the philosophy of structure series. My interests and reading have taken some other directions over the last few months, dealing especially with theology. But they’ve circled back to the philosophy of structure, by way of theology interestingly enough, particularly in the scholastic theology of Thomas Aquinas. I’ve been wanting to get into some ideas in Aristotelian and Thomistic thought, but in order to do that I feel like there’s some groundwork I want to lay down first, that happens to pass through the philosophy of structure.
This discussion of structure will be a little more general; not as particular as the applications in music and chemistry discussed in previous episodes but more like the first episode on structure as such. But I’ll still refer to some examples in chemistry.
In addition to pre-modern philosophers like Aristotle and Aquinas, some living philosophers I’ve been reading recently are Edward Feser and Kathrin Koslicki. Feser in his books Scholastic Metaphysics: A Contemporary Introduction and Aristotle’s Revenge: The Metaphysical Foundations of Physical and Biological Science. And Koslicki in her book The Structure of Objects. Both Feser and Koslicki argue for a hylomorphic model of material substances. Hylomorphism, from the Greek words ὕλη, hyle, “matter”, and μορφή, morphē, “form”, is the view that physical objects are products of both matter and form. It’s also the view promoted by Aristotle and Aquinas. I take a note from Koslicki and also use the term “structure” to refer to the classical notion of form.
Koslicki gives the following definition of structure:
“Structures are precisely the sorts of entities which make available positions or places for other objects to occupy, provided that these occupants satisfy the type restrictions imposed by the structure on the positions in question; as a result of occupying these positions, the objects in question will exhibit a particular configuration or arrangement imposed on them by the structure.”
We can see the hylomorphic understanding at work here. The formal, structural component of an object is a set of open “slots” that stand in defined relations to each other. Or put another way, it’s the defined relations between these open slots that constitute the structure. But since this is hylomorphism and not just “morphism” these slots need to be filled to produce the substance. That’s the material component.
This is analogous, I think, to functions with variables. Like mathematical functions or functions in a computer program. In a linear function like y=mx+b, where m and b are constants, you can see on a graph all the different values of the dependent variable y that correspond to all the different values of the independent variable x. You can substitute any number for x. That’s what makes it a variable. Or in a computer program you can define characters as variables, basically saying, “I don’t want to say exactly what this is now but I want to be able to insert values into it later.” Variables are, by nature, not fixed. They can have different values or be occupied by different objects. The function gives the general structure. But to get the output of a function you need to input specific values or objects for the variables. Similarly, for material substances structures can host various kinds of material components.
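A toy Python sketch of that idea (the names here are just made up for illustration):

def line(x, m=2, b=1):
    # the function fixes the structure y = m*x + b;
    # x is an open "slot" that different values can occupy
    return m * x + b

print(line(0), line(3), line(10))  # same structure, different occupants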
Chemical compounds are an instructive example of this kind of structural-material complex. For example, one kind of chemical structure is tetrahedral molecular geometry, in which a central atom is surrounded by four substituents located at the corners of a tetrahedron. We could think of this as a structure with five open slots, which can be filled with different kinds of atoms. With a carbon in the center slot and hydrogens in each of the four corners the resulting compound is methane. With chlorine in the middle and oxygen in the corners it is perchlorate. With sulfur in the middle and oxygen in the corners it is sulfate. With phosphorus in the middle and oxygen in the corners it is phosphate. In all these cases the molecules have the same geometry and bond angles of 109.5°. It’s also possible to have a third kind of atom in the molecule, as with thiazyl trifluoride, in which the center atom is sulfur and the corners are occupied by three fluorines and one nitrogen. The takeaway from all this is that with this particular formal component, the structure of the tetrahedral molecular geometry, we have these open slots that can be filled by various kinds of atoms. But it’s the combination of this structure, plus the atoms, i.e. the material component, that makes the physical chemical compound. That’s the hylomorphic description.
Those are examples of a single formal/structural component with varying material elements. But it can also work the other way, with various chemical compounds having common material elements but varying structure.
Koslicki says the following:
“In chemistry, the notion of structure is employed in the following two central ways: the chemical structure of a compound is given by stating (i) the types of constituents of which it consists, i.e. its formula; as well as (ii) the spatial (i.e., geometrical or topological) configuration exhibited by these constituents… the three-dimensional arrangement into which these constituents enter, is equally crucial in characterizing the chemically relevant behavior of a compound. This became apparent in the history of chemistry in connection with the phenomenon of isomers or chiral (“handed”) molecules, compounds which consist of the same constituents, i.e. have the same chemical formula, but whose constituents are differently arranged and which, as a result of this difference in arrangement, behave quite differently in specific circumstances.”
As an example, there are three different chemical compounds with the basic formula C3H4: three carbons and four hydrogens. There’s propadiene, which has a rigid, linear structure with two double bonds and the pairs of hydrogen atoms at each end on planes at right angles to each other. There’s propyne, which has a rigid triple bond and a freely rotating single bond. And there’s cyclopropene, which has a double bond and a ring structure. In this case all the material components are the same but the open slots that they occupy are arranged differently. So the overall structural-material complex is distinct in each case. Again, this is understood most comprehensively from a hylomorphic description.
One reason both material and structural components of a substance are important is that both contribute to its properties. For molecules that have a common structure it makes a difference what atoms fill the available slots. For one thing, different atoms have different masses so a compound with more massive atoms filling those positions is going to be a more massive compound overall. Xenon tetroxide is much more massive than methane, having molecular weights of 195 g/mol and 16 g/mol, respectively, even though they have the same tetrahedral structure. For this same reason xenon tetroxide also has a higher boiling point than methane, 0 and -161.5 degrees Celsius, respectively. Then for isomers, molecules that have the same atoms but different structure, those structural differences can impart important differences in properties. For example, several compounds have the formula C3H6O: three carbons, six hydrogens, and one oxygen. But two are alcohols (allyl alcohol and cyclopropanol), one is an aldehyde (propionaldehyde), one is a ketone (acetone), one is an epoxide (propylene oxide), and one is an ether (methyl vinyl ether). These have boiling points ranging from 6 degrees Celsius for methyl vinyl ether to 101 degrees Celsius for cyclopropanol; quite a range. And all these isomers have slightly different heats of combustion, meaning they release different amounts of heat when they burn.
An important consideration with all of this is that the structural and material components of a substance also have explanatory power regarding the nature of that substance. It’s not just that we know from experiments that certain substances have different boiling points, heats of combustion, dipole moments, acidities, vapor pressures, and such, though we certainly do obtain such information from experiments. Knowledge about the structural and material components of substances also helps us understand why they have the properties that they do. That’s what we’re really after in science anyway. We’re not just trying to make a giant catalogue of properties. We want to understand the underlying reasons for things.
Beyond particular kinds of substances we’re also interested in the laws of nature that govern the behavior of many or all kinds of substances generally. In a way laws of nature are an even higher order of structure than that of the structural component of substance in the structural-material complex. In the structure of a substance there are open slots that can be filled with material elements that meet the structural requirements. Laws of nature are similarly structural but instead of slots being filled by material elements they are filled by events and conditions. These also behave like functions, associating elements between sets, like inputs to outputs. The outputs are the things that actually occur. But the underlying reasons are represented in the function itself. In the case of physical sciences the natures and propensities of substances are expressed in the laws of nature. It is these laws that dictate what will result from a given set of inputs. We can’t observe these laws directly. We can only observe events. But given large sets of inputs and outputs we can try to figure out what the underlying laws, natures, and propensities must be.
At some point such repeated, higher-order abstraction moves beyond scientific practice itself to something more meta-scientific, reflection on the nature of the physical sciences as such. And this is intrinsically metaphysical. Metaphysics doesn’t replace physics but it can give deeper understanding of it. Many scientists are also metaphysicians, though they may not use that label. Aristotle distinguished between what he called experience and art. I think of these as corresponding to data and theory, physics and metaphysics. Here’s Aristotle in his Metaphysics:
“All men by nature desire to know… But yet we think that knowledge and understanding belong to art rather than to experience, and we suppose artists to be wiser than men of experience (which implies that Wisdom depends in all cases rather on knowledge); and this because the former know the cause, but the latter do not. For men of experience know that the thing is so, but do not know why, while the others know the ‘why’ and the cause. Again, we do not regard any of the senses as Wisdom; yet surely these give the most authoritative knowledge of particulars. But they do not tell us the ‘why’ of anything-e.g. why fire is hot; they only say that it is hot… Since we are seeking this knowledge, we must inquire of what kind are the causes and the principles, the knowledge of which is Wisdom.”
This was Aristotle’s justification for metaphysics. In contrast to this, Auguste Comte, a nineteenth century positivist, saw such metaphysics as something to overcome. He saw history progressing in three stages: theological, metaphysical, and positivistic. Each successive stage would shed the extraneous baggage of the former. In his view, even though educated people of his day had shed the superstitions of religion they still retained ideas of abstractions and invisible forces like gravity and magnetism that looked beyond the bare positive facts of the material world. Comte believed that metaphysics would eventually die out and we’d be left with only empirical data, just what happens, without making any kind of metaphysical inferences about it or even trying to give any kind of explanation for it.
I think Comte got the order of comprehension conceptually backwards, the reason being that structure is ineliminable from an intelligible account of the material world. A more sophisticated form of positivism, one that at least attempted explanation without recourse to metaphysics, reached an apex in the first half of the twentieth century. But it ran into insuperable difficulties, even though it still retains some popular support. But at the end of the day you really can’t have just data by itself. At least not if you’re after a satisfactory account of reality. Underlying, immaterial structures, like laws of nature, are indispensable to make it at all intelligible. Positivism has to give way to metaphysics.
I also happen to think, continuing in the opposite direction as Comte, that further investigation of metaphysics of this kind ultimately demonstrates that certain concepts found in the traditions of religious theology and philosophy are also indispensable to an adequate understanding of reality. My focus with this episode is on the philosophy of structure so I don’t mean to sneak in too much missionary work. But in full disclosure I do in fact think that’s where the logic of all this leads.
Reality is composed of multiple layers of structure. The hylomorphic model of substance is that material substances themselves are most intelligibly understood as structural-material complexes. That’s the best way to think about substances having the kind of properties that they have, with reasons for having the properties they do. Further, physical reality, with its various substances and objects, proceeds according to the natures and propensities inherent in its substances. Physical reality is most intelligibly understood as conforming to certain laws that govern the kinds of events that occur. These laws impose structure on everything around us.
Jakob and Todd talk about set theory, its historical origins, Georg Cantor, trigonometric series, cardinalities of number systems, the continuum hypothesis, cardinalities of infinite sets, set theory as a foundation for mathematics, Cantor’s paradox, Russell’s paradox, axiomatization, the Zermelo–Fraenkel axiomatic system, the axiom of choice, and the understanding of mathematical objects as “sets with structure”.
Part 3 in this series on the philosophy of structure looks at examples and general principles of structure in chemistry. Subjects covered include quantum mechanics, the Schrödinger equation, wave functions, orbitals, molecules, functional groups, and the multiple levels of structure in proteins. General principles discussed include the nature of functions, the embedding of lower-level structures into higher-level structures, and the repeated use of a limited number of lower-level structures in the formation of higher-level structures.
In this third episode on the philosophy of structure I’d like to look at examples of structure in the field of chemistry. I’d like to see how some of the general principles of structure discussed in previous episodes apply to chemistry and to see what new general principles we can pick up from the examples of chemical structures. I’ll proceed along different scales, from the smallest and conceptually most fundamental components of chemical structure up to the larger, multicomponent chemical structures. For basic principles I’ll start with quantum mechanics, the Schrödinger equation, and its wave function solutions, which constitute atomic and molecular orbitals. From there I’ll look at different functional groups that occur repeatedly in molecules. And lastly I’ll look at the multiple levels of structure of proteins, the embedding of chemical structures, and the use of repeatable units in the formation of multicomponent chemical structures.
One aspect from previous discussions that won’t really show up in chemistry is an aesthetic dimension of structure. That’s not to say that chemical structures lack beauty. I find them quite beautiful and the study and application of chemical structures has actually been the primary subject of my academic and professional work. In other words, I probably find chemistry more aesthetically satisfying than most people commonly would. But what I’m coming to think of as the philosophical problem of systematizing the aesthetic dimension of structure, in fields like music, art, and literature, isn’t so directly applicable here. I’ll get back to that problem in future episodes. The aesthetic dimension is not so intrinsic to the nature of the subject in the case of chemistry.
So let’s start getting into chemical structures by looking at the smallest and most conceptually fundamental scale.
Matter Waves
There is, interestingly enough, an intriguing point of commonality between music and chemistry at the most fundamental level; and that is in the importance of waveforms. Recall that the fundamental building block of a musical composition is a sound wave, a propagation of variations in the local pressure in which parts of the air are compacted and parts of the air are rarified. Sound waves are governed by the wave equation, a second order partial differential equation, and its solutions, in which a series of multiple terms are added together in a superposition, with each individual term in that summation representing a particular harmonic or overtone. There are going to be a lot of similarities to this in the basic building of chemical structures.
One of the key insights and discoveries of twentieth century science was that matter also takes the form of waves. This is foundational to quantum mechanics and it is known as the de Broglie hypothesis. This was a surprising and strange realization but it goes a long way in explaining much of what we see in chemistry. Because a particle is a wave it also has a wavelength.
Recall that in acoustics, with mechanical waves propagating through a medium, wavelength is related to the frequency and speed of the wave’s propagation. That relation is:
λ = v/f
Where λ is the wavelength, f is frequency, and v is the wave propagation velocity. With this kind of mechanical wave the wave is not a material “thing” but a process, a disturbance, occurring in a material medium.
But with a matter wave the wave is the matter itself. And the wavelength of the matter wave is related to the particle’s momentum, a decidedly material property. A particle’s wavelength is inversely proportional to its momentum. This relation is stated in the de Broglie equation:
λ = h/p
In which λ is the wavelength, h is a constant called Planck’s constant (6.626×10^-34 J·s), and p is momentum. Momentum is a product of mass and velocity:
p = mv
Because the wavelength of a matter wave is inversely proportional to momentum the wavelength for the matter waves of macroscopic particles, the kinds of objects we see and interact with in our normal experience, is going to be very, very short, so as to be completely negligible. But for subatomic particles their wavelengths are going to be comparable to the scale of the atom itself, which will make their wave nature very significant to their behavior.
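To get a feel for the scales involved, here’s a rough Python sketch (my own choice of example numbers):

h = 6.626e-34    # Planck's constant, in J*s
m_e = 9.109e-31  # electron mass, in kg

p_electron = m_e * 3e6    # an electron moving at about 1% of light speed
print(h / p_electron)     # ~2.4e-10 m, comparable to the size of an atom

p_ball = 0.145 * 40       # a 145 g baseball thrown at 40 m/s
print(h / p_ball)         # ~1.1e-34 m, utterly negligible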
One interesting consequence of the wave nature of matter is that the precision of simultaneous values for momentum and position of a matter wave is limited. This is known as the Uncertainty Principle. There’s actually a similar limit to the precise specification of both wavelength and position for waves in general, i.e. for any and all waves. But because wavelength is related to momentum in matter waves this limitation gets applied to momentum as well.
Recall that with sound waves a musical pitch can be a superposition of multiple frequencies or wavelengths. This superposition is expressed by the multiple terms in a Fourier Series. Essentially any well-behaved function can be approximated using a Fourier Series, expressed in terms of added sinusoidal (oscillating) waves. A function that is already sinusoidal can be matched quite easily. The Fourier Series can converge on more complicated functions as well but they will require more terms (that’s important). In the case of musical pitches the resulting functions were periodic waves that repeated endlessly. But a Fourier Series can also describe pulses that are localized to specific regions. The catch is that more localized pulses, confined to tighter regions, require progressively more terms in the series, which means a higher number of wavelengths.
Bringing this back to matter waves, these same principles apply. Under the de Broglie formula wavelength is related to momentum. A pure sine wave that repeats endlessly has only one wavelength. But it also covers an infinite region. As a matter wave this would be a perfect specification of momentum with no specification of position. A highly localized pulse is confined to a small region but requires multiple terms and wavelengths in its Fourier Series. So its position is highly precise but its momentum is much less precise.
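Here’s a rough numerical sketch of that trade-off (my own illustration, using a Gaussian pulse rather than anything from the episode): the narrower the pulse, the wider the band of frequencies, and hence wavelengths, in its Fourier spectrum.

import numpy as np

x = np.linspace(-50, 50, 4096)
dx = x[1] - x[0]

for width in (10.0, 1.0):                  # a broad pulse, then a narrow one
    pulse = np.exp(-(x / width) ** 2)      # localized wave packet envelope
    spectrum = np.abs(np.fft.fft(pulse))
    freqs = np.fft.fftfreq(x.size, d=dx)
    spread = np.sqrt(np.sum(freqs**2 * spectrum) / np.sum(spectrum))
    print(width, spread)                   # narrower pulse -> wider frequency spread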
The limit of the simultaneous specification of momentum and position for matter waves is given by the equation:
σxσp ≥ h/(4π)
Where σx is the standard deviation of position, σp is the standard deviation of momentum, and h is Planck’s constant. The product of these two standard deviations has a lower limit. At this lower limit it’s only possible to decrease the standard deviation of one by increasing the standard deviation of the other. And this is a consequence of the wave nature of matter.
The most important application of these wave properties and quantum mechanical principles in chemistry is with the electron. Protons and neutrons are also important particles in chemistry, and significantly more massive than electrons. But it’s with the electrons where most of the action happens. Changes to protons and neutrons are the subject of nuclear chemistry, which is interesting but not something we’ll get into this time around. In non-nuclear chemical reactions it’s the electrons that are being arranged into the various formations that make up chemical structures. The behavior of an electron is described by a wave function, and the wave function is governed by the Schrödinger equation.
The Schrödinger equation is quite similar to the classical wave equation that governs sound waves. Recall that the classical wave equation is:
d^2u/dx^2 = (1/v^2) * d^2u/dt^2
Where u is the wave displacement from the mean value, x is distance, t is time, and v is velocity. A solution to this equation can be found using a method of separation of variables. The solution u(x,t) can be written as the product of a function of x and a sinusoidal function of time. We can write this solution as:
u(x,t) = ψ(x) * cos (2πft)
Where f is the frequency of the wave in cycles per unit time and ψ(x) is the spatial factor of the amplitude of u(x,t), the spatial amplitude of the wave. Substituting ψ(x) * cos (2πft) into the differential wave equation gives the following equation for the spatial amplitude ψ(x).
d^2ψ/dx^2 + (4π^2f^2/v^2) * ψ(x) = 0
And since frequency multiplied by wavelength is equal to velocity (fλ = v) we can rewrite this in terms of wavelength, λ:
d^2ψ/dx^2 + (4π^2/λ^2) * ψ(x) = 0
So far this is just applicable to waves generally. But where things get especially interesting is the application to matter waves, particularly to electrons. Recall from the de Broglie formula that:
λ = h/p
In which h is a constant called Planck’s constant (6.626×10^-34 J·s) and p is momentum. We can express the total energy of a particle in terms of momentum by the equation:
E = p^2/(2m) + V(x)
Where E is total energy, m is mass, and V(x) is potential energy as a function of distance. Using this equation we can also express momentum in these terms:
p = {2m[E – V(x)]}^(1/2)
And since,
λ = h/p
The differential equation becomes
d^2ψ/dx^2 + (2m/ħ^2) * [E – V(x)] ψ(x) = 0
Where
ħ = h/(2π)
This can also be written as
-(ħ^2/2m) * d^2ψ/dx^2 + V(x) ψ(x) = E ψ(x)
This is the Schrödinger equation. Specifically, it’s the time-independent Schrödinger equation. So what do we have here? The relationship here is similar to the one between the classical wave equation (a differential equation) and its solution u(x,t), which characterizes a mechanical wave. The Schrödinger equation is also a differential equation and its solution, ψ(x), is a wave function that characterizes a matter wave. It describes a particle of mass m moving in a potential field described by V(x). Of special interest to chemistry is the description of an electron moving in the potential field around an atomic nucleus.
Let’s rewrite the Schrödinger equation using a special expression called an operator. An operator is a symbol that tells you to do something to whatever follows the symbol. The operator we’ll use here is called a Hamiltonian operator, which has the form:
H = -(ħ^2/2m) * d^2/dx^2 + V(x)
Where H is the Hamiltonian operator. It corresponds to the total energy of a system, including terms for both the kinetic and potential energy. We can express the Schrödinger equation much more concisely in terms of the Hamiltonian operator, in the following form:
H ψ(x) = E ψ(x)
There are some special advantages to expressing the Schrödinger equation in this form. One is that this takes the form of what is called an eigenvalue problem. An eigenvalue problem is one in which an operator is applied to an eigenfunction and the result returns the same eigenfunction, multiplied by some constant called the eigenvalue. In this case the operator is the Hamiltonian, H. The eigenfunction is the wave function, ψ(x). And the eigenvalue is the observable energy, E. These are all useful pieces of information to have that relate to each other very nicely, when expressed in this form.
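To see the eigenvalue form in action, here’s a rough numerical sketch (my own toy example, not something from the episode): discretize the one-dimensional Schrödinger equation for a particle in a box with V(x) = 0 and hand the resulting Hamiltonian matrix to a standard eigenvalue solver.

import numpy as np

# Particle in a 1D box of length L, with V(x) = 0 inside, in units where hbar = m = 1
N, L = 500, 1.0
dx = L / (N + 1)

# Finite-difference version of H = -(1/2) d^2/dx^2 on a grid with walls at the ends
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies, wavefunctions = np.linalg.eigh(H)  # solve H psi = E psi

# The exact energies are (n*pi/L)^2 / 2; the lowest few should match closely
for n in range(1, 4):
    print(energies[n - 1], (n * np.pi / L) ** 2 / 2)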
Orbitals
In chemistry the wave functions of electrons in atoms and molecules are called atomic or molecular orbitals. And these are also found using the Schrödinger equation; they are solutions to the Schrödinger equation. The inputs to these wave functions are coordinates for points in space. The output from these wave functions, ψ, is some value, whose meaning is a matter of interpretation. The prevailing interpretation is the Born Rule, which gives a probabilistic interpretation. Under the Born Rule the value of ψ is a probability amplitude and the square modulus of the probability amplitude, |ψ|^2, is called a probability density. The probability density defines for each point in space the probability of finding an electron at that point, if measured. So it has a kind of conditional, operational definition. More particularly, we could say, reducing the space to a single dimension, x, that |ψ(x)|^2 dx gives the probability of finding the electron between x and x + dx. Going back to 3 dimensions, the wave function assigns a probability amplitude value, ψ, and a probability density value, |ψ|^2, to each point in space. Informally, we might think of the regions of an orbital with the highest probability density as the regions where an electron “spends most of its time”.
The Schrödinger equation can be solved exactly for the hydrogen atom, giving exact electron wave functions. For other atoms and molecules it cannot be solved analytically, but solutions can be approximated to high precision using methods like the variational method and perturbation theory. And again, we call these wave functions orbitals. I won’t get into the specifics of the methods for finding the exact solutions for the hydrogen atom but I’ll make some general comments. For an atom the Cartesian (x,y,z) coordinates for the three dimensions of space aren’t so convenient so we convert everything to spherical coordinates (r,θ,φ) in which r is a radial distance and θ and φ are angles with respect to Cartesian axes. The term for potential, V(r), in the Hamiltonian operator will be defined by the relation between a proton and an electron. And the mass of the electron also gets plugged into the Hamiltonian. Solving for the wave function makes use of various mathematical tools like spherical harmonics and radial wave functions. Radial wave functions in turn make use of Laguerre polynomials. Then solutions for the hydrogen atom will be expressed in terms of spherical harmonic functions and radial wave functions, with the overall wave function being a function of the variables (r,θ,φ).
Because the orbitals are functions of r, θ, and φ they can be difficult to visualize and represent. But partial representations can still give an idea of their structure. An orbital is often represented as a kind of cloud taking some kind of shape in space; a probability density cloud. The intensity of the cloud’s shading or color represents varying degrees of probability density.
The shapes of these clouds vary by the type of orbital. Classes of orbitals include s-orbitals, p-orbitals, d-orbitals, and f-orbitals. These different kinds of orbitals are grouped by their orbital angular momentum. s-orbitals are sphere shaped, nested shells. p-orbitals have a kind of “dumbbell” shape with lobes running along the x, y, and z axes. d-orbitals are even more unusual, with lobes running along two axes, and one orbital even having a kind of “donut” torus shape. Although we sometimes imagine atoms as microscopic solar systems with electrons orbiting in circles around the nucleus their structure is much more unusual, with these oddly shaped probability clouds all superimposed over each other. The structure of atoms into these orbitals has important implications for the nature of the elements and their arrangements into molecules. But before getting into that let’s pause a moment to reflect on the nature of the structure discussed up to this point.
Reflection on the Structure of the Wave Function
As with a sound wave, the function for an electron wave function is a solution to a differential equation, in this case the Schrödinger equation. This wave function ψ, is a function of position. In spherical coordinates of r, θ, and φ this function is ψ(r,θ,φ). In the most basic terms a function is a rule that assigns elements in a set, or a combination of elements from multiple sets, to a single element in another set. This rule imposes additional structure on relations between these sets. So in our case we have a set for all r values, a set for all θ values, a set for all φ values, and a set for all ψ values. Prior to the imposition of structure by any function we could combine elements from these sets in any way we like. In a four-dimensional (abstract) phase space or state space with axes r, θ, φ, and ψ all points are available, any ordered quadruple (r,θ,φ,ψ) is an option. That’s because an ordered triplet (r,θ,φ) can be associated with any value of ψ. There’s no rule in place limiting which values of ψ the ordered triplet (r,θ,φ) can associate with. The entire phase space is available; all states are available. But with the imposition of the function ψ(r,θ,φ) the region of permissible states conforming to this rule is significantly smaller. An ordered triplet (r,θ,φ) can be assigned to one and only one value of ψ.
It’s useful here to distinguish between logical possibility and physical possibility. In what sense are all ordered quadruples (r,θ,φ,ψ) in the state space “possible”? Most of them are not really physically possible for the electron in an atom because they would violate the laws of physics, the laws of quantum mechanics. That’s because the function, the wave function in fact, is imposed. But in the theoretical case that it were not imposed, any ordered quadruple (r,θ,φ,ψ) would be logically possible; there’s no contradiction in such a combination. At least, not until we start to develop the assumptions that lead to the Schrödinger equation and its solutions. But since the actual, physical world follows physical laws, only the states satisfying the function ψ(r,θ,φ) are physically possible.
This distinction between logical possibility and physical possibility highlights one, very basic source of structure: structure that arises from physical laws. Atomic orbitals are not man-made structures. There certainly are such things as man-made structures as well. But atomic orbitals are not an example of that. I say all this to justify including atomic orbitals as examples of structure in the first place, since in a physical sense they seem “already there” anyway, or as something that couldn’t be otherwise. But in light of the much more vast state space of logically possible states I think it makes sense to think of even these physically given states as highly structured when compared to the logically limitless states from which they stand apart.
I’d like to make one point of clarification here, especially considering the reputation quantum mechanics has for being something especially inexact or even anti-realist. What is it that the wave function specifies at each point in space, for each ordered triplet (r,θ,φ)? It’s certainly not the position of the electron. That indeed isn’t specified. But what is specified is the amplitude, ψ. And the square modulus of the amplitude, |ψ|^2, is the probability of finding the electron at that position, (r,θ,φ). The wave function doesn’t specify the electron’s exact position. Does this mean that chaos reigns for the electron? The electron could, after all, be anywhere in the universe (with the exception of certain nodes). But that infinite extension of possible positions doesn’t mean that chaos reigns or that the electron isn’t bound by structure. The probability density of the electron’s position in space is very precisely defined and governs the way the electron behaves. It’s not the case that just anything goes. Certain regions of space are highly probable and most regions of space are highly improbable.
This is something of a matter of perspective and it’s a philosophical rather than scientific matter. But still just as interesting, for me at least. It pertains to the kinds of properties we should expect to see in different kinds of systems. What kinds of properties should we expect quantum systems to have? What are quantum properties? Do quantum systems have definite properties? I’ve addressed this in another episode on the podcast, drawing on the thought of Sunny Auyang. In her view there’s an important distinction to be made between classical properties and quantum properties. Even if quantum systems don’t have definite classical properties that’s not to say they don’t have properties at all. They just have properties of a different kind, properties that are more removed from the kinds of classical properties we interact with on a daily basis. We’re used to interacting with definite positions and definite momenta at our macroscopic scale of experience. At the quantum level such definite predicates are not found for position and momentum, but they are found for the position representation and momentum representation of a system’s wave function. Quoting Auyang:
“Are there predicates such that we can definitely say of a quantum system, it is such and so? Yes, the wavefunction is one. The wavefunction of a system is a definite predicate for it in the position representation. It is not the unique predicate; a predicate in the momentum representation does equally well. Quantum properties are none other than what the wavefunctions and predicates in other representations describe.” (How Is Quantum Field Theory Possible?)
I think of this as moving our perspective “up a level”, looking not at position itself but at the wave function that gives the probability amplitude, ψ, and probability density, |ψ|^2, of position. That is where we find definite values governed by the laws of physics. It’s appropriate to look at this level for these kinds of quantum systems, because of the kind of things that they are. Expecting something else from them would be to expect something from a thing that is not appropriate to expect from the kind of thing that it is.
Molecular Orbitals
Let’s move now to molecules. Molecules are groups of atoms held together by chemical bonds. This makes use of a concept discussed in the last episode that is pertinent to structure generally: that of embedding. Lower-level structures get embedded, as kinds of modules, into higher-level structures. The lower-level structures remain but their combinations make possible a huge proliferation of new kinds of structures. As we move from the level of atoms to molecules the number of possible entities will expand dramatically. There are many more kinds of molecules than there are kinds of atoms. As of 2021 there are 118 different kinds of atoms called elements. That’s impressive. But this is miniscule compared to the number of molecules that can be made from combinations and arrangements of these elements. To give an idea, the Chemical Abstracts Service, which assigns a unique CAS registry number to different chemicals, currently has a database of 177 million different chemical substances. These are just molecules that we’ve found or made. There are many more that will be made and could be made.
Electrons are again the key players in the formation of molecules. The behavior of electrons, their location probability densities, and wave-like behavior, continue to be defined by mathematical wave functions and abide by the Schrödinger equation. A wave function, ψ, gives a probability amplitude and its square modulus, |ψ|^2, gives the probability of finding an electron in a given region. So many of the same principles apply. But the nature of these functions at the molecular level is more complex. In molecules the wave functions take new orbital forms. Orbitals in molecules take two new important forms: hybridized orbitals and molecular orbitals.
Hybridized orbitals are combinations of regular atomic orbitals that combine to form hybrids. So where before we had regular s-type and p-type orbitals these can combine to form hybrids such as sp3, sp2, and sp orbitals. With a carbon atom for instance, in the formation of various organic molecules, the orbitals of the valence electrons will hybridize.
Molecular orbitals are the wave functions for electrons in the chemical bonds between the atoms that make up a molecule. Molecular orbitals are formed by combining atomic orbitals or hybrid atomic orbitals from the atoms in the molecule. The wave functions for molecular orbitals don’t have analytic solutions to the Schrödinger equation so they are calculated approximately.
A methane molecule is a good example to look at. A methane molecule consists of 5 atoms: 1 carbon atom and 4 hydrogen atoms. Its chemical formula is CH4. A carbon atom has 6 electrons with 4 valence electrons that are available to participate in chemical bonds. In the case of a methane molecule these 4 valence electrons will participate in 4 bonds with 4 hydrogen atoms. In its ground state the 4 valence electrons occupy one 2s orbital and two 2p orbitals. In order to form 4 bonds there need to be 4 identical orbitals available. So the one 2s orbital and three 2p orbitals hybridize to form 4 sp3 hybrid orbitals. An sp3 orbital, as a hybrid, is a kind of mixture of an s-type and p-type orbital. The dumbbell shape of a p-orbital combines with the spherical shape of an s-orbital to form a kind of lopsided dumbbell. It’s these hybrid sp3 orbitals that then combine with the 1s orbitals of the hydrogen atoms to form molecular orbitals. In this case the type of molecular orbitals that form are called σ-bonds.
The 2s and 2p orbitals in the carbon atom can also hybridize in other ways to form two or three bonds. For example, a carbon atom can bond with 2 hydrogen atoms and 1 other carbon atom. When it does this the 2s orbital hybridizes with just 2 of the 2p orbitals to form 3 sp2 orbitals, which bond with the 2 hydrogens and the other carbon. The remaining 2p orbital combines with the corresponding 2p orbital on the other carbon atom. This makes two sets of orbitals combining into two molecular bonds, a σ-bond and what is called a π-bond. When a σ-bond and a π-bond form between atoms it is called a double bond. Carbon atoms can also form triple bonds in which two sp orbitals are formed from the 2s orbital and one 2p orbital. This leaves two 2p orbitals to combine with their counterparts in another carbon atom to form a triple bond, composed of 1 σ-bond and 2 π-bonds. Single bonds, double bonds, and triple bonds all have their own geometrical properties like bond angles and freedom of rotation. This has effects on the properties of the resulting molecule.
Functional Groups
σ-bonds, π-bonds, single bonds, double bonds, and triple bonds make possible several basic molecular structures called functional groups. Functional groups are specific groupings of atoms within molecules that have their own characteristic properties. What’s useful about functional groups is that they occur in larger molecules and contribute to the overall properties of the parent molecule to which they belong. There are functional groups containing just carbon, but also functional groups containing halogens, oxygen, nitrogen, sulfur, phosphorus, boron, and various metals. Some of the most common functional groups include: alkyls, alkenyls, alkynyls, and phenyls (which contain just carbon); fluoros, chloros, and bromos (which contain halogens); hydroxyls, carbonyls, carboxyls, and ethers (which contain oxygen); carboxamides and amines (which contain nitrogen); sulfhydryls and sulfides (which contain sulfur); phosphates (which contain phosphorus); and so forth.
Repeatable Units
The last subject I’d like to address with all this is the role of repeatable units in the formation of complex chemical structures. Let’s come at this from a different direction, starting at the scale of a complex molecule and work our way down. One of the most complex, sophisticated kinds of molecules is a protein. Proteins are huge by molecular standards. Cytochrome c, for example, has a molecular weight of about 12,000 daltons. (For comparison, methane, discussed previously, has a molecular weight of 16 daltons). What we find with such molecules is that they are highly reducible to a limited number of repeatable units. But we could imagine it being otherwise; a macromolecule being irreducible from its overall macrostructure and not having any discernible repeating components. Let’s imagine a hypothetical, counterfactual case in which a macromolecule of that size is just a chaotic lump. Imagine going to a landfill and gathering a bunch of trash from a heap with all sorts of stuff in it, gathering it all together, rolling it into a ball, and binding it with hundreds of types of unmixed adhesives. Any spatial region or voxel of that lump would have different things in it. You might find some cans and wrappers in one part, computer components in another, shredded office papers in another, etc. We could imagine a macromolecule looking like that; a completely heterogeneous assembly. We could imagine further such a heterogeneous macromolecule being able to perform the kinds of functions that proteins perform. Proteins can in fact be functionally redundant; there’s more than one way to make a protein that performs a given function. So we might imagine a maximally heterogeneous macromolecule that is able to perform all the functions that actually existing proteins perform. But this kind of maximal heterogeneity is not what we see in proteins.
Instead, proteins are composed of just 20 repeatable units, a kind of protein-forming alphabet. These are amino acids. All the diversity we see in protein structure and function comes from different arrangements of these 20 amino acids. Why would proteins be limited to such a small number of basic components? The main reason is that proteins have to be put together and before that they have to be encoded. And it’s much more tractable to build from and encode a smaller number of basic units, as long as it gives you the structural functionality that you’ll need in the final macrostructure. It might be possible in principle to build a macromolecule without such a limited number of repeatable units. But it would never happen. The process to build such a macromolecule would be intractable.
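To get a sense of why 20 building blocks are not a real limitation on diversity, here is a one-off arithmetic sketch in Python. The sequence length of 100 is just a hypothetical, modest protein length chosen for illustration.

```python
# Number of distinct sequences of length 100 drawn from a 20-letter
# amino acid alphabet: 20**100, roughly 1.3 x 10^130 possibilities.
alphabet_size = 20
length = 100
sequence_space = alphabet_size ** length
print(f"{sequence_space:.2e}")   # about 1.27e+130
```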
This is an example of a general principle I’d like to highlight that we find in chemistry and in structure generally. And it’s related to embedding. But it’s a slightly different aspect of it. Complex, high-level structures are composed by the embedding of lower-level structures. And the higher-level structures make use of a limited number of lower-level structures that get embedded repeatedly.
In the case of a protein, the protein is the higher-level structure. Amino acids are the lower-level structures. The structures of the amino acids are embedded into the structure of the protein. And the higher-level structure of the protein uses only a limited number of lower-level amino acid structures.
A comparison to writing systems comes to mind here. It’s possible to represent spoken words in written form in various ways. For example, we can give each word its own character. That would take a lot of characters, from several hundred into the thousands. And such a writing system takes several years to be able to use with any competence. But it’s also possible to limit the number of characters in a writing system by using the same characters for sound units common to all words, like syllables or phonemes. Many alphabets, for example, have only between 20 and 30 characters. And it’s possible to learn to use an alphabet fairly quickly. And here’s the key. There’s no functional representational loss by using such a limited number of characters. The representational “space” is the same. It’s just represented using a much smaller set of basic components.
Biochemists mark out four orders of biomolecular structure: primary, secondary, tertiary, and quaternary. And this is a perfect illustration of structural embedding.
The primary structure of a protein is its amino acid sequence. The primary structure is conceptually linear since there’s no branching. So you can “spell” out a protein’s primary structure using an amino acid alphabet, one amino acid after another. Like, MGDVEK: methionine, glycine, aspartic acid, valine, glutamic acid, and lysine. Those are the first 6 amino acids in the sequence for human Cytochrome c. What’s interesting about amino acids is that they have different functional groups that give them properties that will contribute to the functionality of the protein. We might think of this as a zeroth-level protein structure (though I don’t know of anyone calling it that). Every amino acid has a carboxyl group and an amino group. That’s the same in all of them. But they each have their own side chain or R-group in addition to that. And these can be classified by properties like polarity, charge, and other functional groups they contain. For example, methionine is sulfuric, nonpolar, and neutral; asparagine is an amide, polar, and neutral; phenylalanine is aromatic, nonpolar, and neutral; lysine is basic, polar, and positively charged. These are important properties that contribute to a protein’s higher-level structure.
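Here is a minimal Python sketch of that “spelling” idea, using the six residues just mentioned. The property annotations are a small hand-made lookup table for illustration, not a complete amino acid database.

```python
# "Spelling" a primary structure with one-letter amino acid codes,
# annotated with side-chain properties (a small illustrative lookup table).
side_chains = {
    "M": ("methionine",    "nonpolar, sulfur-containing, neutral"),
    "G": ("glycine",       "nonpolar, neutral"),
    "D": ("aspartic acid", "polar, acidic, negatively charged"),
    "V": ("valine",        "nonpolar, neutral"),
    "E": ("glutamic acid", "polar, acidic, negatively charged"),
    "K": ("lysine",        "polar, basic, positively charged"),
}

primary_structure = "MGDVEK"   # first six residues of human cytochrome c
for code in primary_structure:
    name, properties = side_chains[code]
    print(f"{code}: {name:14} {properties}")
```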
The secondary structure of a protein consists of three-dimensional, local structural elements. The interesting thing about secondary structures, in the context of embedding and repeatable units, is that these local structures take common shapes that occur all the time in protein formation. The two most important structural elements are alpha helices and beta sheets. True chemical bonds occur only between the amino acid units of the primary structure. But in the higher-level structures, electrostatic forces arising from differences in charge distribution throughout the primary structure make certain regions of the primary structure attracted to each other. These kinds of attractions are called hydrogen bonds, in which a hydrogen atom bound to a more electronegative atom or group is attracted to another electronegative atom bearing a lone pair of electrons. In the case of amino acids such hydrogen bonding occurs between the amino hydrogen and carboxyl oxygen atoms in the peptide backbone.
In an alpha helix these hydrogen bonds form in a way that makes the amino acids wrap around in a helical shape. In a beta sheet strands of amino acids will extend linearly for some length and then turn back onto themselves, with the new strand segment extending backward and forming hydrogen bonds with the previous strand. These hydrogen-bonded strands of amino acids then form planar, sheet-like structures. What’s interesting is that these kinds of secondary structures are very common and get used repeatedly, much like amino acids get used repeatedly in primary structures. Secondary structures, like alpha helices and beta sheets (among others), then get embedded in even higher-level structures.
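One way to see how regular and repeatable these secondary structures are: in an idealized alpha helix, the backbone carbonyl oxygen of residue i is hydrogen-bonded to the backbone amide hydrogen of residue i + 4. Here is a toy Python sketch of that pattern; the function name and the reuse of the MGDVEK fragment are just illustrative choices.

```python
# Toy sketch of the alpha-helix hydrogen-bonding pattern: the backbone C=O of
# residue i pairs with the backbone N-H of residue i + 4.
def alpha_helix_hbonds(sequence):
    """Return (acceptor_residue, donor_residue) index pairs for an idealized helix."""
    return [(i, i + 4) for i in range(len(sequence) - 4)]

print(alpha_helix_hbonds("MGDVEK"))   # [(0, 4), (1, 5)]
```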
The tertiary structure of a protein is its full three-dimensional structure, incorporating all the lower-level structures. Tertiary structures are often represented using coils for the alpha helix components and thick arrows for the beta sheet components. The way a protein is oriented in three-dimensional space is determined by the properties of its lower-level structures, all the way down to the functional groups of the amino acids. Recall that the different amino acids can be polar or nonpolar. This is really important because proteins reside in aqueous environments with highly polar water molecules. Nonpolar groups are said to be hydrophobic because arrangements that minimize the surface area of contact between nonpolar groups and polar water molecules are entropically favored. Because of this, polar and nonpolar molecules appear to repel each other, the hydrophobic effect. Think of the separation of oil and water as an example. Water is polar and oil is nonpolar. The same effect occurs at the scale of individual functional group units in the protein. Proteins can fold in such a way as to minimize the surface area of nonpolar functional groups exposed to water molecules. One way this can happen is that nonpolar amino acid sections fold over onto each other so that they interact with each other rather than with water molecules, and so that water molecules can interact with each other rather than with the nonpolar amino acid sections. These same kinds of effects, driven by the properties of functional groups, are also the ones that bring about the secondary structures of alpha helices and beta sheets.
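This hydrophobic/polar distinction is the basis of deliberately simplified “HP” models of folding. Here is a rough Python sketch of that kind of classification; the particular assignment of residues to the two classes is an illustrative approximation, not a canonical one.

```python
# A toy hydrophobic (H) vs. polar (P) classification of amino acids, the kind
# of simplification used in lattice models of protein folding. The split
# below is a rough illustrative assignment.
HYDROPHOBIC = set("AVLIMFWCGP")   # roughly: nonpolar side chains
POLAR       = set("STNQYHKRDE")   # roughly: polar or charged side chains

def hp_pattern(sequence):
    return "".join("H" if residue in HYDROPHOBIC else "P" for residue in sequence)

print(hp_pattern("MGDVEK"))   # HHPHPP
```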
Some proteins also have a quaternary structure, in which multiple folded protein subunits come together to form a multi-subunit complex. Hemoglobin is an example of this. Hemoglobin is made up of four subunits: two alpha subunits and two beta subunits.
There’s a pattern here of structure building upon structure. But it does so with a limited set of repeatable structures. I’d like to address this again. Why should proteins be built out of only 20 amino acid building blocks? Certainly there could be (at least in theory) a macromolecule that has similar functionality and global structure, using the same functional group properties to get it to fold and form in the needed way, without the use of repeatable lower-level structural units. But that’s not what we see. Why? One important reason is that proteins need to be encoded.
Proteins are made from genes. Genes are sections of DNA that get transcribed into RNA and then translated into proteins. That’s a gene’s primary function: to encode proteins. DNA and RNA have further simplified components, with only four types of nucleotides in each: guanine, adenine, cytosine, and thymine in DNA; and guanine, adenine, cytosine, and uracil in RNA. These nucleotides have to match up with the proteins that they encode, and it’s going to be very difficult to do that without dividing up the protein into units that can be encoded in a systematic way. There’s a complex biochemical process behind translating an RNA nucleotide sequence into a protein. But since these are, at bottom, automatic chemical processes they have to proceed in systematic, repeatable ways. A macromolecule can’t have an entire intracellular biochemical system dedicated to just that macromolecule alone. For one thing, there are too many proteins for that. The same biochemical machinery for translation has to be able to make any protein. So all proteins have to be made up of the same basic units.
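Here is a minimal sketch of the first step of that information flow: transcription of a DNA coding-strand sequence into messenger RNA, with thymine replaced by uracil. The 18-letter snippet is hypothetical, chosen only because it would encode the MGDVEK fragment used earlier; an actual gene might use different synonymous codons.

```python
# Transcription, in its simplest idealized form: copy the DNA coding strand
# into mRNA, replacing thymine (T) with uracil (U).
def transcribe(dna_coding_strand):
    return dna_coding_strand.replace("T", "U")

dna = "ATGGGTGATGTTGAAAAG"   # hypothetical snippet encoding Met-Gly-Asp-Val-Glu-Lys
mrna = transcribe(dna)
print(mrna)                  # AUGGGUGAUGUUGAAAAG
```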
The way this works in translation is that molecules called transfer RNA (tRNA) are dedicated to specific combinations of the 4 basic RNA nucleotides. These combinations are called codons. A codon is a combination of 3 nucleotides. Since there are 4 kinds of nucleotides and each codon has 3 of them, there are 4^3, or 64, possible combinations. Different codons correspond to different amino acids. Since they only code for 20 amino acids there is obviously some redundancy, also called degeneracy (which isn’t meant to be insulting, by the way). The way that codons get translated into amino acids is that the tRNA molecules that match the nucleotide sequences of the various codons in the RNA also convey their encoded amino acids. These tRNA molecules come together at the sites of translation, called ribosomes, and link the amino acids together into the chains that form the primary structure of the protein. This is just a part of the biochemical machinery of the process. What’s important to note here is that although there are a number of tRNA types it’s not unmanageable. There are at most 64 possible codon sequences. So there doesn’t have to be a unique set of translation machinery dedicated to each and every kind of protein, which would be insane. The components only have to be dedicated to codon sequences and amino acids, which are much more manageable.
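A short Python sketch of the counting and the degeneracy follows. The partial table contains a few entries of the standard genetic code, just enough to show several codons mapping to the same amino acid.

```python
from itertools import product

# All possible codons: 4 nucleotides taken 3 at a time gives 4**3 = 64.
nucleotides = "ACGU"
codons = ["".join(triplet) for triplet in product(nucleotides, repeat=3)]
print(len(codons))   # 64

# A partial codon table (standard genetic code, abbreviated): note the
# degeneracy, with several codons coding for the same amino acid.
partial_codon_table = {
    "AUG": "Met",
    "UUU": "Phe", "UUC": "Phe",
    "GGU": "Gly", "GGC": "Gly", "GGA": "Gly", "GGG": "Gly",
    "UAA": "stop", "UAG": "stop", "UGA": "stop",
}
print(partial_codon_table["GGA"])   # Gly
```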
Key Takeaways
I’d like to summarize the foregoing with 4 key takeaways from this analysis of structure in chemistry that I think apply to a general philosophy of structure.
1. Structure can be modeled using functions
Recall that a function is a relation between sets that associates an element in one set, or a combination of elements from multiple sets, to exactly one element in another set. The source sets are called domains and the target sets they map onto are called codomains. One example of a function we’ve looked at, both in the previous episode on music and in this episode on chemistry, is the waveform or wave function. In chemistry, mathematical functions called orbitals assign to each point in space (the domain) an amplitude value (the codomain).
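As a concrete sketch of an orbital-as-function, here is the hydrogen 1s wave function in Python, which assigns exactly one amplitude to each radial distance r. Working in units of the Bohr radius (a0 = 1) is an assumption made just to keep the numbers simple.

```python
import math

# The hydrogen 1s orbital: psi(r) = (1 / sqrt(pi * a0**3)) * exp(-r / a0).
# Each point in space (here reduced to the radial distance r) maps to
# exactly one amplitude value, which is the defining property of a function.
def psi_1s(r, a0=1.0):
    return (1.0 / math.sqrt(math.pi * a0**3)) * math.exp(-r / a0)

for r in (0.0, 1.0, 2.0):
    print(f"r = {r}: amplitude = {psi_1s(r):.4f}")
```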
2. Functions occupy only a small portion of a phase space
Functions, by nature, impose limitations. A relation that associates an element in a domain with more than one element in a codomain would not be a function. A function associates each domain element with only one codomain element. In this way functions are very orderly. To give an example, in an orbital a given point in space (a domain element) can have only one amplitude value (the codomain element). This is highly limited. To illustrate this, imagine a phase space of all possible combinations of domain and codomain values. Or to give a simpler comparison, imagine a linear function on an x-y plane; for example, the function y = x. This is a straight line at a 45 degree angle to the x and y axes. The straight line is the function. But the phase space is the entire plane. The plane contains all possible combinations of x and y values. But the function is restricted to only those points where y = x. A similar principle applies to orbitals. The corresponding phase space would be, not a plane, but a 4-dimensional hyperspace with axes r, θ, φ, and ψ. The phase space is the entire hyperspace. But the wave function, or orbital, is restricted to a 3-dimensional hypersurface within this 4-dimensional hyperspace. This kind of restriction of functions to small portions of phase spaces is a characteristic feature of structure generally.
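A quick numerical illustration of how little of the plane the function y = x actually occupies; the sample size and random seed are arbitrary choices for the sketch.

```python
import random

# Sample random points in the square [-1, 1] x [-1, 1] and count how many
# land exactly on the function y = x. The line has zero area, so essentially
# none of the sampled points satisfy it.
random.seed(0)
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(100_000)]
on_the_line = sum(1 for x, y in points if y == x)
print(on_the_line)   # almost certainly 0
```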
3. Structural embedding
Embedding is a feature of structure that came up in music and has come up again, in even more obvious form, in chemistry. Just looking at proteins, the different orders of structure are quite obvious and well known to biochemists, with their conceptual division of proteins into primary, secondary, tertiary, and quaternary structures, each level incorporating the lower-level structures embedded into it. Using proteins as an example, even primary structures have embedded into them several layers of additional structure, such as functional groups, molecular orbitals, atomic orbitals, and the general structure of the wave function itself. One key feature of such embedding is that properties and functionality of the lower-level structures are taken up and integrated into the higher-level structures into which they are embedded. We saw, for example, how the three-dimensional tertiary structure of a protein takes the form that it does because of the properties of functional groups in the side chains of individual amino acids, in particular polarity and nonpolarity.
4. Repeatable units
A final key takeaway is the use of repeatable units in the process of structural embedding. In retrospect this is certainly something that applies to music as well. We see repeatable units in the form of pitches and notes. In chemistry we see repeatable units in macromolecules like polymers and proteins. Polymers, like polyethylene, PVC, ABS, polyester, etc., certainly use repeatable units; in some cases a single repeating unit, or sometimes two or three. Proteins make use of more repeatable units, but even there they make use of a limited number: 20 amino acids. We see here an important general principle of structure: high-level structures tend to be composed through the repeated use of a limited number of lower-level structures, rather than by forming as a single, bulk, irreducible macrostructure. The use of lower-level repeatable units in the higher-level structure facilitates the encoding and construction of high-level structures.
And that wraps up this study of structure in chemistry. Thank you for listening.