The Unintelligible Remainder

Could anything truly exist in such a fashion that it could never be either perceived or thought of, even if only in principle? How would such a reality be distinct from absolute nothingness? A look into the philosophical issues of being and knowing with John Locke, Immanuel Kant, Martin Heidegger, Joseph Ratzinger, and David Bentley Hart.

“Could anything truly exist in such a fashion that it could never be either perceived or thought of, even if only in principle? How would such a reality be distinct from absolute nothingness?”

This is a question posed by David Bentley Hart in his recent book You Are Gods: On Nature and Supernature. I think it’s an interesting question and it touches on some of the most foundational issues in philosophy.

I’ll call that which could never be either perceived or thought of the “unintelligible remainder”. It’s that which is left unperceived and unknown in all our perception and knowledge of things because it is intrinsically imperceptible, unknowable, and unintelligible to intelligent beings. To frame this idea it’s helpful to refer to the philosophy of John Locke and Immanuel Kant. The concepts of subject and object are important to both. Philosophically, a subject is a being who has a unique consciousness and unique personal experiences. An object is something that the subject observes, perceives, or relates to in some way. Both Locke and Kant concerned themselves with how thinking subjects relate to the objects of their experience, and in particular the limitations, or unintelligible remainder, of the subject’s grasp of the object.

In An Essay Concerning Human Understanding John Locke introduced what he called the primary and secondary qualities of things. As an example, for a light wave or a sound wave one primary quality would be its wavelength. Primary qualities are in the objects themselves, independent of our perceptions of them. A secondary quality, by contrast, would be something like the color of light or the pitch of a sound. Secondary qualities are not in the objects themselves but are products of our modes of perception; they are our own quirky human ways of perceiving things.

Immanuel Kant had some similar ideas. Instead of primary and secondary qualities, in his Critique of Pure Reason he used the terms noumena and phenomena. The noumenon is the thing-in-itself, the object as it really is, independent of our perception. The phenomenon is what we perceive of it. Kant stressed that we cannot know the noumena, the things themselves as they really are. We can only know the phenomena. Our knowledge of the world outside our heads is necessarily filtered or mediated.

Sometimes you might hear this in the form of the claim that we never actually see things themselves. What’s really happening is our brains are responding to a series of physical processes and biochemical reactions, as photons impinge on our retina and induce phototransduction in photoreceptor cells, resulting in a cascade of signals carried via the optic nerve to the visual cortex, and so on. In effect we are several layers of mediation removed from the world outside our heads. And a lot is left out in the process of translation.
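
Just to make this picture of layered, lossy mediation concrete, here’s a toy sketch in code. It is not a model of actual neurophysiology; the functions, thresholds, and labels are invented purely for illustration:

```python
# Toy illustration of layered, lossy mediation in perception.
# Not a model of real phototransduction: the thresholds and labels below
# are invented, only to show that each layer discards information.

def receptor_response(wavelength_nm: float) -> str:
    """Crudely bin a wavelength into a receptor channel (information is lost here)."""
    if wavelength_nm < 490:
        return "short"
    elif wavelength_nm < 580:
        return "medium"
    return "long"

def neural_signal(channel: str) -> int:
    """Map the channel to a coarse signal level (more information is lost)."""
    return {"short": 1, "medium": 2, "long": 3}[channel]

def percept(signal: int) -> str:
    """Map the signal to a named color category (still more is lost)."""
    return {1: "bluish", 2: "greenish", 3: "reddish"}[signal]

for wavelength in (610.0, 700.0):  # two physically different stimuli...
    print(wavelength, "->", percept(neural_signal(receptor_response(wavelength))))
# ...both come out as "reddish": the percept underdetermines the thing itself.
```

Two physically different inputs end up as the same percept, which is one way to picture how much gets left behind at each layer of mediation.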

What I call the unintelligible remainder is a feature of this kind of philosophy in which there is a gulf between things in themselves and our perceptions of them. There’s always something inaccessible to us. A remainder that is inaccessible or unintelligible. To put it in the form of a conceptual equation:

Things In Themselves – Our Perceptions of Them = The Unintelligible Remainder

The unintelligible remainder is what’s left over; the aspect of things that remains inaccessible and unintelligible to us. What would that unintelligible remainder be? Well, it’s impossible to say because it’s intrinsically inaccessible and unintelligible to us. But then there’s another question. Why should we think that there is such an unintelligible remainder? Why should we think that any such remainder exists if it’s something we can never really know anything about?

Let’s break such remainders down into two different types:

1. Things that we don’t know about but could know about
2. Things that we don’t know about and never could know about

How different are these? Maybe the difference is slight. Or maybe it’s huge, even ontological. 

We can reason inductively that there are a lot of things that we don’t know about but that we could know about because in the past there have been things that we didn’t know about at one point but later came to know about.

For example, even though we’ve always been able to see light and color, we weren’t always aware of the quantifiable spectrum of wavelengths, or that it extends into wavelengths we can’t see with our eyes, as with infrared and ultraviolet. But we can quantify and detect those wavelengths now. The fields of optics and quantum mechanics have further increased our understanding of light.

We can reason that we will continue to come to know about more things that we don’t currently know about. For example, we’ll certainly continue to learn more about the nature of light. Such things are obviously knowable and intelligible because we have come to know about them.

But we can’t reason inductively in the same way about things that we could never know about. Trivially, we’ve never come to know about something that is unknowable. Obviously. Why should we think that such unknowable things exist as an unintelligible remainder?

I think the reasoning about these two kinds of remainders is quite different so I want to dwell on this difference for a bit.

In the case of things that we don’t know about but could know about, we can reason that such things exist through inductive reasoning. We know this is how things have worked in the past. There have been aspects of things that we didn’t know about before that we’ve come to know about later. For any particular thing we can’t conclude deductively that there’s nothing left about it that we don’t know. But we kind of expect that there’s more there because that’s how it’s always been before.

But this kind of inductive reasoning doesn’t work for things that we don’t know about and never could know about. Why is that? Because we’ve never come to know about something that we could never know about. So it’s completely different.

But we kind of want to say still that things exist that we could never know about. Or that there are aspects of things that we could never know about. Why is that? Part of it may be a spill-over effect of our inductive reasoning about things that we didn’t know about but later came to know about. It seems like if there’s all this unknown stuff there should be stuff that we could never know about. And maybe there is a lot of stuff that we never will know about. But that’s different from stuff that we never could know about. Maybe another reason is humility, recognition of our own finitude and limited capacities. Humility is certainly admirable. But I’m not sure it’s enough to make that kind of positive claim. The only way I can see that we could really conclude that there do exist such unknowables would be some kind of indirect argument of impossibility, similar to the halting problem or Gödel’s incompleteness theorems. I don’t know of any such argument of impossibility for unintelligible remainders but it’s an intriguing possibility.
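
To give a flavor of what such an indirect argument of impossibility looks like, here is the shape of the halting-problem argument in a short sketch. The halts function is hypothetical, and that is the whole point: the diagonal construction shows that no correct implementation of it can exist.

```python
# Sketch of the halting-problem diagonal argument, as an example of the
# general shape of an "argument of impossibility". The function `halts`
# is hypothetical; the argument shows no correct version of it can exist.

def halts(program, argument) -> bool:
    """Pretend decider: True iff program(argument) would eventually halt.
    Assumed to exist only for the sake of contradiction."""
    raise NotImplementedError("no such decider can exist")

def diagonal(program):
    # If `halts` existed, this function would defeat it:
    if halts(program, program):
        while True:   # halts says it halts, so loop forever instead
            pass
    return None       # halts says it loops, so halt immediately

# Asking halts(diagonal, diagonal) yields a contradiction either way,
# so a universal halting decider is impossible.
```

Whether any analogous argument could be constructed for or against the existence of an unintelligible remainder is, as far as I know, an open question.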

What about the alternative possibility that there is no unintelligible remainder? That everything that exists is intrinsically intelligible and could be perceived and thought of? Are there reasons to think that could be the case?

We can call the foregoing picture with Locke and Kant one of the “Cartesian subject”, which owes its name to the philosopher Rene Descartes. The basic model is of me here “inside” my head as a thinking subject, receiving sense data from objects “outside” in the world. So there’s this stark division between subject and object. This model of the Cartesian subject is quite powerful and intuitive. And it fits with the idea that there is an unintelligible remainder to the objects in the outside world, albeit inconclusively as previously discussed. But there are alternatives. I’ll talk about two: the philosophy of Martin Heidegger and the classical philosophy of Logos.

Martin Heidegger was working out of the field of phenomenology, which is the philosophical study of the structures of experience and consciousness. But his primary focus was ontology, the philosophy of being. His work was an effort to explain the meaning of being, what it means for a thing to be. In Being and Time he first approached this question through the being of human beings, what he called “Dasein”, a German word usually rendered “being-there”, which he repurposed as a technical term. He discarded the concept of the Cartesian subject, a subject separated from the world of objects, with its split between subject and object. Instead, for Heidegger we are “being-in-the-world”.

The philosophy of Being and Time, together with Heidegger’s later work, is extremely vast, so I’m only sticking to a few key points related to my topic. One way he describes being is as disclosure, as things being revealed. Many of his circumlocutions have the effect of keeping the active role away from any kind of Cartesian subject. Instead of us as subjects perceiving objects there is disclosure and being revealing things. Another interesting concept of his is the “clearing”, like a clearing in the woods. In the dense forest it is dark and obscure but in the clearing there is space to see things. I am like a clearing in the woods, a site of disclosure and revealing, where things are revealed around me. It’s a very unusual way of speaking, but these circumlocutions have the aim of directing our thinking away from the subject-object split.

Another important Heideggerian idea is that the disclosure of being to us comes in terms of our projects and interests. Things like tools are disclosed to us in the first place as tools rather than as atomic facts that we then deduce to be tools in a secondary way. Heidegger’s example is a hammer. In the Kantian view we’d receive raw sense data, percepts, which our minds would process into concepts using “categories”, sort of like mental modules. We’d take in the raw sense data first and only then would our minds work out that it is a hammer. But Heidegger rejects that idea. For Heidegger we’re not isolated in our own minds looking out at the world, receiving raw sense impressions. We’re already in the world. We’re already in the workshop, smelling the sawdust, engaged in the activity of building something. The hammer is a tool for hammering as part of our project. We may not even “see” it when we’re using it if we’re really in the zone. It’s just part of a seamless flow of activity. This is a very different way of thinking.

One of the fascinating things about this is that it has very tangible implications in the field of artificial intelligence. If you think about the different approaches I’ve described here you can imagine that it makes a really big difference whether you approach AI in a Lockean, Kantian way versus a Heideggerian way. And I think this is actually one of the best ways to approach Heidegger’s thought. One of the major figures in the 20th-century debate over artificial intelligence was the Heideggerian philosopher Hubert Dreyfus, a prominent critic of the symbolic AI program. Here’s his account:

“In 1963 I was invited by the RAND Corporation to evaluate the pioneering work of Allen Newell and Herbert Simon in a new field called Cognitive Simulation (CS). Newell and Simon claimed that both digital computers and the human mind could be understood as physical symbol systems, using strings of bits or streams of neuron pulses as symbols representing the external world. Intelligence, they claimed, merely required making the appropriate inferences from these internal representations… As I studied the RAND papers and memos, I found to my surprise that, far from replacing philosophy, the pioneers in CS had learned a lot, directly and indirectly, from the philosophers. They had taken over Hobbes’ claim that reasoning was calculating, Descartes’ mental representations, Leibniz’s idea of a ‘universal characteristic’ – a set of primitives in which all knowledge could be expressed – Kant’s claim that concepts were rules, Frege’s formalization of such rules, and Russell’s postulation of logical atoms as the building blocks of reality. In short, without realizing it, AI researchers were hard at work turning rationalist philosophy into a research program.”

“…I began to suspect that the critical insights formulated in existentialist armchairs, especially Heidegger’s and Merleau-Ponty’s, were bad news for those working in AI laboratories – that, by combining rationalism, representationalism, conceptualism, formalism, and logical atomism into a research program, AI researchers had condemned their enterprise to reenact a failure.”

“…To say a hammer has the function of being for hammering leaves out the defining relation of hammers to nails and other equipment, to the point of building things, and to the skills required when actually using the hammer – all of which reveal the way of being of the hammer which Heidegger called readiness-to-hand.”

“…It seemed to me, however, that the deep problem wasn’t storing millions of facts; it was knowing which facts were relevant in any given situation. One version of this relevance problem was called ‘the frame problem.’ If the computer is running a representation of the current state of the world and something in the world changes, how does the program determine which of its represented facts can be assumed to have stayed the same, and which would have to be updated?”

I think that’s quite fascinating and one of the best examples I’m aware of where we can see that the opaque writing of a Continental philosopher is not just meaningless gibberish or gratuitous navel gazing without any actual implications. If we ever end up creating artificial intelligence with true self-consciousness – and I think we will – I suspect that one of these approaches will work and the other will not. And in the process that will tell us a lot about the generalized nature of self-consciousness as such, including the nature of our own self-consciousness. It may also tell us about the nature of being itself, what it means for things to be.
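
To make the “frame problem” from Dreyfus’s account a bit more concrete, here is a toy sketch. The facts and the update rule are invented purely for illustration:

```python
# Toy illustration of the frame problem: a program holding represented facts
# must decide, after one change, which facts still hold. Everything here is
# invented for illustration.

world = {
    "door_open": False,
    "light_on": True,
    "cat_location": "sofa",
    "coffee_hot": True,
}

def apply_event(state: dict, event: str) -> dict:
    """Naive update: change the one fact the event mentions and assume
    everything else stays the same."""
    new_state = dict(state)
    if event == "open_door":
        new_state["door_open"] = True
        # But did the cat leave? Did the draft cool the coffee? The program
        # has no principled way of knowing which facts are relevant to re-check.
    return new_state

print(apply_event(world, "open_door"))
```

The hard part is not storing the facts but knowing, in a principled way, which of them are relevant to revisit after a change. That is the relevance problem Dreyfus describes.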

How does this relate to the question of the unintelligible remainder? I don’t think Heideggerian ontology addresses that as much as the approach I’ll be talking about next, but I think there are some interesting things here to think about. What I see with Heidegger isn’t so much the elimination of a remainder as the presence of certain indispensables. And these are indispensables that in other frameworks seem less real or fundamental to the being of things; in other words, quite dispensable. We might think that what a hammer “really” is is a meaningless collocation of atoms. But in Heidegger’s ontology this is not how the being of the hammer is revealed to us. Far from it. That may not seem like a big deal. Why should the way we see things be so important or say anything about the way things really are? But here I’d go back to AI. For a self-conscious AI certain things are going to be indispensable for it to make its way around in the world. AI without the indispensables won’t work. And I’d say that’s because it won’t approach the world correctly. A self-conscious AI will have to see the world in terms of projects, activities, and interests, populated with things in terms of these interests. Those are the indispensables that make up the reality of our world. So in a reverse sort of way it may be that the Lockean-Kantian approach does have a remainder that the Heideggerian approach is able to account for.

The second alternative to the Cartesian subject I’d like to talk about is the classical philosophy of Logos. I talked about this in some detail in a previous episode, “Logos: The Intellectual Structure of Being”. Logos has its roots in Greek philosophy but has since been most developed in Christian philosophy. The two philosopher-theologians I’ll refer to here are Joseph Ratzinger and David Bentley Hart.

Joseph Ratzinger, later Pope Benedict XVI, gave an excellent overview of Logos in his book Introduction to Christianity, in which he calls Logos the “intellectual structure of being”. He says, “All being is a product of thought and, indeed, in its innermost structure is itself thought.” What implication does this have for the way we perceive and understand things? Ratzinger says: “There is also expressed the perception that even matter is not simply non-sense that eludes understanding, that it too bears in itself truth and comprehensibility that makes intellectual comprehension possible.” That’s the key. With the Logos all of reality is intellectual or, in other words, thought. There can be no unintelligible remainder to things when all of reality is itself thought in its innermost structure.

The process of perceiving the world in this view is not one of processing mere matter with our mental faculties. It’s a process that is parallel to the structure of reality itself. As Ratzinger says: “All our thinking is, indeed, only a rethinking of what in reality has already been thought out beforehand.” As we conceive of the world through thought we are retracing the thought that comprises its essence. “The intellectual structure that being possesses and that we can re-think is the expression of a creative pre-meditation, to which they owe their existence.”

Does this kind of intellectual structure to all of reality entail the existence of God? Ultimately it may. But I think there are a couple other ways to think about it. Consider three possibilities:

1. The rationality of reality is a conditional property, conditional on there being intelligent beings in reality.
2. The rationality of reality is independent of any intelligent beings.
3. The rationality of reality is the rationality of a mind that grounds reality.

Only the third requires God.

In the first option the rationality of reality is a conditional feature, a feature that reality would have if certain conditions were met, even if they are not otherwise actualized. Something of the form:

1. IF there are intelligent beings in reality.
2. AND IF any existing intelligent beings obtain some degree of accurate understanding of reality.
3. THEN such intelligent beings will find reality to be intelligible and rational.
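
Put a bit more formally, with predicate names I’m introducing only for illustration, the conditional reading might look like this:

```latex
% R: a reality; I(x): x is an intelligent being in R;
% U(x,R): x attains some accurate understanding of R;
% Int(R): R proves intelligible and rational to such beings.
\[
  \exists x\,\bigl(I(x) \wedge U(x,R)\bigr) \;\rightarrow\; \mathrm{Int}(R)
\]
```

On this reading, intelligibility is indexed to the possible presence of knowers rather than to reality taken by itself.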

This is probably the option that seems most immediately plausible and straightforward.

The second option moves away from a subjective understanding of rationality to an objective understanding. We can think of this just as consistency. For intelligent beings instrumental rationality is consistency between actions and intentions. But apart from intelligent beings we could think of consistency between states of affairs. At a most basic level, noncontradiction. For some state of affairs, S, it won’t be the case that both S and not-S.

Ratzinger calls this kind of objective consistency “objective mind”. There is at least an “as-if” quality to the intelligibility of reality. It is structured “as if” rationally constructed. I think it’s possible to work within that framework. But ultimately I follow Ratzinger in his view that “objective mind is the product of subjective mind and can only exist at all as the declension of it, that, in other words, being-thought (as we find it present in the structure of the world) is not possible without thinking.”

Let’s turn now to David Bentley Hart and his discussion of this in his book You Are Gods. He says:

“We are accustomed, here in modernity’s evening twilight, to conceive of our knowledge of the world principally as a regime of representation, according to which sensory intuitions are transformed into symbolic images by some kind of neurological and perceptual metabolism, and then subjected to whatever formal conceptual determinations our transcendental apperception and apparatus of perception might permit.”

This is a restatement of the fundamental problem at hand. As a thinking subject, I’m stuck inside my head, separated from the world out there, receiving and processing raw sense data, and trying to come up with a picture of the objects out in the world as best I can. But that picture is always incomplete, and the things themselves elude full intelligibility. As Hart says:

“Being in itself possesses an occult adversity or resistance to being known. All that we experience in experiencing the world, then, is an obscure, logically inexplicable, but unremitting correspondence between mind and world, one whose ontological basis is not a presumed primordial identity between them, but rather something like a pre-established harmony or purely fortuitous synchrony—or inexplicably coherent illusion.”

Some opaque language here but I’ll explain. What Hart calls the “occult adversity or resistance to being known” is what I’m calling the unintelligible remainder. As I sit isolated inside my head looking out into the world putting a picture of it together, the picture that I see has order and regularity. But why? Ratzinger says it’s because the world is intrinsically rational. If that were not the case the order and regularity would be remarkable indeed. This is what Hart means when he says it would be “purely fortuitous synchrony—or inexplicably coherent illusion.” But Hart rejects that idea and, like Ratzinger, sees reality as intrinsically rational. Like Ratzinger he understands our perception and knowledge of things to be a process that is parallel to the structure of reality itself.

“Mind and world must belong to one another from the first, as flowing from and continuously participating in a single source.”

“Being and knowing must, then, coincide in some principle of form.”

Being and knowing are fundamentally linked in such a way that ontology, the philosophy of being, and epistemology, the philosophy of knowledge, “coincide as a single event of manifestation, of Being’s disclosure, which is to say also, of the full existence of what is made manifest.” There are some interesting similarities here with Heidegger in Hart’s idea of the “disclosure” of Being. In Hart’s view, being and knowing are ultimately one and the same. He’s very skeptical of the idea that the way things “really” are is something intrinsically unintelligible that we could never access or perceive.

“Under the regime of representation, the intelligible is a veil drawn before the abyss of the unintelligible, and the unintelligible is more real than the intelligible.”

This is the view he is going to criticize. That the unintelligible is more real than the intelligible.

“But what would it really mean to say that something exists that is, of its nature, alien to intelligibility? Can Being and knowing be wholly severed from one another without creating an intolerable contradiction? Could anything truly exist in such a fashion that it could never be either perceived or thought of, even if only in principle?”

“In principle” is a modifier that should not be overused but I think it’s appropriate here. The issue is not whether something currently is or can be perceived and thought of by finite human beings. As I said before, there’s been a lot of stuff that we haven’t been able to perceive or know about in the past that we’ve since gained the ability to perceive or know about by extending the reach of our innate capacities. Our innate capacities are the same as those of our ancestors 10,000 years ago. The things that are, in fact, perceivable and knowable to us were, in principle, perceivable and knowable to them. By analogy, there are things that are, in principle, perceivable and knowable to us that are not currently perceivable and knowable to us, in fact. With that in mind, Hart is asking whether, with this most expansive possible understanding of the perceptive and intellectual capacities of intelligent beings, anything could exist that eludes them. That would be the unintelligible remainder. And he asks:

“How would such a reality be distinct from absolute nothingness?”

I’ll bring up again my distinction between things that we don’t know about but could know about and things that we don’t know about and never could know about. Certainly the first of these is distinct from absolute nothingness. We can reasonably conclude by inductive reasoning that lots of things exist that we don’t know about. But we cannot conclude with that same kind of inductive logic that there are things that exist that we never could know about. We might want to say that there are such unknowables out of humility. Or maybe we can reason toward their existence through some kind of argument of impossibility. But Hart thinks that: “The more rational assumption is… that in fact mind and world must belong to one another from the first, as flowing from and continuously participating in a single source.”

“It certainly seems reasonable to assume that Being must also be manifestation, that real subsistence must also be real disclosure, that to exist is to be perceptible, conceivable, knowable, and that to exist fully is to be manifest to consciousness.”

Why is that the more rational assumption? Hart doesn’t really explain that but I don’t disagree. Everything we do know about the world indicates that it is rationally structured and we have no knowledge of anything that isn’t. That’s not an absolutely conclusive reason but I think it’s a compelling reason to think that everything that exists is rationally structured, perceivable, and intelligible.

“So long as any absolute qualitative disproportion remains between Being and knowing, then, Being cannot become manifest, and so is not. Being must be intelligible, or even intelligibility itself. The perfectly unintelligible is a logical and ontological contradiction.”

There are some interesting ideas here that I think could use some further development. If the perfectly unintelligible, what I’ve been calling the unintelligible remainder, really is a logical and ontological contradiction, that would be a compelling refutation of the existence of the unintelligible remainder. It looks like the argument for such a logical and ontological contradiction would involve a demonstration of the necessary connection between being and manifestation, or being and disclosure as Heidegger put it: that what it means for something to be is a process of unconcealment and disclosure.

So going back to the opening question. “Could anything truly exist in such a fashion that it could never be either perceived or thought of, even if only in principle?” Is there an ineliminable, unintelligible remainder to all our knowledge and perception? I don’t think there is. I suspect that a great deal falls into the class of things that we don’t know about. Probably the vast majority of the things that make up reality. Nevertheless, I think they are all things that we don’t know about but could know about because all of reality is rationally structured and mind and world, thought and being, flow in parallel from the same source.

Classical Theism

A brief introduction to classical theism. Classical theism is a systematic understanding of God shared among many Christian, Jewish, Pagan, Muslim, and Hindu thinkers throughout history. It is primarily philosophical rather than scriptural in origin, but it also opens up an intellectual space for understanding theism as a plausible and reasonable way to see reality. And so it makes for a useful point of entry into the world of scripture and religious experience.

With this episode I would like to do some systematic theology and focus on the most foundational subject of theology: God. Systematic theology is theology that pursues an orderly, rational, and coherent method. There are benefits to the systematic, orderly approach, which I want to take advantage of here. But it is admittedly not characteristic of the texts of scripture, which are often disorderly, uncanny, and occasionally contradictory. The systematic approach is a convenient way to understand and analyze theological concepts, but it’s usually not the way we actually encounter these things in religious experience. I’m reminded here of Blaise Pascal’s statement: “God of Abraham, God of Isaac, God of Jacob, not of the philosophers.” There’s much to be said for that sentiment. Nevertheless the systematic approach still has significant utility for comprehension and analysis. In talking about God in this systematic way the understanding of God I will take is that of classical theism.

In what follows I just want to lay out what classical theism is. I won’t get too much into arguments or proofs for God or for classical theism. That’s another topic. But I hope that just presenting what classical theism is will show it to be a very plausible and reasonable thing to believe. Even before taking any steps to argue for it or prove it.

First some definitions. Theism is the belief in the existence of God or gods. Monotheism is the belief that there is only one God. Classical theism is the belief that God is the source of all things. In more technical terms classical theism is the belief that God is metaphysically absolute. Classical theism is a form of monotheism but it’s more theoretically developed. It takes the belief that there is only one God and analyzes what that means, the way in which there is only one God, what this one God must be like. This is what makes it systematic, theological, and philosophical.

What does it mean for God to be metaphysically absolute, the source of all things? There are two major ways for there to be only one God. They are quite different and imply very different things about God’s nature. One way is for there to be a pre-existing reality in which God exists, a reality that is independent of God and prior to God. There’s a universe that happens to have a God in it and there’s only one God. The other way, the way of classical theism, is for God to be prior to everything. There is nothing without God. All reality depends on God for its existence. We could think of these loosely as God being inside all reality versus God being outside or beyond all reality.

In classical theism all of reality derives from God and depends on God. It’s even possible for God to be the only thing that exists. But it’s actually not possible for God not to exist. This is to say that God is absolutely necessary. Nothing else is necessary in this way. Everything else is contingent. It is possible for everything else not to exist. But it is not possible for God not to exist.

Classical theism tends to be philosophical, trans-religious, and trans-scriptural, meaning that it spans many religions and the texts of many religious traditions. Throughout history classical theists have been Christian, Pagan, Jewish, Muslim, and Hindu. Obviously classical theists in each of these traditions disagree on a lot. But they tend to agree in their classical theism and in their understanding of God’s primary attributes, even if they disagree on the specific things they believe God to have done in human history. Pagan classical theists include Plotinus and Proclus. Jewish classical theists include Philo of Alexandria and Maimonides. Christian classical theists include Augustine, Pseudo-Dionysius, Anselm, and Thomas Aquinas. Muslim classical theists include Ibn Sina and Ibn Rushd. I also think that many of the ideas of Hindu thinkers like Shankara and Ramanuja have much in common with classical theism.

What’s interesting about classical theism is that it basically starts from the premise of God’s metaphysically absolute nature and derives God’s attributes from there. These attributes often coincide with scripture, albeit not always perfectly, which is an important theological issue. But that’s also a topic for another time. The attributes of God in classical theism include the following:

Aseity
Necessity
Simplicity
Eternity
Immutability
Immateriality
Omnipotence
Omniscience
Perfect Goodness

Aseity is not a well-known term but it’s very important to the topic. The word comes from Latin “a se” meaning “from self”. Aseity is the property by which a being exists of and from itself, and not from anything else. God’s aseity means that God does not depend on anything else for his existence; not on the universe, not on anything at all.

Necessity is when something cannot fail to be the case. For example, logical truths are generally considered to be necessarily true. An example would be the proposition “If p and q, then p”. This would seem to be necessarily true. It couldn’t be otherwise. Philosophers might still debate that but it should at least be clear what we’re talking about with necessity. God’s necessity means that God cannot not exist. Understanding why that is and arguing for it is a bigger topic. But understanding the claim that God is necessary is key to understanding what classical theism is.
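
For those who like notation, necessity is standardly written with the box operator of modal logic. A minimal rendering of the two claims in this paragraph, using E(g) as a stand-in predicate for “God exists”, might be:

```latex
% The logical-truth example: necessarily, if p and q, then p.
\[
  \Box\bigl((p \wedge q) \rightarrow p\bigr)
\]
% The classical-theist claim that God cannot not exist:
\[
  \Box\, E(g)
\]
```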

Simplicity means not having any parts. According to classical theism God is simple in this way. God is not composed of parts. Put another way, God is not composite. Composite is the opposite of simple. Many philosophers consider divine simplicity to be the most important concept of classical theism and hold that all of classical theism derives from it and is ultimately equivalent to it. To understand some of the motivation behind this: anything that is composite, made up of parts, has to be put together in the way that it is put together. And composition of this kind makes it dependent on whatever puts it together. So it wouldn’t be the first principle or source of all things.

Eternity refers to what exists outside of time. Eternity, as understood in classical philosophy, is different from how the word is commonly understood. There is the notion of things being everlasting, existing within time but lasting forever, for an infinite duration. But this is different from the kind of eternity in classical theism. God’s eternity is his existence outside of time itself. Time, in fact, would be one of the things created by God. We can imagine God looking at the passage of time as we look at the passage of time for characters in a book. For the characters in a story, if they were real, they would experience time sequentially. But for us as readers we can look at the story as a whole, all at once, because we are outside of the time of that story. Like the characters in that story, we experience our time sequentially. The past is behind us. The future is ahead of us. Only the present is before us. But for God it is all present and equally before him.

Immutability is the impossibility of changing. There’s definitely a relation here to eternity. God could hardly change across time since he exists outside of time itself. This brings up an interesting question about whether God, being immutable, will seem the same to us at all times. Not necessarily. Even if God doesn’t change, we do. For example, God is perfectly good and that doesn’t change. But our morality varies significantly. The way we perceive God will vary significantly depending on whether our conduct is mostly moral versus mostly immoral.

Immateriality, as the term suggests, is the quality of not being material. Even without a technical definition I think we all have a good intuition of what materiality is. In fact, it’s more difficult to think of anything that isn’t material. It’s the material that makes up our immediate experience. Matter is the stuff that, when you kick it, it kicks back. Material things exist in time and space. If we refer to more modern chemistry and physics, matter is composed of particles, waves, and fields. Particles like protons, neutrons, and electrons have mass; particles like photons do not. But they’re all material. Material things interact with each other. They exchange momentum; they attract or repel each other through electric charge. Photons induce chemical reactions. But God, being immaterial, is not like any of these things.

How could anything be immaterial? This was a question that Augustine had. He was finally able to conceptualize immaterial entities by way of Platonist and Neo-Platonist philosophy, which have a lot to say about immaterial forms. Today we most commonly come across immateriality in the form of abstract, mathematical, and logical objects. The philosopher Phillip Cary uses the example of the Pythagorean theorem. The Pythagorean theorem is not something that exists in space and time. It’s eternal, necessary, and omnipresent. It didn’t ever start being true and it will never stop being true. It cannot not be true. And it’s true everywhere. It’s not made up of particles, waves, or fields. It’s not something you handle or that kicks back. That gives an idea of what an immaterial thing can be like.
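
For reference, the theorem Cary’s example appeals to is the familiar relation for a right triangle with legs a and b and hypotenuse c:

```latex
\[
  a^{2} + b^{2} = c^{2}
\]
```

Nothing about that truth is located at any place or time, which is the point of the example.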

God is not an abstract, mathematical, or logical object. But he is immaterial in classical theism. He’s more like an abstract, mathematical, or logical object than he is like an electron, proton, or magnetic field.

Omnipotence is the quality of having unlimited power. This is very related to God’s nature as metaphysically absolute, the source of all things. All things come from God and are the way they are because of God. There is no other source for all that is and no other power in serious competition with God. God is able to do anything that it is possible to do. What kind of constraints does that condition impose? What would be impossible for God? Contradiction certainly. Even God cannot make something to be the case and not be the case. You’ve probably heard the question, often asked in jest, “Could God make a stone so heavy that he couldn’t lift it?” Well, no. That would be a contradiction. Other constraints imposed by consistency may be more subtle. Like, why does God permit human history to proceed in certain ways, especially in ways that we would much prefer it didn’t? Here again, self-consistency probably plays an important role. Human free will is an important constraint. And there are likely other, unknown constraints, resulting from God’s unrevealed purposes.

Omniscience is the quality of knowing everything. This is also very related to being metaphysically absolute, the source of all things. As the cause of all things God also has knowledge of all things. If we imagine all things that can be known as a book, God knows all things in that book, not only because he has read it, but also because he wrote it. He is the author of all that is. Many of the foregoing points about omnipotence apply here as well. There’s a classic concern about the conflict between divine omniscience and human free will. If God knows everything, including everything that we will ever do, can we really be said to freely choose to do those things? That’s a complicated problem and a whole topic in itself. Without actually resolving that question I’ll just make an observation using the analogy of the author. There is a sense in which the author of a story is constrained by the story itself. Authors can arbitrarily impose nonsensical decisions on their characters. But good authors don’t. Good authors follow their stories where they naturally lead. Their characters, even though they’re fictional, have a kind of free will of their own. That’s just an analogy but I think something similar applies to God’s authorship of all things and his knowledge of them. On the one hand he is the author and cause of all things. But this authorship and resulting knowledge is not just arbitrary. The evolution of all things, especially of human history, makes sense and has a narrative coherence to it.

Finally, God is perfectly good. In Plato’s Republic, Socrates actually placed “the form of the Good” at the highest point on his spectrum of entities, the Divided Line. Goodness is not incidental to God’s nature but is absolutely intrinsic to who he is. One of the oldest problems in moral philosophy is whether God decrees what is good because it is good or whether it is good because he decrees it. This is a form of the Euthyphro Dilemma, based on another of Plato’s dialogues. Put another way, the question is whether God is prior to goodness or goodness prior to God. But in classical theism this is a false dilemma. God and the Good are not distinct at all. God is the Good.

Apart from classical theism the great worry with the Euthyphro Dilemma is that if goodness is merely whatever God decrees it to be then God could decree horrendous evils to be good. And they would have to be good. But under classical theism this is not possible. God is the Good. Neither God nor the Good is arbitrary. Horrendous evils cannot be made good and God cannot and will not decree them so. To do so would be to contradict his own nature.

All of the foregoing is principally philosophical rather than scriptural or based on revelatory religious experience. Though it has been most developed by Christians the foundations come largely from Platonist and Neo-Platonist philosophy, for example from Plotinus’s Enneads and Proclus’s Elements of Theology. Whether that is a weakness or a strength is a matter of perspective. I think it’s a strength but it also means that for Christian theology classical theism is a starting point rather than an end point. But I also consider it a great strength to see that classical theism spans so many traditions and schools of thought.

One of the best modern books on classical theism is David Bentley Hart’s The Experience of God. In that book he makes the following point:

“Certainly the definition of God I offer below is one that, allowing for a number of largely accidental variations, can be found in Judaism, Christianity, Islam, Vedantic and Bhaktic Hinduism, Sikhism, various late antique paganisms, and so forth (it even applies in many respects to various Mahayana formulations of, say, the Buddha Consciousness or the Buddha Nature, or even to the earliest Buddhist conception of the Unconditioned, or to certain aspects of the Tao…)” (p. 4)

I find the Hindu convergences especially fascinating. Shankara (circa 700 – 750) was an interpreter of Vedantic Hinduism, Advaita Vedanta to be specific. A central concept in that tradition is Brahman, the highest universal principle, the ultimate reality, the cause of all that exists. In Advaita Vedanta this is identical to the substance of Atman, the Self or self-existent essence of individuals. Ramanuja (1017 – 1137) had a different interpretation called “qualified non-dualism”, which makes a greater distinction between Atman and Brahman. But Brahman, the ultimate reality behind all that exists, is central to the thought of both.

There are four modern authors on classical theism that I really like. These are David Bentley Hart, Edward Feser, James Dolezal, and Matthew Barrett.

I already mentioned David Bentley Hart’s book The Experience of God: Being, Consciousness, Bliss. Hart is an Orthodox Christian and also has an interesting affinity for Hinduism. In fact, the subtitle to his book – “Being, Consciousness, Bliss” – is a nod to the Hindu concept of Satcitananda, a Sanskrit term for the subjective experience of Brahman, the ultimate unchanging reality. Satcitananda is a compound word consisting of “sat”, “chit”, and “ananda”: being, consciousness, and bliss. These three are considered inseparable from Brahman.

Edward Feser’s book Five Proofs of the Existence of God goes through five proofs that he reworks from the ideas of five individuals: Aristotle, Plotinus, Augustine, Aquinas, and Leibniz. Each of the five proofs is classically theistic in nature. Later chapters in the book also go over the classical theist understanding of God’s nature in great detail.

James Dolezal’s major book on this subject is All That Is in God: Evangelical Theology and the Challenge of Classical Christian Theism. Dolezal pushes back on what he perceives as some drift away from classical theism in Evangelical theology. I mentioned earlier that some theologians place simplicity foremost among God’s attributes. Dolezal is one of these. Simplicity is central to his thought.

Matthew Barrett is a delightful theologian to read. He is editor of Credo Magazine and host of the Credo podcast. One of his common themes on Twitter is the need for Protestants and especially Evangelicals to take seriously the thought of Aquinas, the Church Fathers, and classical theism. His major book on the subject is None Greater: The Undomesticated Attributes of God.

Why talk about classical theism? To lay all my cards on the table, I desire for all to believe in God the Father, his Son Jesus Christ, and in the Holy Spirit. I am enthusiastically Christian and desire for all to be so as well, because I believe it is true. One of the first steps in this direction is belief in God. But in modernity belief in God is hardly a given. It might even seem implausible. How is believing in God any different from believing in Santa Claus, the Tooth Fairy, or the Flying Spaghetti Monster? Well, it’s actually extremely different. And I think that to really understand classical theism is to understand this difference.

God is not just an invisible being that we have to believe in, just because. Blind faith. Classical theism is much more philosophically reflective than that. To think about God is to think about and have some interest and curiosity about everything that exists, why it exists, and why it is as it is. It is maximally inquisitive and critically so. I believe that classical theism is very plausible and reasonable. That’s not actually why I believe in God or in Christianity. I attribute my belief to revelation from the Spirit. But intellectual openness and receptivity preceded that Spiritual revelation. Seeing classical theism to be a plausible and reasonable way to understand reality broke down intellectual and cultural barriers to spiritual receptivity. And that’s why I think it’s a topic worth talking about.

Star Trek: Rapture

Rick and Todd discuss the Star Trek: Deep Space Nine episode “Rapture” in which Captain Sisko, Emissary to the Bajoran Prophets, receives a series of dramatic visions. We discuss the interpretive frameworks of spiritual and secular worldviews, the high costs of prophecy, the reliability or trustworthiness of powerful entities, the interaction of spiritual experiences and the brain, and the importance of the visions in the Deep Space Nine narrative arc.

Evolutionary Biology With Molecular Precision

Evolutionary biology benefits from a non-reductionist focus on real biological systems at the macroscopic level of their natural and historical contexts. This high-level approach makes sense since selection pressures operate at the level of phenotypes, the observed physical traits of organisms. Still, it is understood that these traits are inherited in the form of molecular gene sequences, the purview of molecular biology. The approach of molecular biology is more reductionist, focusing at the level of precise molecular structures. Molecular biology thereby benefits from a rigorous standard of evidence-based inference by isolating variables in controlled experiments. But it necessarily sets aside much of the complexity of nature. A combination of these two, in the form of evolutionary biochemistry, targets a functional synthesis of evolutionary biology and molecular biology, using techniques such as ancestral protein reconstruction to physically ‘resurrect’ ancestral proteins with precise molecular structures and to observe their resulting expressed traits experimentally.

I love nerdy comics like XKCD and Saturday Morning Breakfast Cereal (SMBC). For the subject of this episode I think there’s a very appropriate XKCD comic. It shows the conclusion of a research paper that says, “We believe this resolves all remaining questions on this topic. No further research is needed.” And the caption below it says, “Just once, I want to see a research paper with the guts to end this way.” And of course, the joke is that no research paper is going to end this way because further research is always needed. I’m sure this is true in all areas of science, but there are two fields in particular where I think it’s especially true. One is neuroscience, where there is still so much that we don’t know. And the other is evolutionary biology. The more I dig into evolutionary biology the more I appreciate how much we don’t understand. And that’s OK. The still expansive frontiers in each of these fields are what make them especially interesting to me. Far from being discouraging, unanswered questions and prodding challenges should be exciting. With this episode I’d like to look at evolutionary biology at its most basic, nuts-and-bolts level: the level of chemistry. This combines the somewhat different approaches of both evolutionary biology and molecular biology.

As summarized above, evolutionary biochemistry targets a functional synthesis of these two approaches: the non-reductionist, whole-organism perspective of evolutionary biology, where selection pressures operate on phenotypes in their natural and historical contexts, and the more reductionist, experimentally controlled perspective of molecular biology, which works at the level of precise molecular structures and gene sequences. Techniques such as ancestral protein reconstruction physically ‘resurrect’ ancestral proteins and allow their resulting expressed traits to be observed experimentally. This enables evolutionary science to be more empirical and experimentally grounded.

In what follows I’d like to focus on the work of biologist Joseph Thornton, who is especially known for his lab’s work on ancestral sequence reconstruction. One review paper of his that I’d especially recommend is his 2007 paper, Mechanistic approaches to the study of evolution: the functional synthesis, published in Nature Reviews Genetics and co-authored with Antony Dean.

Before getting to Thornton’s work I should mention that Thornton has been discussed by biochemist Michael Behe, in particular in his fairly recent 2019 book Darwin Devolves: The New Science About DNA That Challenges Evolution. Behe discusses Thornton’s work in the eighth chapter of that book. I won’t delve into the details of the debate between the two of them, simply because that’s its own topic and not what directly interests me here. But I’d just like to comment that I personally find Behe’s work quite instrumentally useful to evolutionary science. He’s perceived as something of a nemesis to evolutionary biology but I think he makes a lot of good points. I could certainly be wrong about this, but I suspect that many of the experiments I’ll be going over in this episode were designed and conducted in response to Behe’s challenges to evolutionary biology. Maybe these kinds of experiments wouldn’t have been done otherwise. And if that’s the case Behe has done a great service.

Behe’s major idea is “irreducible complexity”. An irreducibly complex system is “a single system which is composed of several well-matched, interacting parts that contribute to the basic function, and where the removal of any one of the parts causes the system to effectively cease functioning.” (Darwin’s Black Box: The Biochemical Challenge to Evolution) How would such a system evolve by successive small modifications if no less complex a system would function? That’s an interesting question. And I think that experiments designed to answer that question are quite useful.

Behe and I are both Christians and we both believe that God created all things. But we have some theological and philosophical differences. My understanding of the natural and supernatural is heavily influenced by the thought of Thomas Aquinas, such that in my understanding nature is actually sustained and directed by continual divine action. I believe nature, as divine creation, is rationally ordered and intelligible, since it is a product of divine Mind. As such, I expect that we should, at least in principle, be able to understand and see the rational structure inherent in nature. And this includes the rational structure and process of the evolution of life. Our understanding of it may be minuscule. But I think it is comprehensible at least in principle. Especially since it is comprehensible to God. So I’m not worried about a shrinking space for some “god of the gaps”. Still, I think it’s useful for someone to ask probing questions at the edge of our scientific understanding, to poke at our partial explanations and ask, “how exactly?” But, perhaps different from Behe, I expect that we’ll continually be able to answer such questions better and better, even if there will always be a frontier of open questions and problems.

With complete admission that what I’m about to say is unfair, I do think that some popular understanding of evolution lacks a certain degree of rigor and doesn’t adequately account for the physical constraints of biochemistry. Evolution can’t just proceed in any direction to develop any trait to fill any adaptive need, even if there is a selection pressure for a trait that would be nice to have. OK, well that’s why it’s popular rather than academic, right? Like I said, not really fair. Still, let’s aim for rigor, shall we? Behe gets at this issue in his best-known 1996 book Darwin’s Black Box: The Biochemical Challenge to Evolution. In one passage he comments on what he calls the “fertile imaginations” of evolutionary biologists:

“Given a starting point, they almost always can spin a story to get to any biological structure you wish. The talent can be valuable, but it is a two edged sword. Although they might think of possible evolutionary routes other people overlook, they also tend to ignore details and roadblocks that would trip up their scenarios. Science, however, cannot ultimately ignore relevant details, and at the molecular level all the ‘details’ become critical. If a molecular nut or bolt is missing, then the whole system can crash. Because the cilium is irreducibly complex, no direct, gradual route leads to its production. So an evolutionary story for the cilium must envision a circuitous route, perhaps adapting parts that were originally used for other purposes… Intriguing as this scenario may sound, though, critical details are overlooked. The question we must ask of this indirect scenario is one for which many evolutionary biologists have little patience: but how exactly?”

“How exactly?” I actually think that’s a great question. And I’d say Joseph Thornton has made the same point to his fellow biologists, maybe even in response to Behe. In the conclusion of their 2007 paper he and Antony Dean had this wonderful passage:

“Functional tests should become routine in studies of molecular evolution. Statistical inferences from sequence data will remain important, but they should be treated as a starting point, not the centrepiece or end of analysis as in the old paradigm. In our opinion, it is now incumbent on evolutionary biologists to experimentally test their statistically generated hypotheses before making strong claims about selection or other evolutionary forces. With the advent of new capacities, the standards of evidence in the field must change accordingly. To meet this standard, evolutionary biologists will need to be trained in molecular biology and be prepared to establish relevant collaborations across disciplines.”

Preach it! That’s good stuff. One of the things I like about the conclusion to their paper is that it talks about all the work that still needs to be done. It’s a call to action (reform?) to the field of evolutionary biology. 

Behe has correctly pointed out that their research doesn’t yet answer many important questions and doesn’t reduce the “irreducible complexity”. True, but it’s moving in the right direction. No one is going to publish a research paper like the one in the XKCD comic that says, “We believe this resolves all remaining questions on this topic. No further research is needed.” Nature and evolution are extremely complex. And I think it’s great that Thornton and his colleagues call for further innovations. For example, I really like this one:

“A key challenge for the functional synthesis is to thoroughly connect changes in molecular function to organismal phenotype and fitness. Ideally, results obtained in vitro should be verified in vivo. Transgenic evolutionary studies identifying the functional impact of historical mutations have been conducted in microbes and a few model plant and animal species, but an expanded repertoire of models will be required to reach this goal for other taxa. By integrating the functional synthesis with advances in developmental genetics and neurobiology, this approach has the potential to yield important insights into the evolution of development, behaviour and physiology. Experimental studies of natural selection in the laboratory can also be enriched by functional approaches to characterize the specific genetic changes that underlie the evolution of adaptive phenotypes.”

For sure. That’s exactly the kind of work that needs to be done. And it’s the kind of work Behe has challenged evolutionary biologists to do. I think that’s great. Granted, that kind of work is going to be very difficult and take a long time. But that’s a good target. And we should acknowledge the progress that has been made. For example, earlier in the paper they note:

“The Reverend William Paley famously argued that, just as the intricate complexity of a watch implies a design by a watchmaker, so complexity in Nature implies design by God. Evolutionary biologists have typically responded to this challenge by sketching scenarios by which complex biological systems might have evolved through a series of functional intermediates. Thornton and co-workers have gone much further: they have pried open the historical and molecular ‘black box’ to reconstruct in detail — and with strong empirical support — the history by which a tightly integrated system evolved at the levels of sequence, structure and function.”

Yes. That’s a big improvement. It’s one thing to speculate, “Well, you know, maybe this, that, and the other” (again, being somewhat unfair, sorry). But it’s another thing to actually reconstruct ancestral sequences and run experiments with them. That’s moving things to a new level. And I’ll just mention in passing that I do in fact think that all the complexity in Nature was designed by God. And I don’t think that reconstructing that process scientifically does anything to reduce the grandeur of that. If anything, such scientific understanding facilitates what Carl Sagan once called “informed worship” (The Varieties of Scientific Experience: A Personal View of the Search for God). 

With all that out of the way now, let’s focus on Thornton’s very interesting work in evolutionary biochemistry.

First, a very quick primer on molecular biology. The basic flow is that DNA makes RNA, and RNA makes proteins. Proteins do most of the structural and chemical work in living organisms. DNA is the molecule that contains the information needed to make the proteins. And RNA is the molecule that carries the information from DNA to the machinery that actually makes the proteins. The process of making RNA from DNA is called transcription, and the process of making proteins from RNA is called translation. These are very complex and fascinating processes. Evolution proceeds through changes to the DNA molecule called mutations. Some changes to DNA result in changes to the composition and structure of proteins, and those changes can have macroscopically observable effects.
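To make that flow concrete, here is a minimal Python sketch of the primer above: a made-up DNA sequence is transcribed into RNA and translated into a short peptide using a tiny, incomplete codon table. The sequence and the mini-table are purely illustrative, not real biological data.

```python
# Toy illustration of the central dogma: DNA -> RNA -> protein.
# The codon table is deliberately tiny; the real genetic code has 64 codons.

CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GCU": "Ala",
    "AAA": "Lys", "UGG": "Trp", "UAA": "STOP",
}

def transcribe(dna):
    """Transcription (simplified): read off the coding strand, swapping T for U."""
    return dna.replace("T", "U")

def translate(rna):
    """Translation: read codons three bases at a time until a stop codon."""
    peptide = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = CODON_TABLE.get(rna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return peptide

dna = "ATGTTTGCTAAATGGTAA"            # an invented coding sequence
rna = transcribe(dna)                 # "AUGUUUGCUAAAUGGUAA"
print(translate(rna))                 # ['Met', 'Phe', 'Ala', 'Lys', 'Trp']

# A single point mutation in the DNA can change the resulting protein:
mutant = dna[:4] + "A" + dna[5:]      # second codon TTT becomes TAT
print(translate(transcribe(mutant)))  # ['Met', '???', 'Ala', 'Lys', 'Trp'] -- UAU isn't in our mini-table
```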

In Thornton’s work with ancestral sequence reconstruction the idea is to look at a protein as it exists in present-day organisms, infer what that protein was probably like at an earlier stage of evolution, and then actually make that ancestral version. Reconstruct it. By physically making the protein you can examine its properties. As described in the 2007 Nature Reviews Genetics article:

“Molecular biology provides experimental means to test these hypotheses decisively. Gene synthesis allows ancestral sequences, which can be inferred using phylogenetic methods, to be physically ‘resurrected’, expressed and functionally characterized. Using directed mutagenesis, historical mutations of putative importance are introduced into extant or ancestral sequences. The effects of these mutations are then assessed, singly and in combination, using functional molecular assays. Crystallographic studies of engineered proteins — resurrected and/or mutagenized — allow determination of the structural mechanisms by which amino-acid replacements produce functional shifts. Transgenic techniques permit the effect of specific mutations on whole-organism phenotypes to be studied experimentally. Finally, competition between genetically engineered organisms in defined environments allows the fitness effects of specific mutations to be assessed and hypotheses about the role of natural selection in molecular evolution to be decisively tested.”

What’s great about this kind of technique is that it spans several levels of ontology. Evolution by natural selection acts on whole-organism phenotypes, so it’s critical to understand how phenotype differs across all the different versions of a protein. We don’t just want to know that we can make these different proteins. We want to know what they do, how they function. Function is a higher level of description. But we also want to be precise about what is physically there, and we have that as well, down to the molecular level. Atom for atom we know exactly what these proteins are.

To dig deeper into these experimental methods I’d like to refer to another paper, Evolutionary biochemistry: revealing the historical and physical causes of protein properties, published in Nature Reviews Genetics in 2013 by Michael Harms and Joseph Thornton. In this paper the authors lay out three strategies for studying the evolutionary trajectories of proteins.

The first strategy is to explicitly reconstruct “the historical trajectory that a protein or group of proteins took during evolution.”

“For proteins that evolved new functions or properties very recently, population genetic analyses can identify which genotypes and phenotypes are ancestral and which are derived. For more ancient divergences, ancestral protein reconstruction (APR) uses phylogenetic techniques to reconstruct statistical approximations of ancestral proteins computationally, which are then physically synthesized and experimentally studied… Genes that encode the inferred ancestral sequences can then be synthesized and expressed in cultured cells; this approach allows for the structure, function and biophysical properties of each ‘resurrected’ protein to be experimentally characterized… By characterizing ancestral proteins at multiple nodes on a phylogeny, the evolutionary interval during which major shifts in those properties occurred can be identified. Sequence substitutions that occurred during that interval can then be introduced singly and in combination into ancestral backgrounds, allowing the effects of historical mutations on protein structure, function and physical properties to be determined directly.”

This first strategy is a kind of top-down, highly directed approach. We’re trying to follow exactly the path that evolution followed and only that path to see what it looks like.
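As a rough illustration of the logic, not Thornton’s actual pipeline, here is a toy sketch that infers an “ancestral” sequence by taking the most common residue at each site of a small alignment of hypothetical present-day sequences. Real ancestral protein reconstruction uses likelihood-based phylogenetics over an explicit tree, but the basic move, inferring an ancestral state and then synthesizing and testing it, is the same.

```python
# A crude stand-in for ancestral protein reconstruction: take a small alignment
# of hypothetical present-day sequences and, site by site, pick the most common
# residue as a naive estimate of the ancestral state. Real APR works over an
# explicit phylogenetic tree with likelihood-based models, not a flat consensus.

from collections import Counter

extant = {                       # invented, pre-aligned sequences
    "species_A": "MSLKQ",
    "species_B": "MSLRQ",
    "species_C": "MTLKQ",
    "species_D": "MSLKE",
}

def naive_ancestor(alignment):
    sequences = list(alignment.values())
    ancestor = []
    for column in zip(*sequences):               # walk the alignment column by column
        residue, _count = Counter(column).most_common(1)[0]
        ancestor.append(residue)
    return "".join(ancestor)

print(naive_ancestor(extant))    # "MSLKQ" -- the toy 'ancestral' sequence

# In the real workflow the inferred ancestral gene is then synthesized,
# expressed in cultured cells, and functionally characterized.
```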

The second strategy is more bottom-up. It is “to use directed evolution to drive a functional transition of interest in the laboratory and then study the mechanisms of evolution.” The goal is not primarily to retrace the exact path that evolution followed historically but rather to drive evolution in the laboratory, selecting for a target property, and see what path it follows.

“A library of random variants of a protein of interest is generated and then screened to recover those with a desired property. Selected variants are iteratively re-mutagenized and are subject to selection to optimize the property. Causal mutations and their mechanisms can then be identified by characterizing the sequences and functions of the intermediate states realized during evolution of the protein.”
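As a rough picture of what that loop looks like, here is a toy simulation. The “screen” is an invented scoring function (similarity to an arbitrary target sequence), standing in for the functional assay a real directed-evolution experiment would use.

```python
# Toy simulation of a directed-evolution loop: mutagenize, screen, keep the best
# variants, repeat. The "screen" is an invented scoring function (similarity to
# an arbitrary target); in the lab it would be a functional assay.

import random
random.seed(0)

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"      # the 20 standard amino acids
TARGET = "MKVLQA"                      # arbitrary stand-in for the desired property

def score(seq):
    """Higher is 'fitter': count of positions matching the target."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq):
    """Introduce one random amino-acid substitution."""
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(ALPHABET) + seq[i + 1:]

population = ["MSLKQA"] * 20           # start from a single parent sequence
for generation in range(15):
    library = [mutate(parent) for parent in population for _ in range(10)]
    library.sort(key=score, reverse=True)
    population = library[:20]          # selection: keep only the top variants

best = population[0]
print(best, score(best))               # the fittest variant found and its score

# Recording the intermediate sequences at each generation is what lets you ask
# which mutations, in which order, produced the functional transition.
```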

If the first strategy is top-down and the second strategy is bottom-up, the third strategy is to cast a wide net. “Rather than reconstructing what evolution did in the past, this strategy aims to reveal what it could do.” In this approach:

“An initial protein is subjected to random mutagenesis, and weak selection for a property of interest is applied, enriching the library for clones with the property and depleting those without it. The population is then sequenced; the degree of enrichment of each clone allows the direct and epistatic effects of each mutation on the function to be quantitatively characterized.”
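The heart of the third strategy is a counting exercise: how much each variant is enriched or depleted after selection relative to before. Here is a hedged sketch of one common way to express that, a log-ratio relative to the unmutated protein, using invented variant names and invented sequencing counts; actual studies differ in the details of normalization and statistics.

```python
# Toy enrichment calculation in the spirit of a deep mutational scan: compare
# each variant's frequency after weak selection to its frequency before,
# normalized to the unmutated ("wild-type") protein. All counts are invented.

import math

counts_before = {"wild_type": 10000, "A12G": 900, "K45E": 1100, "A12G+K45E": 500}
counts_after  = {"wild_type": 12000, "A12G": 1800, "K45E": 300,  "A12G+K45E": 950}

def enrichment(variant):
    """Log2 change in frequency relative to wild type."""
    before = counts_before[variant] / counts_before["wild_type"]
    after = counts_after[variant] / counts_after["wild_type"]
    return math.log2(after / before)

for variant in counts_before:
    print(f"{variant:>12}: {enrichment(variant):+.2f}")

# Positive scores suggest the mutation helps the selected property, negative
# scores suggest it hurts; comparing single and double mutants gives a first
# look at epistatic (non-additive) interactions.
```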

Let’s look at an example from Thornton’s work, which followed the first, top-down approach. The most prominent work so far has been on the evolution of glucocorticoid receptors (GRs) and mineralocorticoid receptors (MRs). See for example the 2006 paper Evolution of Hormone-Receptor Complexity by Molecular Exploitation, published in Science by Jamie Bridgham, Sean Carroll, and Joseph Thornton.

Glucocorticoid receptors and mineralocorticoid receptors bind glucocorticoid and mineralocorticoid steroid hormones. The two steroid hormones studied in Thornton’s work are cortisol and aldosterone. Cortisol activates the glucocorticoid receptor to regulate metabolism, inflammation, and immunity. Aldosterone activates the mineralocorticoid receptor to regulate electrolyte homeostasis, the plasma levels of sodium and potassium. Glucocorticoid receptors and mineralocorticoid receptors share a common origin, and Thornton’s work was to reconstruct ancestral versions of these proteins along their evolutionary path and test their properties experimentally.

Modern mineralocorticoid receptors can be activated by both aldosterone and cortisol, but in bony vertebrates modern glucocorticoid receptors are activated only by cortisol. So somewhere in their evolution GRs developed an insensitivity to aldosterone.

The evolutionary trajectory is as follows. Versions of MR and GR are extant in tetrapods, teleosts (bony fish), and elasmobranchs (sharks, skates, and rays). GRs and MRs trace back to a common ancestral protein from roughly 450 million years ago, the ancestral corticoid receptor (AncCR). The ancestral corticoid receptor is thought to have been activated by deoxycorticosterone (DOC), the ligand for MRs in extant fish.

Phylogeny tells us that the ancestral corticoid receptor gave rise to GR and MR in a gene-duplication event. Interestingly enough, this was before aldosterone had even evolved. In tetrapods and teleosts, modern GR is sensitive only to cortisol; it is insensitive to aldosterone.

Thornton and his team reconstructed the ancestral corticoid receptor (AncCR) and found that it is sensitive to DOC, cortisol, and aldosterone. Their analysis revealed that just two mutations, amino acid substitutions, were enough to produce the glucocorticoid receptor phenotype: aldosterone insensitivity combined with cortisol sensitivity. These substitutions are S106P, from serine to proline at site 106, and L111Q, from leucine to glutamine at site 111. Thornton synthesized these different proteins to observe their properties. The protein with just the L111Q mutation did not bind any of the ligands: DOC, cortisol, or aldosterone. So it is unlikely that the L111Q mutation occurred first. The protein with just the S106P mutation has reduced aldosterone and cortisol sensitivity but remains highly DOC-sensitive. With both S106P and L111Q together, aldosterone sensitivity is reduced even further while cortisol sensitivity is restored to levels characteristic of extant GRs. A mutational path beginning with S106P and followed by L111Q thus converts the ancestor to the modern GR phenotype through functional intermediates and is the most likely evolutionary scenario.
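To make the logic of that inference explicit, here is a small sketch that codes, very coarsely, which receptor variants respond to which hormones (qualitatively paraphrasing the results described above, not the paper’s quantitative data) and then checks which orderings of the two substitutions pass through functional intermediates.

```python
# Coarse, qualitative coding of the reconstructed trajectory: for each receptor
# variant, the set of hormones it clearly responds to (as described above;
# "strongly reduced" responses are simply omitted here for simplicity).

responds_to = {
    "AncCR":               {"DOC", "cortisol", "aldosterone"},
    "AncCR + L111Q":       set(),           # binds none of the three ligands
    "AncCR + S106P":       {"DOC"},         # cortisol/aldosterone strongly reduced
    "AncCR + S106P/L111Q": {"cortisol"},    # GR-like: cortisol-sensitive, aldosterone-insensitive
}

def path_is_viable(path):
    """A path is viable if every intermediate still responds to at least one hormone."""
    return all(responds_to[variant] for variant in path)

print(path_is_viable(["AncCR", "AncCR + S106P", "AncCR + S106P/L111Q"]))  # True
print(path_is_viable(["AncCR", "AncCR + L111Q", "AncCR + S106P/L111Q"]))  # False
```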

Michael Behe has commented that this is an example of a loss of function whereas his challenge to evolutionary biology is to demonstrate how complex structures evolved in the first place. That’s a fair point. Still, this is a good example of the kind of molecular precision we can get in our reconstruction of evolutionary processes. This does seem to show, down to the molecular level, how these receptors evolved. And that increases our knowledge. We know more about the evolution of these proteins than we did before. That’s valuable. We can learn a lot more in the future using these methods and applying them to other examples. 

One of the things I like about this kind of research is that it not only shows what evolutionary paths are possible but also which ones are not. Another one of Thornton’s papers worth checking out is An epistatic ratchet constrains the direction of glucocorticoid receptor evolution, published in Nature in 2009, co-authored by Jamie Bridgham and Eric Ortlund. The basic idea is that in certain cases once a protein acquires a new function “the evolutionary path by which this protein acquired its new function soon became inaccessible to reverse exploration”. In other words, certain evolutionary processes are not reversible. This is similar to Dollo’s Law of Irreversibility, proposed in 1893: “an organism never returns exactly to a former state, even if it finds itself placed in conditions of existence identical to those in which it has previously lived … it always keeps some trace of the intermediate stages through which it has passed.” In that 2009 paper Bridgham, Ortlund, and Thornton state: “We predict that future investigations, like ours, will support a molecular version of Dollo’s law: as evolution proceeds, shifts in protein structure-function relations become increasingly difficult to reverse whenever those shifts have complex architectures, such as requiring conformational changes or epistatically interacting substitutions.”

This is really important. It’s important to understand that evolution can’t just do anything. Nature imposes constraints both physiologically and biochemically. I think in some popular conceptions we imagine that “life finds a way” and that evolution is so robust that organisms will evolve whatever traits they need to fit their environments. But very often they don’t, and they go extinct. And even when they do, their evolved traits aren’t necessarily perfect. Necessity or utility can’t push evolution beyond natural constraints. A good book on the subject of physiological constraints on evolution is Alex Bezzerides’s 2021 book Evolution Gone Wrong: The Curious Reasons Why Our Bodies Work (Or Don’t). Our anatomy doesn’t always make the most sense. It’s possible to imagine more efficient ways we could be put together. But our evolutionary history imposes constraints that don’t leave all options open, no matter how advantageous they would be. And the same goes for biochemistry. The repertoire of proteins and nucleic acids in the living world is determined by evolution. But the properties of proteins and nucleic acids are determined by the laws of physics and chemistry.

One way to think about this is with a protein sequence space. This is an abstract multidimensional space. Michael Harms and Joseph Thornton describe this in their 2013 paper.

“Sequence space is a spatial representation of all possible amino acid sequences and the mutational connections between them. Each sequence is a node, and each node is connected by edges to all neighbouring proteins that differ from it by just one amino acid. This space of sequences becomes a genotype–phenotype space when each node is assigned information about its functional or physical properties; this representation serves as a map of the total set of relations between sequence and those properties. As proteins evolve, they follow trajectories along edges through the genotype–phenotype space.”

What’s crucial to consider in this kind of model is that the vast majority of nodes correspond to non-functional proteins. This means that possible paths through sequence space are highly constrained. Not just any path is possible. There may be excellent nodes in the sequence space that would be perfect for a given environment. But if they’re not connected to an existing node by a path that runs through functional states, evolution is not going to reach them.
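Here is a minimal sketch of the picture Harms and Thornton describe: a handful of invented three-residue “sequences”, edges between single-substitution neighbours, and a breadth-first search that is only allowed to step onto functional nodes. Everything in it, the sequences and which ones count as functional, is made up for illustration.

```python
# Toy genotype-phenotype space: nodes are short invented sequences, edges connect
# sequences that differ by a single amino acid, and evolution is only allowed to
# step onto functional nodes. The sequences and labels are made up for illustration.

from collections import deque

functional = {
    "AKL": True,  "AKQ": True,  "GKQ": True,    # a connected functional path
    "ARL": False, "GKL": False, "ARQ": False,   # non-functional neighbours
    "GRL": True,                                # functional, but an isolated island
}

def neighbours(seq):
    """Nodes in the space that differ from seq at exactly one site."""
    return [s for s in functional
            if s != seq and sum(a != b for a, b in zip(s, seq)) == 1]

def reachable(start, target):
    """Breadth-first search that never steps onto a non-functional node."""
    queue, seen = deque([start]), {start}
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in neighbours(node):
            if functional[nxt] and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable("AKL", "GKQ"))  # True: AKL -> AKQ -> GKQ stays functional the whole way
print(reachable("AKL", "GRL"))  # False: GRL is 'excellent' but walled off by non-functional neighbours
```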

To conclude, it’s an exciting time for the evolutionary sciences. Our understanding of the actual physical mechanisms of inheritance and evolution, down to the molecular level, is leaps and bounds ahead of where it was a century ago. Darwin and his associates had no way of knowing the kinds of things we now know about the structures of nucleic acids and proteins. That makes a big difference. It’s certainly not the case that we have it all figured out. That’s why I put evolutionary biology in the same class as neuroscience when it comes to how much we understand compared to how much there is to understand. We’re learning all the time just how much we don’t know. But that’s still progress. We are developing the tools to get very precise and detailed in what we can learn about evolution.