According to the last post, the constitution of complex natural systems should be understood using a theory of composite causal processes. Composite causal processes are formed from a pattern of discrete causal interactions among a group of smaller sub-processes. When these sub-processes sustain a higher rate of in-group versus out-group interactions, they form a composite. While this account has intuitive appeal in the case of macroscopic systems, what about more basic building blocks of nature? Can the same approach work in the microscopic realm? In this post, I will make the case that it can, focusing on molecules. A key to reaching this conclusion will be the use of the conceptual resources of relational quantum mechanics (RQM).
Background: The Problem of Molecular Structure
In approaching the question of molecular composition, we need to reckon with a long-standing problem regarding how the structure of molecules—the spatial organization of component atoms we are all familiar with from chemistry—relates to quantum theory. Modern chemistry uses QM models to successfully calculate the value of molecular properties: one starts by solving for the molecular wave function and associated energies using the time-independent Schrödinger equation Ĥψ = Eψ. But there are several issues in connecting the quantum formalism to molecular structure. First and most simply, the quantum description of a multi-particle system does not “reside” in space at all. The wave function assigns (complex) numbers to points in a multi-dimensional configuration space (3N dimensions where N is the number of particles in the system). How do we get from this to a spatially organized molecule?
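To make the eigenvalue equation Ĥψ = Eψ concrete, here is a toy numerical illustration (emphatically not a molecular calculation): solving for the wave functions and energies of a single particle in a 1D harmonic well, with the Hamiltonian built as a finite-difference matrix. The grid size and units are arbitrary choices for the sketch.

```python
import numpy as np

# Toy illustration of solving H psi = E psi numerically: one particle in a
# 1D harmonic well, with the Hamiltonian discretized on a spatial grid.
# Units chosen so hbar = m = omega = 1; exact energies are E_n = n + 1/2.
n = 1000
x = np.linspace(-10, 10, n)
dx = x[1] - x[0]

# Kinetic term: -(1/2) d^2/dx^2 as a tridiagonal finite-difference matrix.
T = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / (2 * dx**2)
V = np.diag(0.5 * x**2)        # potential energy term on the grid
H = T + V                      # the Hamiltonian operator

E, psi = np.linalg.eigh(H)     # eigenvalues E and eigenfunctions psi (columns)
print(E[:3])                   # ≈ 0.5, 1.5, 2.5
```

Real molecular calculations replace this one-dimensional grid with a many-electron problem in 3N-dimensional configuration space, which is exactly why the simplifying strategies discussed next are needed.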
In addition to this puzzle, some of the methods used to estimate ψ in practice raise additional issues. Something to keep in mind in what follows is that multi-particle atomic and molecular wave equations are generally computationally intractable. So, simplifying assumptions of some sort will always be needed. One important strategy normally used is to assume that the nuclei are stationary in space, and then proceed to estimate the electronic wave function. Where does the particular nuclear configuration assumed for a molecule come from? It is typically informed by experimental evidence, and/or candidate configurations can be evaluated iteratively, seeking the lowest-energy equilibrium configuration. I’ll discuss the implications of this assumption shortly.
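The iterative strategy can be sketched in a few lines. In the toy version below, a Morse curve stands in for the expensive electronic-structure calculation that would be run at each fixed ("clamped") nuclear geometry; the parameter values are illustrative, not fitted to any real molecule.

```python
import numpy as np

# Sketch of the clamped-nuclei strategy: fix a nuclear geometry, compute the
# electronic energy, repeat over candidate geometries, keep the minimum.
# A Morse curve stands in for the electronic calculation; parameters are
# illustrative toy values (well depth, width, equilibrium separation).
D_e, a, r_e = 4.5, 1.9, 0.74

def electronic_energy(R):
    """Stand-in for solving the electronic Schrödinger equation at fixed R."""
    return D_e * (1 - np.exp(-a * (R - r_e)))**2 - D_e

R_grid = np.linspace(0.3, 3.0, 271)    # candidate internuclear separations
energies = electronic_energy(R_grid)
R_eq = R_grid[np.argmin(energies)]     # "equilibrium" geometry = lowest energy
print(R_eq)                            # ≈ 0.74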
Next, there are different techniques used to estimate the electronic wave function. For multi-electron atoms, one adds additional electrons using hydrogen-like wave functions (called orbitals) of increasing energy. Chemistry textbooks offer visualizations of these orbitals for various atoms and we can form some intuitions for how they overlap to form bonded molecules (but strictly speaking remember the wave functions are not in 3D space). One approach to molecular wave functions uses hybrid orbitals based on these overlaps in its calculations. Another approach skips this process and just proceeds by incrementally adding the requisite electrons to orbitals calculated for the whole molecule at once. In this method, the notion of localized atoms linked by bonds is much more elusive, but interestingly this departure from intuition has no impact on the effectiveness of the calculation method (which is frequently more efficient).
Once we have molecular wave functions, we have an estimate of energies and can derive other properties of interest. We can also use the wave function to calculate the electron density distribution for the system (usually designated by ρ): this gives the number of electrons one would expect to find at various spatial locations upon measurement. This is the counterpart of the process we use to probabilistically predict the outcome of a measurement for any quantum system by multiplying the wave function ψ by its complex conjugate ψ* (the Born rule). Interestingly, another popular technique quantum chemists (and condensed matter physicists) use to estimate electronic properties uses ρ instead of ψ as a starting point (called Density Functional Theory). Notably, the electron density seems to offer a more promising way to depict molecular structure in our familiar space, letting us visualize molecular shape, and pictures of these density distributions are also featured in textbooks. Theorists have also developed sophisticated ways to correlate features of ρ with chemical concepts, including bonding relationships. However, here we still need to be careful in our interpretation: while ρ is a function that assigns numbers to points in our familiar 3D space, it should not be taken to represent an object simply located in space. I’ll have more to say about interpreting ρ below.
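The Born-rule step from ψ to ρ can be illustrated with a minimal single-particle sketch. The Gaussian below is the harmonic-oscillator ground state, used purely as a stand-in for a molecular electronic wave function; the point is just the operation ρ = ψ*ψ and the interpretation of its integral.

```python
import numpy as np

# Born-rule sketch: given a (single-particle, 1D) wave function psi on a
# grid, the density rho = psi* psi gives the probability of finding the
# particle at each location. The Gaussian here is the harmonic-oscillator
# ground state (hbar = m = omega = 1), standing in for a molecular psi.
x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
psi = np.pi**-0.25 * np.exp(-x**2 / 2)   # normalized ground-state wave function

rho = np.conj(psi) * psi                 # Born rule: rho = psi* psi
total = np.sum(rho) * dx                 # integrates to 1 (one particle)
print(total)                             # ≈ 1.0
```

For an N-electron molecule the analogous ρ integrates to N, which is why chemists read it as an expected number of electrons at each spatial location rather than as a picture of an object sitting in space.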
Still, this might all sound pretty good: we understand that the ball and stick molecules of our school days don’t actually exist, but we have ways to approximate the classical picture using the correct (quantum) physics. But this would be too quick—in particular, remember that in performing our physical calculations we put the most important ingredient of a molecule’s spatial structure in by hand! As mentioned above, the fixed nuclei spatial configuration was an assumption, not a derivation. If one tries to calculate wave functions for molecules from scratch with the appropriate number of nuclei and electrons, one does not recover the particular asymmetries that distinguish most polyatomic molecules and that are crucial for understanding their chemical behavior. This problem is often brought into focus by highlighting the many examples of molecules with the same atomic constituents (isomers) that differ crucially in their geometric structure (some even have the same bonding structure but different geometry). Molecular wave functions would generally not distinguish these from each other unless the configuration is brutely added as an assumption.
Getting from QM Models to Molecular Structure
So how does spatial molecular structure arise from a purely quantum world? It seems that two additional ingredients are needed. The first is to incorporate the role of intra-and extra-molecular interactions. The second is to go beyond the quantum formalism and incorporate an interpretation of quantum mechanics.
With regard to the first step, note that the discussion thus far focused on quantum modeling of isolated molecules in equilibrium. This is an idealization, since in the actual world, molecules are constantly interacting with other systems in their environment, as well as always being subject to ongoing internal dynamics. Recognizing this, but staying within orthodox QM, there is research indicating that applications of decoherence theory can go some way to accounting for the emergence of molecular shape. Most of this work explores models featuring interactions between a molecule and an assumed environment. Recently, there has been some innovative research extending decoherence analysis to include consideration of the internal environment of the molecule (interaction between the electrons and the nuclei -- see links in the footnote). More work needs to be done, but there is definitely some prospect that the study of interactions within the QM-decoherence framework will shed light on how molecular structure comes about.
However, we can say already that decoherence will not solve the problem by itself. It can go some way toward accounting for the suppression of interference and the emergence of classical-like states (“preferred pointer states”), but multiple possible configurations will remain. These, of course, also continue to be defined in the high-dimensional configuration space of QM. To fully account for the actual existence of a particular observed structure in 3D space requires grappling with the question of interpreting QM. There is a 100-year-old debate centered on the problem of how definite values of a system’s properties are realized upon measurement when the formalism of QM would indicate the existence of a superposition of multiple possibilities (aka the “measurement problem”).
Alexander Franklin & Vanessa Seifert have a new paper (preprint) that does an excellent job arguing that the problem of molecular structure is an instance of the measurement problem. It includes a brief look at how three common interpretations of QM (the Everett interpretation, Bohmian mechanics, and the spontaneous collapse approach) would address the issue. The authors do not conclude in this paper that the consideration of molecular structure has any bearing on deciding between rival QM interpretations. In contrast, I think the best interpretation is RQM in part because of the way it accounts for molecular structure: it does so in a way that also allows for these quantum systems to fit into an independently attractive general theory of how natural systems are composed (see the last post).
How RQM Explains Spatial Structure
To discuss how to approach the problem using RQM, let’s first return to the interpretation of the electron density distribution (ρ). As mentioned above, chemistry textbooks include pictures of ρ, and, because it is a function assigning (real) numbers to points in 3D space, there is a temptation to view ρ as depicting the molecule as a spatial object. The ability to construct an image of ρ for actual molecules using X-ray crystallography may encourage this as well. But viewing ρ as a static extended object in space is clearly inconsistent with its usual statistical meaning in a QM context. As an alternative intuition, textbooks will point out that if you imagine a repeated series of position measurements on the molecular electrons, then one can think of ρ as describing a time-extended pattern of these localizing “hits”.
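The "time-extended pattern of hits" intuition can itself be simulated. The sketch below samples repeated position measurements from a toy Gaussian density (the same kind of single-particle stand-in used earlier, not a real molecular ρ): each draw is one localizing "hit", and the accumulated statistics of the hits recover the shape of ρ.

```python
import numpy as np

# Simulating the "pattern of hits" reading of rho: repeated position
# measurements on identically prepared systems yield outcomes distributed
# according to rho. Here rho is a toy Gaussian density |psi|^2 for a
# ground-state wave function (mean 0, variance 1/2).
rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
rho = np.pi**-0.5 * np.exp(-x**2)      # |psi|^2 on the grid
p = rho * dx / np.sum(rho * dx)        # discretized measurement probabilities

hits = rng.choice(x, size=50_000, p=p)  # 50,000 simulated localizing "hits"
print(hits.mean(), hits.std())          # ≈ 0 and 1/sqrt(2), matching rho
```

Of course, in the textbook picture these hits are produced by our measuring interventions; the question taken up next is what licenses the same picture when no one is measuring.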
But this doesn’t give us a reason to think molecules have spatial structure in the absence of our interventions. For this, we would want an interpretation that sees spatial localization as resulting from naturally occurring interactions involving a molecule’s internal and external environment (like those explored in decoherence models). We want to envision measurement-like interactions occurring whenever systems interact, without assuming human agents or macroscopic measuring devices need to be involved.
This is the picture envisioned by RQM. It is a “democratic” interpretation, where the same rules apply universally. In particular, all interactions between physical systems are “measurement-like” for those systems directly involved. Assuming these interactions are fairly elastic (not disruptive) and relatively transitory, then a molecule would naturally incur a pattern of localizing hits over time. These form its shape in 3D space.
It would be nice if we could take ρ, as usually estimated, to represent this shape, but this is technically problematic. Per RQM, the quantum formalism cannot be taken as offering an objective (“view from nowhere”) representation of a system. Both wave functions and interaction events are perspectival. So, strictly speaking, we cannot use ρ (derived from a particular ψ) to represent a pattern of hits resulting from interactions involving multiple partners. However, given a high level of stability in molecular properties across different contexts, I believe this view of ρ can still offer a useful approximation of what is happening. It gives a sense of how, given RQM, a molecule acquires a structure in 3D space as a result of a natural pattern of internal and environmental interactions.
Putting it All Together
What this conclusion also allows us to do is fit microscopic quantum systems into the broader framework discussed in the prior post, where patterns of discrete causal interactions are the raw material of composition. Like complex macroscopic systems, atoms and molecules are individuated by these patterns, and RQM offers a bridge from this causal account to our physical representations.
Our usual QM models of atoms and molecules describe entangled composite systems, with details determined by the energy profiles of the constituents. Such models of isolated systems can be complemented by decoherence analyses involving additional systems in a theorized environment. RQM tells us that these models represent the systems from an external perspective, which co-exists side-by-side with another picture: the internal perspective. This is one that infers the occurrence of repeated measurement-like interactions among the constituents, a pattern that is also influenced in part by periodic measurement-like interactions with other systems in its neighborhood. The theory of composite causal processes connects with this latter perspective. The composition of atoms and molecules, like that of macroscopic systems, is based on a sustained pattern of causal interactions among sub-systems, occurring in a larger environmental context.
Stepping back, the causal process account presented in these last three posts certainly leaves a number of traditional ontological questions open. In part, this is because my starting point comes from the philosophy of scientific explanation. I believe the main virtue of this theory of a causal world-wide-web is that it can provide a unified underpinning for explanations across a wide range of disciplines, despite huge variation in research approaches and representational formats. Scientific understanding is based on our grasp of these explanations, and uncovering a consistent causal framework that helps enable this achievement is a good way to approach ontology.
Bacciagaluppi, G. (2020). The Role of Decoherence in Quantum Mechanics. In E.N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2020 Edition). https://plato.stanford.edu/archives/fall2020/entries/qm-decoherence/
Esser, S. (2019). The Quantum Theory of Atoms in Molecules and the Interactive Conception of Chemical Bonding. Philosophy of Science, 86(5), 1307-1317.
Franklin, A., & Seifert, V.A. (forthcoming). The Problem of Molecular Structure Just Is the Measurement Problem. The British Journal for the Philosophy of Science.
Mátyus, E. (2019). Pre-Born-Oppenheimer Molecular Structure Theory. Molecular Physics, 117(5), 590-609.
Weisberg, M., Needham, P., & Hendry, R. (2019). Philosophy of Chemistry. In E. N. Zalta, (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2019 Edition). https://plato.stanford.edu/archives/spr2019/entries/chemistry/
 For background, see sections 4 and 6 of the Stanford Encyclopedia article “Philosophy of Chemistry”. Also, see the nice presentation of the problem of molecular structure in Franklin & Seifert (forthcoming) (preprint); this paper is discussed later in this post. For a perspective from a theoretical quantum chemist, see the recent paper from Edit Mátyus, which also features a good discussion of the background: Mátyus (2019) (preprint).
 Here ψ is the wave function, E is the energy, and Ĥ is the Hamiltonian operator appropriate for the system. For example, the Hamiltonian for an atom will contain a kinetic energy term and a potential energy term that is based on the electrostatic attraction between the electrons and the nucleus (along with repulsion between electrons).
 This assumption is justified by the vast difference in velocity between speedy electrons and the slower nuclei (an adiabatic approximation). For molecules, this is typically referred to as the “clamped nuclei” or Born-Oppenheimer approximation.
 These methods are known as the valence bond (VB) and molecular orbital (MO) techniques.
 The rationale behind DFT is that it can be demonstrated that for molecules the ground state energy and other properties can be derived directly from ρ (Hohenberg-Kohn theorems). This kind of equivalence between ψ and its associated density is clearly not generally true for quantum systems, but in this case the existence of a minimum energy solution allows for the result to be established.
 Of particular note here is the Quantum Theory of Atoms in Molecules (QTAIM) research program, initiated by R.W.F. Bader. QTAIM finds links to bonding and other chemical features via a detailed topological analysis of ρ. I discuss this in a 2019 paper (preprint).
 For decoherence studies involving the external environment, see the references cited in section 3.2 of Mátyus (2019) (preprint). Two recent ArXiv papers from Mátyus & Cassam-Chennai explore the contribution of internal decoherence (see here and here).
 The present discussion is a specific instance of a more general point that now seems widely accepted in discussions of the QM interpretations: decoherence helps explain why quantum interference effects are suppressed when systems interact with their environments, but it does not solve the quantum measurement problem (which seeks to understand why definite outcomes are observed upon measurement). See the excellent SEP article by Bacciagaluppi.
 For more, see my earlier post, which lists a number of good RQM references.