Saturday, June 12, 2021
I have a new personal website, and new blog posts will appear there. Please visit and let me know if you have any feedback about its format or readability. Thanks!
Tuesday, June 08, 2021
RQM and Molecular Composition
According to the last post, the constitution of complex natural systems should be understood using a theory of composite causal processes. Composite causal processes are formed from a pattern of discrete causal interactions among a group of smaller sub-processes. When the latter sustain a higher rate of in-group versus out-group interactions, they form a composite. While this account has intuitive appeal in the case of macroscopic systems, what about more basic building blocks of nature? Can the same approach work in the microscopic realm? In this post, I will make the case that it does, focusing on molecules. A key to reaching this conclusion will be the use of the conceptual resources of relational quantum mechanics (RQM).
Background: The Problem of Molecular Structure
In approaching the question of molecular composition, we need to reckon with a long-standing problem regarding how the structure of molecules—the spatial organization of component atoms we are all familiar with from chemistry—relates to quantum theory.[1] Modern chemistry uses QM models to successfully calculate the value of molecular properties: one starts by solving for the molecular wave function and associated energies using the time-independent Schrödinger equation Ĥψ = Eψ.[2] But there are several issues in connecting the quantum formalism to molecular structure. First and most simply, the quantum description of a multiple-particle system does not “reside” in space at all. The wave function assigns (complex) numbers to points in a multi-dimensional configuration space (3N dimensions, where N is the number of particles in the system). How do we get from this to a spatially organized molecule?
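To make the dimensionality point concrete, here is the standard textbook form of the molecular Hamiltonian (nothing here is specific to the papers cited in this post): kinetic terms for the nuclei and electrons plus Coulomb interactions among all the charges.

\[
\hat{H} = -\sum_{A} \frac{\hbar^2}{2M_A}\nabla_A^2 \;-\; \sum_{i} \frac{\hbar^2}{2m_e}\nabla_i^2 \;+\; \frac{1}{4\pi\varepsilon_0}\left(\sum_{A<B} \frac{Z_A Z_B e^2}{|\mathbf{R}_A - \mathbf{R}_B|} \;-\; \sum_{i,A} \frac{Z_A e^2}{|\mathbf{r}_i - \mathbf{R}_A|} \;+\; \sum_{i<j} \frac{e^2}{|\mathbf{r}_i - \mathbf{r}_j|}\right)
\]

The corresponding ψ(R_1, …, R_{N_n}, r_1, …, r_{N_e}) is a function on a 3(N_n + N_e)-dimensional configuration space; even water (3 nuclei, 10 electrons) lives in 39 dimensions, not 3.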
In addition to this puzzle, some of the methods used to estimate ψ in practice raise additional issues. Something to keep in mind in what follows is that multi-particle atomic and molecular wave equations are generally computationally intractable, so simplifying assumptions of some sort will always be needed. One important strategy normally used is to assume that the nuclei are stationary in space, and then proceed to estimate the electronic wave function.[3] Where does the assumed configuration of the nuclei come from in the case of a molecule? It is typically informed by experimental evidence, and/or candidates can be evaluated iteratively, seeking the lowest equilibrium energy configuration. I’ll discuss the implications of this assumption shortly.
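Here is a minimal sketch of that iterative strategy for a diatomic case. Everything in it is a stand-in: a Morse potential plays the role of the electronic-structure solver (a real calculation would solve the electronic Schrödinger equation at each clamped geometry), and the parameter values are merely illustrative, loosely in the range of H2.

```python
import numpy as np

# Stand-in for an electronic-structure solver: given a fixed ("clamped")
# internuclear distance R, return a ground-state energy. A Morse potential
# substitutes for the real electronic calculation here.
def electronic_energy(R, D_e=4.75, a=1.94, R_e=0.74):
    # Illustrative parameters (eV, 1/angstrom, angstrom), roughly H2-like.
    return D_e * (1.0 - np.exp(-a * (R - R_e)))**2 - D_e

# Scan candidate clamped-nuclei configurations; keep the lowest-energy one.
R_grid = np.linspace(0.4, 3.0, 261)
energies = np.array([electronic_energy(R) for R in R_grid])
R_min = R_grid[np.argmin(energies)]
print(f"equilibrium bond length ~ {R_min:.2f} angstrom at E = {energies.min():.2f} eV")
```

The point to notice is that the nuclear geometry is an input to be searched over, not something the formalism hands us.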
Next, there are different techniques used to estimate the electronic wave function. For multi-electron atoms, one adds additional electrons using hydrogen-like wave functions (called orbitals) of increasing energy. Chemistry textbooks offer visualizations of these orbitals for various atoms, and we can form some intuitions for how they overlap to form bonded molecules (but strictly speaking, remember that the wave functions are not in 3D space). One approach to molecular wave functions uses hybrid orbitals based on these overlaps in its calculations. Another approach skips this process and just proceeds by incrementally adding the requisite electrons to orbitals calculated for the whole molecule at once.[4] In this method, the notion of localized atoms linked by bonds is much more elusive, but this intuitive departure interestingly has no impact on the effectiveness of the calculation method (it is frequently more efficient).
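For reference, the “whole molecule at once” character of the second approach shows up in the standard LCAO expansion (a textbook form, not specific to any source cited here), where each molecular orbital φ_k is built from atom-centered basis functions χ_μ:

\[
\phi_k(\mathbf{r}) = \sum_{\mu} c_{\mu k}\, \chi_{\mu}(\mathbf{r})
\]

The coefficients c_{μk} are determined variationally for the molecule as a whole, so each φ_k is generally delocalized over all the nuclei rather than attached to a particular atom or bond.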
Once we have molecular wave functions, we have an estimate of energies and can derive other properties of interest. We can also use the wave function to calculate the electron density distribution for the system (usually designated by ρ): this gives the number of electrons one would expect to find at various spatial locations upon measurement. This is the counterpart of the process we use to probabilistically predict the outcome of a measurement for any quantum system by multiplying the wave function ψ by its complex conjugate ψ* (the Born rule). Interestingly, another popular technique quantum chemists (and condensed matter physicists) use to estimate electronic properties uses ρ instead of ψ as a starting point (called Density Functional Theory).[5] Notably, the electron density seems to offer a more promising way to depict molecular structure in our familiar space, letting us visualize molecular shape, and pictures of these density distributions are also featured in textbooks. Theorists have also developed sophisticated ways to correlate features of ρ with chemical concepts, including bonding relationships.[6] However, here we still need to be careful in our interpretation: while ρ is a function that assigns numbers to points in our familiar 3D space, it should not be taken to represent an object simply located in space. I’ll have more to say about interpreting ρ below.
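The standard definition makes the Born-rule lineage explicit (again, a textbook formula; spin labels suppressed): for N_e electrons at fixed nuclear positions,

\[
\rho(\mathbf{r}) = N_e \int \left|\psi(\mathbf{r}, \mathbf{r}_2, \ldots, \mathbf{r}_{N_e})\right|^2 d\mathbf{r}_2 \cdots d\mathbf{r}_{N_e}
\]

ρ lives in ordinary 3D space only because the other 3(N_e − 1) coordinates have been integrated out; it is a marginal of a probability distribution, not a snapshot of an extended object.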
Still, this might all sound pretty good: we understand that the ball-and-stick molecules of our school days don’t actually exist, but we have ways to approximate the classical picture using the correct (quantum) physics. But this would be too quick—in particular, remember that in performing our physical calculations we put the most important ingredient of a molecule’s spatial structure in by hand! As mentioned above, the fixed nuclear spatial configuration was an assumption, not a derivation. If one tries to calculate wave functions for molecules from scratch with the appropriate number of nuclei and electrons, one does not recover the particular asymmetries that distinguish most polyatomic molecules and that are crucial for understanding their chemical behavior. This problem is often brought into focus by highlighting the many examples of molecules with the same atomic constituents (isomers) that differ crucially in their geometric structure (some even have the same bonding structure but different geometry). Molecular wave functions would generally not distinguish these from each other unless the configuration is added as a brute assumption.
Getting from QM Models to Molecular Structure
So how does spatial molecular structure arise from a purely quantum world? It seems that two additional ingredients are needed. The first is to incorporate the role of intra- and extra-molecular interactions. The second is to go beyond the quantum formalism and incorporate an interpretation of quantum mechanics.
With regard to the first step, note that the discussion thus far focused on quantum modeling of isolated molecules in equilibrium. This is an idealization, since in the actual world, molecules are constantly interacting with other systems in their environment, as well as always being subject to ongoing internal dynamics. Recognizing this, but staying within orthodox QM, there is research indicating that applications of decoherence theory can go some way to accounting for the emergence of molecular shape. Most of this work explores models featuring interactions between a molecule and an assumed environment. Recently, there has been some innovative research extending decoherence analysis to include consideration of the internal environment of the molecule (interaction between the electrons and the nuclei -- see links in the footnote).[7] More work needs to be done, but there is definitely some prospect that the study of interactions within the QM-decoherence framework will shed light on how molecular structure comes about.
However, we can say already that decoherence will not solve the problem by itself.[8] It can go some way toward accounting for the suppression of interference and the emergence of classical-like states (“preferred pointer states”), but multiple possible configurations will remain. These, of course, also continue to be defined in the high-D configuration space context of QM. To fully account for the actual existence of particular observed structures in 3D space requires grappling with the question of interpreting QM. There is a 100-year-old debate centered on the problem of how definite values of a system’s properties are realized upon measurement when the formalism of QM would indicate the existence of a superposition of multiple possibilities (aka the “measurement problem”).
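A toy calculation makes both halves of this point vivid. Below is a minimal sketch (not a model of any actual molecule) of a two-configuration system, think of two candidate nuclear geometries, where environmental monitoring damps the off-diagonal density-matrix elements in the preferred basis while leaving the diagonal mixture of configurations untouched; the decay rate is an arbitrary illustrative number.

```python
import numpy as np

# Two candidate configurations (e.g., two molecular geometries) in an
# equal superposition; rho is the corresponding pure-state density matrix.
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Generic decoherence in the configuration (pointer) basis: off-diagonal
# elements decay exponentially at a rate set by environmental coupling.
gamma = 5.0  # illustrative decoherence rate, arbitrary units
for t in [0.0, 0.2, 1.0]:
    decay = np.exp(-gamma * t)
    rho_t = rho * np.array([[1.0, decay], [decay, 1.0]])
    print(f"t={t:.1f}  off-diagonal={rho_t[0, 1]:.3f}  "
          f"diagonal={np.diag(rho_t).round(3)}")
```

Interference (the off-diagonal term) is suppressed, but the diagonal stays at [0.5, 0.5]: both configurations persist, and nothing in the formalism singles out the one we actually observe.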
Alexander Franklin & Vanessa Seifert have a new paper (preprint) that does an excellent job arguing that the problem of molecular structure is an instance of the measurement problem. It includes a brief look at how three common interpretations of QM (the Everett interpretation, Bohmian mechanics, and the spontaneous collapse approach) would address the issue. The authors do not conclude in this paper that the consideration of molecular structure has any bearing on deciding between rival QM interpretations. In contrast, I think the best interpretation is RQM, in part because of the way it accounts for molecular structure: it does so in a way that also allows for these quantum systems to fit into an independently attractive general theory of how natural systems are composed (see the last post).
How RQM Explains Spatial Structure
To discuss how to approach the problem using RQM, let’s first return to the interpretation of the electron density distribution (ρ). As mentioned above, chemistry textbooks include pictures of ρ, and, because it is a function assigning (real) numbers to points in 3D space, there is a temptation to view ρ as depicting the molecule as a spatial object. The ability to construct an image of ρ for actual molecules using X-ray crystallography may encourage this as well. But viewing ρ as a static extended object in space is clearly inconsistent with its usual statistical meaning in a QM context. As an alternative intuition, textbooks will point out that if you imagine a repeated series of position measurements on the molecular electrons, then one can think of ρ as describing a time-extended pattern of these localizing “hits”.
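That “pattern of hits” picture is easy to simulate. The sketch below samples localization events from a known density, using the hydrogen 1s radial density as a simple stand-in for a molecular ρ, and checks that the accumulated hits recover a feature of the density (the mean radius). All the specifics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hydrogen 1s radial probability density p(r) = 4 r^2 exp(-2r), with r in
# Bohr radii -- a simple stand-in for a molecular electron density rho.
def radial_density(r):
    return 4.0 * r**2 * np.exp(-2.0 * r)

# Accumulate "localizing hits" via rejection sampling from p(r).
# (p peaks at about 0.54, so 0.6 is a valid envelope on [0, 8].)
hits = []
while len(hits) < 10_000:
    r = rng.uniform(0.0, 8.0)
    if rng.uniform(0.0, 0.6) < radial_density(r):
        hits.append(r)

# The time-extended pattern of hits reproduces the density: the sample
# mean radius should approach the exact value <r> = 1.5 Bohr.
print(f"mean hit radius = {np.mean(hits):.2f} Bohr (exact: 1.50)")
```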
But this doesn’t give us a reason to think molecules have spatial structure in the absence of our interventions. For this, we would want an interpretation that sees spatial localization as resulting from naturally occurring interactions involving a molecule’s internal and external environment (like those explored in decoherence models). We want to envision measurement-like interactions occurring whenever systems interact, without assuming human agents or macroscopic measuring devices need to be involved.
This is the picture envisioned by RQM.[9] It is a “democratic” interpretation, where the same rules apply universally. In particular, all interactions between physical systems are “measurement-like” for those systems directly involved. Assuming these interactions are fairly elastic (not disruptive) and relatively transitory, then a molecule would naturally incur a pattern of localizing hits over time. These form its shape in 3D space.
It would be nice if we could take ρ, as usually estimated, to represent this shape, but this is technically problematic. Per RQM, the quantum formalism cannot be taken as offering an objective (“view from nowhere”) representation of a system. Both wave functions and interaction events are perspectival. So, strictly speaking, we cannot use ρ (derived from a particular ψ) to represent a pattern of hits resulting from interactions involving multiple partners. However, given a high level of stability in molecular properties across different contexts, I believe this view of ρ can still offer a useful approximation of what is happening. It gives a sense of how, given RQM, a molecule acquires a structure in 3D space as a result of a natural pattern of internal and environmental interactions.
Putting it All Together
What this conclusion also allows us to do is fit microscopic quantum systems into the broader framework discussed in the prior post, where patterns of discrete causal interactions are the raw material of composition. Like complex macroscopic systems, atoms and molecules are individuated by these patterns, and RQM offers a bridge from this causal account to our physical representations.
Our usual QM models of atoms and molecules describe entangled composite systems, with details determined by the energy profiles of the constituents. Such models of isolated systems can be complemented by decoherence analyses involving additional systems in a theorized environment. RQM tells us that these models represent the systems from an external perspective, which co-exists side-by-side with another picture: the internal perspective. This is one that infers the occurrence of repeated measurement-like interactions among the constituents, a pattern that is also influenced in part by periodic measurement-like interactions with other systems in the molecule's neighborhood. The theory of composite causal processes connects with this latter perspective. The composition of atoms and molecules, like that of macroscopic systems, is based on a sustained pattern of causal interactions among sub-systems, occurring in a larger environmental context.
Stepping back, the causal process account presented in these last three posts certainly leaves a number of traditional ontological questions open. In part, this is because my starting point comes from the philosophy of scientific explanation. I believe the main virtue of this theory of a causal world-wide-web is that it can provide a unified underpinning for explanations across a wide range of disciplines, despite huge variation in research approaches and representational formats. Scientific understanding is based on our grasp of these explanations, and uncovering a consistent causal framework that helps enable this achievement is a good way to approach ontology.
References
Bacciagaluppi, G. (2020). The Role of Decoherence in Quantum Mechanics. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2020 Edition). https://plato.stanford.edu/archives/fall2020/entries/qm-decoherence/
Esser, S. (2019). The Quantum Theory of Atoms in Molecules and the Interactive Conception of Chemical Bonding. Philosophy of Science, 86(5), 1307-1317.
Franklin, A., & Seifert, V. A. (forthcoming). The Problem of Molecular Structure Just Is the Measurement Problem. The British Journal for the Philosophy of Science.
Mátyus, E. (2019). Pre-Born-Oppenheimer Molecular Structure Theory. Molecular Physics, 117(5), 590-609.
Weisberg, M., Needham, P., & Hendry, R. (2019). Philosophy of Chemistry. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2019 Edition). https://plato.stanford.edu/archives/spr2019/entries/chemistry/
[1] For background, see sections 4 and 6 of the Stanford Encyclopedia article “Philosophy of Chemistry”. Also, see the nice presentation of the problem of molecular structure in Franklin & Seifert (forthcoming) (preprint); this paper is discussed later in this post. For a perspective from a theoretical quantum chemist, see the recent paper from Edit Mátyus, which also features a good discussion of the background: Mátyus (2019) (preprint).
[2] Here ψ is the wave function, E is the energy, and Ĥ is the Hamiltonian operator appropriate for the system. For example, the Hamiltonian for an atom will contain a kinetic energy term and a potential energy term that is based on the electrostatic attraction between the electrons and the nucleus (along with repulsion between electrons).
[3] This assumption is justified by the vast difference in velocity between speedy electrons and the slower nuclei (an adiabatic approximation). For molecules, this is typically referred to as the “clamped nuclei” or Born-Oppenheimer approximation.
[4] These methods are known as the valence bond (VB) and molecular orbital (MO) techniques.
[5] The rationale behind DFT is that it can be demonstrated that for molecules the ground state energy and other properties can be derived directly from ρ (Hohenberg-Kohn theorems). This kind of equivalence between ψ and its associated density is clearly not generally true for quantum systems, but in this case the existence of a minimum energy solution allows for the result to be established.
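In symbols (the standard statement of the second Hohenberg-Kohn theorem, not specific to any source cited here):

\[
E_0 = E[\rho_0] = \min_{\rho} E[\rho]
\]

so minimizing the energy functional over candidate densities recovers the ground-state energy and density without ever constructing ψ.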
[6] Of particular note here is the Quantum Theory of Atoms in Molecules (QTAIM) research program, initiated by R. F. W. Bader. QTAIM finds links to bonding and other chemical features via a detailed topological analysis of ρ. I discuss this in a 2019 paper (preprint).
[7] For decoherence studies involving the external environment, see the references cited in section 3.2 of Mátyus (2019) (preprint). Two recent arXiv papers from Mátyus & Cassam-Chenaï explore the contribution of internal decoherence (see here and here).
[8] The present discussion is a specific instance of a more general point that now seems widely accepted in discussions of the QM interpretations: decoherence helps explain why quantum interference effects are suppressed when systems interact with their environments, but it does not solve the quantum measurement problem (which seeks to understand why definite outcomes are observed upon measurement). See the excellent SEP article by Bacciagaluppi.
[9] For more, see my earlier post, which lists a number of good RQM references.
Monday, May 31, 2021
Composing Natural Systems
An interesting feature of Relational Quantum Mechanics (RQM) is its implication that discrete measurement-like interaction events are going on between natural systems (unobserved by us) all the time. It turns out that this offers a way to incorporate quantum phenomena into an attractive account of how smaller natural systems causally compose larger ones. In this post I will discuss the general approach, including a brief discussion of its implications for the ideas of reduction and emergence. In a follow-up post, I will discuss the quantum case in more detail with a focus on molecules.
Composite Causal Processes
The ontological framework I’m using (discussed in the last section of the prior post) is a modified version of Wesley Salmon’s causal process account (Salmon, 1984). The basic entities are called causal processes, and these comprise a network characterized by two dimensions of causation, called propagation and production. Propagation refers to the way an isolated causal process bears dispositions or propensities toward potential interactions with other processes--aka its disposition profile. Production refers to how these profiles are altered in causal interactions with each other (this is the mutual manifestation of the relevant dispositions). The entities and properties described by science correspond to features of this causal web. For example, an electron corresponds to a causal process, and its properties describe its dispositions to produce change in interactions with other systems.
Given this picture, we can go on to form an account of how composite causal processes are formed. What is exciting about the resulting view is that it can provide a framework for systems spanning the microscopic-macroscopic divide.
For background, I note that neither Salmon nor others who have explored causal process views provide a detailed account of composition. Recall that Salmon’s intent was to give a causal theory in service of underpinning scientific explanations. In this context, he did outline a pertinent distinction between etiological explanations and constitutive explanations. Etiological explanations trace the relevant preceding processes and interactions leading up to a phenomenon. A constitutive explanation, on the other hand, is one that cites the interactions and processes that compose the phenomenon:

A constitutive explanation is thoroughly causal, but it does not explain particular facts or general regularities in terms of causal antecedents. The explanation shows, instead, that the fact-to-be-explained is constituted by underlying causal mechanisms. (Salmon, 1984, 270)

However, while Salmon sketches how one would divide a causal network into etiological and constitutive elements, he doesn’t provide a recipe for marking off the boundaries that define which processes/interactions are “internal” to what is to be explained by the constitutive explanation (see Salmon 1984, p. 275).
Going beyond Salmon, and drawing on the work of others, we can offer an account of composition for causal processes. The key idea is to propose that a coherent structure at a higher scale arises from patterns of repeated interactions at a lower scale. We should pick out composite causal processes and their interactions by attending to such patterns at the lower scale.
In Herbert Simon’s discussion of complex systems, he notes that complexity often “takes the form of hierarchy (Simon, 1962, 468)” and notes the role interactions play in this context:

In hierarchic systems we can distinguish between interactions among subsystems, on the one hand, and the interactions within subsystems—that is, among the parts of those subsystems—on the other. (Simon, 1996, p. 197, emphasis original)
The suggestion to take from this is that differential interaction rates give rise to a hierarchy of causal processes. When a group of processes interacts more with each other than with “outsiders”, then it can form a composite. For example, a social group like a family or a business can be marked off from others (at a first approximation) by the differential intensity with which its members interact within vs. outside the group.
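As a minimal sketch of how this criterion could be operationalized (the numbers and the simple rate test are my own illustrative assumptions, not anything from Simon or Salmon), take a symmetric matrix of interaction counts among processes and ask whether each member of a candidate group interacts more in-group than out-group:

```python
import numpy as np

# Hypothetical symmetric counts of pairwise interactions among five
# processes over some time window. Processes 0-2 are a candidate group.
counts = np.array([
    [0, 9, 8, 1, 0],
    [9, 0, 7, 0, 1],
    [8, 7, 0, 1, 0],
    [1, 0, 1, 0, 2],
    [0, 1, 0, 2, 0],
])

def is_composite(counts, group):
    """Test whether every member of `group` interacts more with fellow
    members than with outsiders (the in-group vs. out-group criterion)."""
    group = set(group)
    outsiders = set(range(len(counts))) - group
    for i in group:
        in_rate = sum(counts[i][j] for j in group - {i})
        out_rate = sum(counts[i][j] for j in outsiders)
        if in_rate <= out_rate:
            return False
    return True

print(is_composite(counts, [0, 1, 2]))  # True: sustained in-group pattern
print(is_composite(counts, [0, 3, 4]))  # False: no composite here
```

Real cases would of course need thresholds, time windows, and interaction weights; some of these complications come up below in connection with stability and boundary cases.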
As part of his discussion of analyzing complex systems, Bill Wimsatt also explores the idea of decomposition based on interactions, i.e., breaking down a system into subsystems based on the relative strength of intra- vs. extra-system interactions (Wimsatt, 2007, 184-6). And while he describes how different theoretical concerns lead us to utilize a variety of analytical strategies, Wimsatt makes it clear that patterns of causal connections are the ultimate basis for understanding complex systems:

Ontologically, one could take the primary working matter of the world to be causal relationships, which are connected to one another in a variety of ways—and together make up patterns of causal networks…Under some conditions, these networks are organized into larger patterns that comprise levels of organization (Wimsatt, 2007, 200, emphasis original).[1]
Wimsatt explains that levels of organization are “compositional levels”, characterized by hierarchical part-whole relations (201). This notion of composition includes not just the idea of parts, but of parts engaged in certain patterns of causal interactions, consistent with the approach to composite causal processes suggested above.
To summarize: a composite causal process consists of two or more sub-processes (the constituting group) that interact with a greater frequency than each does with other processes. Just like any causal process, a composite process carries its own disposition profile: here the pattern of interacting sub-processes accounts for how composite processes will themselves interact (what this means for the concepts of reduction and emergence will be discussed below). Consider social groups again, perhaps taking the example of smaller, pre-industrial societies. Each may have its own distinctive dispositions to mutually interact with other, similarly sized groups (e.g., to share a resource, trade, or to engage in raids or battle). These would be composed from the dispositions of their constituent members as they are shaped in the course of structured patterns of in-group interaction. We can also envision here that higher scale environmental interactions impact the evolution of the composite entity, but its stability is due to maintaining its characteristic higher-frequency internal processes.
Let me add a couple of further comments about composite processes. First, as already indicated, a group of constituting sub-processes may themselves be composite, allowing for a nested hierarchy. Second, the impact of larger scale external interactions can vary. Some may have negligible impact. Other interactions (especially if regular in nature) can contribute to shaping the ongoing nature of the composite. At the other extreme, there will be some external interactions that could disrupt or destroy it. The persistence of a composite would seem to require a certain robustness in the internal interaction pattern of its components. Achieving stability (and the associated ability to propagate a characteristic higher scale disposition profile) may require the differential between intra-process and extra-process interactions to be particularly high, or there may need to be a particular pattern to the repeated interactions. There will clearly be vague or boundary cases as well.
Why go to all this trouble of fairly abstract theorizing about a web of causal processes? Because this account fleshes out the notions that underwrite the causal explanations scientists formulate in a variety of domains.
In the physical sciences, the familiar hierarchy of entities, including atoms, molecules, and condensed matter, all correspond to composite causal processes. Of course, in physical models, what marks out a composite system might be described in a number of ways (for example, in terms of the relative strength of forces or energy-minimizing equilibrium configurations). But I argue this is consistent with the key being the relative frequency of recurring discrete interactions in-system vs. out-system. (This will be explored further in the companion post.)
In biology, the complexity of systems may sometimes defy the easy identification of the boundaries of composites. Also, a researcher’s explanatory aims will sometimes warrant taking different perspectives on phenomena. In these cases, scientists will describe theoretical entities that do not necessarily follow a simple quantitative accounting of intra-process vs. extra-process interactions. On the one hand, the case of a cell provides a pretty clear paradigm case meeting the definition of a composite process. On the other hand, many organisms and groups of organisms present difficult cases that have given rise to a rich debate in the literature regarding biological individuality. Still, a causal account of constitution is a useful starting point, as noted here by Elliott Sober:

The individuality of organisms involves a distinction between self and other—between inside and outside. This distinction is defined by characteristic causal relations. Parts of the same organism influence each other in ways that differ from the way that outside entities influence the organism’s parts. (Sober, 1993, 150)
The way parts “influence each other”, of course, might involve considerations beyond a mere quantitative view of interactions, and connotes an entry point where theoretical concerns can create distance from the basic conception of the composite causal process. In a biological context, sub-processes and interactions related to survival and reproduction may, for example, receive disproportionate attention in creating boundaries around composite entities. Notably, Roberta Millstein has proposed a definition of a biological population based on just this kind of causal interaction-based concept (Millstein, 2009).
It is also worth mentioning that constitutive explanations in science will rarely attempt to explain the entire entity. This would mean accounting for all of its causal properties (aka its entire dispositional profile) in terms of its interacting sub-processes. It is more common for a scientific explanation to target one property corresponding to a behavior of interest (corresponding to one of many features of a disposition profile).
Reduction and Emergence
I want to make a few remarks about how this approach to composites sheds light on the topics of ontological reduction and emergence. In a nutshell, the causal composition model discussed here gives a straightforward account of these notions that sidesteps some common confusions and controversies, such as the “causal exclusion problem.”
When considering the relationship between phenomena characterized at larger scales and smaller ones, the key observation is that a larger entity’s properties depend not only on the properties of smaller composing entities. They also depend on their pattern of interaction. This is in contrast to the usual static framing that posits a metaphysical relationship (whether expressed in terms of composition or “realization”) between higher-level properties and lower-level properties at some instant of time. This picture is conceptually confused (if taken seriously, as opposed to being a deliberate simplifying idealization): there is no reason to think such synchronic relationships characterize our world.
Recall that, in the present account, a property describes a regular feature of the disposition profile of a causal process. A composite causal process is made up of a pattern of interacting sub-processes. The disposition profiles of the sub-processes are changing during these interactions: they are not static. The dispositions of the composite depend on this matrix of changing sub-processes. Note that both the forming of a higher-scale disposition and its manifestation in a higher-scale interaction take more time than the equivalents at the smaller scale. No composite entity or property exists at an instant: this is a fiction concocted by us to facilitate our understanding. Unfortunately, contemporary metaphysicians have taken this notion seriously. It is perhaps easiest to see the problem in the case of a biological system: nothing is literally “alive” at an instant of time. Living things are sustained by temporally extended processes. Less intuitively, the same is true of inanimate objects.
Emergence and reduction are clearer, unmysterious notions when based on this dynamic conception of the composition relationship. Properties of larger things “emerge” from the interacting group of smaller things. The “reduction base” includes the interaction pattern of the components and their (changing) properties. The exclusion problem says that since higher-level properties are realized by lower-level properties at any arbitrary instant of time, they cannot have causal force of their own (on pain of overdetermination). We can see why this is a pseudo-problem once a better understanding of composition is in place. Causal production occurs at multiple scales.
This take on reduction and emergence is obviously not unique to the causal process model discussed here. It is implied by any approach that recognizes that properties of composites depend on interacting parts. For example, Wimsatt discusses at some length how notions of reduction and emergence should be understood given his understanding of complex systems. He offers a definition of reductive explanation that shows a similarity to the causal process view of constitutive explanation:

A reductive explanation of a behavior or a property of a system is one that shows it to be mechanistically explicable in terms of the properties of and interactions among the parts of the system. (Wimsatt, 2007, 275)
This approach to reductive explanation is perfectly consistent with a form of emergence, in the sense that the properties of the whole are intuitively “more than the sum of its parts (277).” The key idea here, again, is that composition includes the interactions between the parts. For comparison, Wimsatt introduces the notion of “aggregativity”, where the properties of the whole are “mere” aggregates of the properties of its parts. For this to happen, “the system property would have to depend on the parts’ properties in a very strongly atomistic manner, under all physically possible decompositions (277-280)”. He analyzes the conditions needed for this to occur and concludes they are nearly never met outside of the case of conserved quantities in (idealized) physical theories.
Simon had introduced similar notions, describing hypothetical idealized systems where there are no interactions between parts as “decomposable,” which are then contrasted to “nearly decomposable systems, in which the interactions among the subsystems are weak but not negligible (Simon, 1996, 197, emphasis original).” To highlight this distinguishing feature, Simon considers a boundary case: that of gases. Ideal gases, which assume interactions between molecules are negligible, are, for Simon, decomposable systems. In the causal process account, we would similarly point out that an ideal gas doesn’t have a clearly defined constituting group: the molecules do not have a characteristic pattern of interacting with each other at any greater frequency than they do with the external system (the container). An actual, non-ideal gas, on the other hand, with weak but non-negligible interactions between constituent molecules, would correspond to the idea of a composite causal process.
Some contemporary work in metaphysics, focused on dispositions/powers and their role in causation, has incorporated similar views about composition and emergence. Rani Lill Anjum and Stephen Mumford describe a “dynamic view” of emergence:

The idea is that emergent properties are sustained through the ongoing activity; that is, through the causal process of interaction of the parts. A static instantaneous constitution view wouldn't provide this. (Anjum & Mumford, 2017, 101)
In their view, higher scale properties are emergent because they depend on lower-level parts whose causal properties are undergoing transformation as they interact, consistent with the view discussed here. Most recently, R. D. Ingthorsson's new book, while not discussing emergence and reduction explicitly, also presents a view of composition based on the causal interaction of parts which is in the same spirit (Ingthorsson, 2021, Ch. 6).
Conclusion
I think composite causal processes provide a good framework for understanding how natural systems are constituted. A puzzle for the view, however, might arise via its use of patterns of discrete causal interactions to define composites. How would this work in physics, where the forces binding together composites, such as the Coulomb (electrostatic) force, are continuous? One possible answer is to point out that physical models employ idealizations, and claim their depictions can still correspond to the “deeper” ontological picture of causal processes. But I believe we can find a better and more comprehensive answer than this. To do so, we must look more carefully at physical accounts of nature’s building blocks, atoms and molecules, and see if we can uncover a correspondence with the causal theory. I think we can, assuming we utilize the RQM interpretation. This is the subject of the next post.
References
Anjum, R., & Mumford, S. (2017). Emergence and Demergence. In M. Paolini Paoletti & F. Orilia (Eds.), Philosophical and Scientific Perspectives on Downward Causation (pp. 92-109). New York: Routledge.
Ingthorsson, R. D. (2021). A Powerful Particulars View of Causation. New York: Routledge.
Millstein, R. L. (2009). Populations as Individuals. Biological Theory, 4(3), 267-273.
Salmon, W. (1984). Scientific Explanation and the Causal Structure of the World. Princeton: Princeton University Press.
Simon, H. (1962). The Architecture of Complexity. Proceedings of the American Philosophical Society, 106(6), 467-482.
Simon, H. A. (1996). The Sciences of the Artificial (3rd ed.). Cambridge, MA: MIT Press.
Sober, E. (1993). Philosophy of Biology. Boulder: Westview Press.
Wimsatt, W. C. (2007). Re-Engineering Philosophy for Limited Beings. Cambridge, MA: Harvard University Press.
[1] This passage goes on to mention other, less neat, network patterns: “Under somewhat different conditions they yield the kinds of systematic slices across which I have called perspectives. Under some conditions they are so richly connected that neither perspectives nor levels seem to capture their organization, and for this condition, I have coined the term causal thickets (Wimsatt, 2007, 200).”
Thursday, January 28, 2021
Why I Favor Relational Quantum Mechanics
In other words, such systems S have intrinsic dispositions to correlate with other systems/observers O, which manifest themselves as the possession of definite properties q relative to those Os. (Dorato, 2016, 239; emphasis original)
As he points out, referencing ideas due to philosopher C.B. Martin, such manifestations only occur as mutual manifestations involving dispositions characterizing two or more systems.3 Since these manifestations have a probabilistic aspect to them, the dispositions might also be referred to as propensities.
The metaphysic suggested by process views is effectively one in which the entire universe is a graph of real processes, where the edges are uninterrupted processes, and the vertices the interactions between them (Ladyman & Ross, 2007, 263).
Thursday, April 16, 2020
Metaphysics and the Problem of Consciousness
Against “vertical” metaphysical relations
My first post in this recent series was prompted by reading Philip Goff’s book presenting his panpsychist approach to the problem of consciousness.1 In the sections where he addresses the combination problem, Goff considers alternative strategies for situating a macro-size conscious subject in the world: several of these involve appeals to “grounding”. To sketch, grounding (in its application to ontology) is a kind of non-causal explanatory metaphysical relation between entities, with things at a more fundamental “level” of reality typically providing a ground for something at a higher level. For example, a metaphysician fleshing out the notion of a physicalist view of reality might appeal to a grounding relationship between, say, fundamental simple micro-physical entities and bigger, more complex macro-size objects. It’s a way of working out the idea that the former account for the latter, or the latter exist in virtue of the former. There are a variety of ways to explicate this kind of idea.2 Goff presents a version called constitutive grounding. He thinks this faces difficulties in the case of accounting for macro-sized conscious subjects in terms of micro-sized ones, and discusses an alternative approach where the more fundamental thing is at the higher level: he endorses a view where the most fundamental conscious entity is, in fact, the entire cosmos (“cosmopsychism”). In this scenario, human and animal consciousness can be accounted for via a relation to the cosmos called grounding by subsumption. Goff motivates these various notions of grounding with examples that appeal to how certain of our concepts seem to be linked together, or to how our visual experiences appear to be composed.
Please read the book for the details.3 Here, I want to comment on why I don’t find an approach like this to be very illuminating. It is actually a part of a more general methodological concern I have developed over time. Certainly, trying to uncover the metaphysical truth about things is always a somewhat quixotic endeavor! But I think it is extremely likely to go wrong when done via excavation of our intuitions in the absence of close engagement with the relevant sciences.4 To make a long story short, I’ll just say that here I concur with much of Ladyman and Ross’s infamous critique of analytic metaphysics.5 But to get more specific, I have a deep skepticism in particular about the whole notion of synchronic (“vertical”) metaphysical relations. Not only panpsychist discussions but a great many philosophy of mind debates are structured around the idea that ontological elements at different “levels” are connected by such relations as part-whole, supervenience, or grounding. Positing these vertical relations, in turn, has contributed to confusion in debates about notions of (ontological) reduction and emergence. The causal exclusion problem, I believe, is misguided to the extent it is premised in part on the existence of these vertical relations.
I see no evidence that there are any such synchronic relations in the actual world investigated by the natural sciences (although they may characterize some of our idealized models). At arbitrary infinitesimal moments of time there exist no relata to connect: there are no such things as organisms, brains, cells, or even molecules. All these phenomena are temporally extended dynamic processes. Any static conception we employ is an artifact of our cognitive apparatus or our representational schemes. Reifying these static conceptions and then drawing vertical lines between entities at different scales is a mistake. My view is that all relations of composition in nature are diachronic.
Solve the problem with a new metaphysics of causation?
Given this, I think questions about how phenomena at different scales relate to each other involve a causal form of composition. So, one might ask whether thinking about the nature of causation can help with the problem of consciousness. Even before doing my own deep dive into research on the topic, I was drawn to those panpsychist approaches that explored this avenue. As mentioned in the earlier post, Russell’s account takes a causal approach to the structuring of subjects, although he himself doesn’t go on to offer a detailed theory.6 I think Whitehead’s speculative metaphysics can be characterized, at least in part, as an attempt to use a rich metaphysics of causation to account for the integration of mind and world. In more recent times, Gregg Rosenberg developed an account that found a home for consciousness in the nature of causation.7
Over time, however, I have also become skeptical of these more expansive causal theories. This is in spite of my view of the central role causation should play in any account of the composition of natural systems. Here, the problem is that these approaches go too far by baking in the answer to the mind-body problem from the beginning. Methodologically, I believe we should resist the urge to invent a causal theory that is so enriched with specific dualistic features that it directly addresses the challenge. For example, in Whitehead’s system every causal event (“actual occasion”) already has in place both a subjective and an objective “pole.” For Rosenberg, two kinds of properties (“effective” and “receptive”) are involved in each causal event, and this ultimately underpins the apparent dualism of the physical and mental. In contrast to these speculative solutions, we should be more conservative and pursue a causal theory that makes sense of our successful scientific explanations of natural phenomena, and then see how that effort might shed light on the mind. I’ll discuss my view on this in a future post.
1 Consciousness and Fundamental Reality. 2017. Oxford: Oxford University Press.
2 Here’s the SEP article on grounding.
3 Also, check out Daniel Stoljar’s review.
4 A quite different way metaphysics can go wrong is when those who are truly and deeply engaged with science (specifically physics) succumb to the tendency to (more or less) read ontology off of the mathematical formalism. But that is a discussion for another time.
5 Everything Must Go: Metaphysics Naturalized. 2007. James Ladyman & Don Ross. Oxford: Oxford University Press. See Ch. 1.
6 At least this is true of The Analysis of Matter (1927), where the view now known as Russellian Monism was most fully developed. In his later Human Knowledge: Its Scope and Limits (1948), he presents a bit of a theory via his account of “causal lines:” specifically, this comes in the context of an argument that such a conception of causation is needed to account for successful scientific inferences (part VI, chapter V). As an aside: by this time, Russell seemed to come quite a long way toward a reversal of the arguments presented in his (much more cited) “On the Notion of Cause” from 1913. There, Russell argued that the prevailing philosophical view of cause and effect does not play a role in advanced sciences. Someone looking to harmonize the early and late Russell might argue that the disagreement between the two positions is limited: one could say the later Russell is developing causal notions that better suit the practice of science as compared to the more traditional concept that is the focus of criticism in the earlier article. However, I think it is clear that the later book’s perspective is quite a sea change from the earlier paper’s generally dismissive approach to the importance of causation to science.
7 A Place for Consciousness. 2004. Oxford: Oxford University Press. I have some older posts about the book.