Monday, October 30, 2006

McFadden's Quantum Biology

My little series of posts (see here) on quantum biology was missing a review of Johnjoe McFadden’s book of a couple of years ago, Quantum Evolution. Below, I take a look at this speculative but well-written and detailed account of how quantum effects may be responsible for distinctive features of life and mind.

McFadden is a professor of molecular genetics who wrote this book for a popular audience back in 2000. Excerpts of the book appear here (evidently with the author’s permission). McFadden begins with a discussion of what defines life. He gives a brief history beginning with Aristotle and progressing through the triumphs of reductionist biochemistry over believers in vitalism. But after discussing the famously difficult problem of providing a precise definition of life, he concludes that “directed action” is a key notion. This is something analogous to the appearance of “will” in humans or higher animals. Moreover, this directed action takes place all the way down to the microscopic level within organisms. Organisms are characterized by order via directed action at scales large and small (unsurprisingly, for a book on this subject, Erwin Schrödinger’s What is Life? is quoted several times, including its statement that life is “order from order”).

Prior to presenting the core arguments for quantum effects in life, McFadden reviews evolution and DNA replication. He presents the case that quantum-tunneling effects are one of the significant sources of mutation (in itself, I think this is generally accepted). He then discusses whether this could be responsible for some of the remaining challenges in understanding the workings of DNA evolution. He mentions the very controversial theory that adaptive mutations may occur at a frequency greater than chance. He will return to this subject later in the book.

Next is a discussion of the biggest mystery of biology, the origin of life. He discusses the inability of researchers to create primordial pre-cellular replicators in the laboratory. He reviews and criticizes some of the ideas on the origin of life that have been put forward: ideas from complexity theory; models of an ‘RNA world’; and the invoking of the anthropic principle.

On his way toward providing his own answer, McFadden next takes a closer look at biochemistry, showing that as you drill down into particular biological functions you find they are driven by directed movements of individual protons or electrons via the electromagnetic force. This puts us squarely in the domain of physics.

So next comes a physics overview. He does a good job discussing thermodynamics and arguing why modeling biology in thermodynamic terms cannot tell the whole story. While order can emerge via energy flow in a thermodynamic context, this happens when random behavior at the micro-level leads to macro-level order. In biology, order exists all the way down to the atomic and sub-atomic realm.

Of course, the physical theory of the atomic and sub-atomic realm is quantum mechanics (QM). McFadden presents his own very readable summary of QM, leaning heavily on the two-slit experiment as a heuristic device. His strategy is to show that quantum measurements are happening at the micro-level in living systems. He gives an example of an enzyme action that ultimately depends on a single proton, which we know must be in a superposition of states absent measurement. So, a living system must be measuring itself. His view is that the classical world depends generally on continual measurement for its manifestation. This discussion leads to the next key tool McFadden wants to use: the quantum Zeno effect (and inverse Zeno effect). This, he speculates, is what is responsible for directed action at the micro-level.
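
The quantum Zeno effect lends itself to a quick numerical illustration. The toy model below is my own sketch (nothing from McFadden’s book): a two-level system that, left alone, would flip completely from its initial state to the other one. Interrupting the evolution with many projective measurements keeps it frozen in place; the inverse effect McFadden invokes would instead use measurement to steer the system toward a target state.

```python
import numpy as np

# Survival probability of a two-level system undergoing Rabi oscillation,
# with projective measurements at regular intervals (quantum Zeno effect).
# Illustrative toy model only; units and frequency are arbitrary.

omega = 1.0          # Rabi frequency (arbitrary units)
T = np.pi / omega    # total time over which a free system fully flips |0> -> |1>

def survival_prob(n_measurements):
    """P(system still found in |0>) after n equally spaced measurements."""
    dt = T / n_measurements
    p_stay_per_step = np.cos(omega * dt / 2) ** 2  # standard two-level unitary
    return p_stay_per_step ** n_measurements

print(survival_prob(1))    # ~0.0: measured only at the end, the state has flipped
print(survival_prob(100))  # ~0.98: frequent measurement freezes the state
```

As the number of measurements grows, the survival probability approaches one: that is the "watched pot" behavior the Zeno effect names.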

With the review of QM in hand, he returns to a discussion of the origin of life and the question of how the first replicator was assembled (given the extreme improbability of it happening by chance). He theorizes that quantum superpositions could allow exploration of a large space of possibilities at the scale of an amino acid peptide chain. But the chances still seem small of making the self-replicator. However, harnessing the (inverse) Zeno effect could increase the probability. And, once you have a self-replicator, can we assume natural selection can do the rest of the job? No, there is still a big challenge here in getting a simple replicator to build the complex machinery of a cell. Moreover, in computer simulations, replicators tend to generate simpler systems, not more complex ones.

McFadden speculates that if a system on the edge of the classical frontier repeatedly fell back into quantum superposition and took advantage of the inverse quantum Zeno effect, this could have added complexity. Still, we haven’t been able to do anything like this in the lab.

And yet, the case seems relatively more compelling that non-trivial quantum effects are being exhibited in living cells (even if they are difficult or impossible to detect directly). To give credence to the existence of these effects, one can estimate that decoherence times would be long enough for them to occur in the relevant context. It is also important to note that only coherent systems are sensitive enough to be affected by the weak electromagnetic fields known to exist in the cellular realm. McFadden concludes that the quantum/classical barrier exists at the sub-cellular level of biology, and that organisms are composed of “quantum cells”.

Getting back once again to the definition of life, McFadden says the cell’s ability to “capture” low entropy states to maintain order at the microscopic level via (internal) quantum measurements and the quantum Zeno effect is responsible for the distinctive directed action which characterizes life.

In the final chapters, McFadden first reprises the discussion of the role of quantum effects in DNA mutation and adaptive evolution. Then, he closes with a theory of how quantum effects in the brain may be linked to human will and consciousness. While structures in the brain (ion channels) are of the appropriate scale to invoke QM, there remains the binding problem: how would quantum activities in the warm, wet brain be correlated across large-scale neuronal assemblies? McFadden’s solution is that coherent quantum systems are coordinated by an electromagnetic field. Indeed, his model of the EM field as a solution to the binding problem can be decoupled from the quantum biology discussion. To save space in this post, let me refer the reader to this link for a review of this idea at Conscious Entities.

On the one hand, this book consists of speculation stacked on speculation. On the other hand, each step progresses from features of physics or biochemistry that we know to be true. Between the spheres of quantum physics and the human mind lies the world of biology: I continue to look for arguments and evidence that biological systems have features that can bridge these realms. This book was a fine effort along this line.

Wednesday, October 18, 2006

Quantum Ontology and Whitehead

The current Journal of Consciousness Studies includes an interview with Henry Stapp conducted by Harald Atmanspacher (here’s my most recent post about Stapp; Atmanspacher is also a physicist interested in consciousness). I’ve been a bit skeptical in the past regarding Stapp’s specific proposals for how quantum effects are implemented in the human brain, but I mostly agree with his metaphysical views, including the connections he draws between the ontology of quantum mechanics and that of Whitehead. Below is one paragraph from the interview which I thought captured this well.

The natural ontology for quantum theory, and most particularly for relativistic quantum field theory, has close similarities to key aspects of Whitehead's process ontology. Both are built around psycho-physical events and objective tendencies (Aristotelian ``potentia'', according to Heisenberg) for these events to occur. On Whitehead's view, as expressed in his Process and Reality (Whitehead 1978), reality is constituted of ``actual occasions'' or ``actual entities'', each one of which is associated with a unique extended region in space-time, distinct from and non-overlapping with all others. Actual occasions actualize what was antecedently merely potential, but both the potential and the actual are real in an ontological sense. A key feature of actual occasions is that they are conceived as ``becomings'' rather than ``beings'' -- they are not substances such as Descartes' res extensa and res cogitans, or material and mental states: they are processes.

Thursday, October 12, 2006

Caution: Universe under Construction

Causal Dynamical Triangulations (CDT) is another research program in quantum gravity which features causality at the fundamental level. Renate Loll has a nice overview of the work she and her collaborators are pursuing. I read the paper “The Universe from Scratch” authored by Loll, Jan Ambjorn, and Jerzy Jurkiewicz. Helpfully, this particular paper was written with the intent to be accessible to those outside the field. The “claim to fame” of the CDT approach is that its microscopic quantum spacetime model exhibits four dimensions at macroscopic scales in computer simulations.

CDT is similar to Loop Quantum Gravity (LQG) in a couple of respects. It seeks to quantize the gravitational degrees of freedom in a background independent and non-perturbative manner. One difference from the historical development of LQG is that LQG first used spin networks to create a non-dynamical structure, and then sometime later spin-foam models were developed which used a path-integral approach to evolve the networks. CDT incorporates the path integral at the outset as its “most important theoretical tool”.

The idea behind the path integral is to create superpositions of all the “virtual” paths or configurations which the spacetime degrees of freedom (metric field variables from General Relativity) can follow as time unfolds. This sum over all the possible configurations is the quantum spacetime.

The second key idea is to constrain the set of geometries which contribute to the sum to those which implement causality. There had been earlier approaches similar to CDT referred to as “Euclidean path-integral approaches to quantum gravity” which lacked this feature and did not exhibit the right dimensionality at the macro level.

Before discussing the causal constraint specifically, Loll et al. outline the general method for choosing the class of “virtual” paths. As in quantum field theory, one needs some way to constrain or “regularize” the paths so you don’t get wildly divergent outcomes. CDT's method in the context of spacetime geometry is to use “piecewise flat geometries”, which are flat except for local subspaces where curvature is concentrated. The geometry used is a “triangulated” space (or Regge geometry). These “triangular” flat building blocks are “glued” together, so curvature appears only at the joints (the authors give a 2D pictorial example showing how curvature arises when the pieces are glued together). The motivation for this approach, the basic elements of which are not new, is that it is an economical way to build a discrete spacetime. Later, in computing the path integral, they take the short-distance cutoff to zero, which they say yields a final theory that does not depend on many of the arbitrary details that went into the construction.
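
The way curvature concentrates at the joints of a triangulated geometry can be seen with a little arithmetic. This sketch is my own toy 2D example (not from the paper): glue flat equilateral triangles around a shared vertex and compute the “deficit angle” there. Zero deficit means the surface is flat; a positive or negative deficit means curvature concentrated at that point.

```python
import math

# Toy 2D Regge-style illustration: curvature lives at the vertex where
# flat equilateral triangles are glued together, measured by the deficit
# angle 2*pi minus the sum of the interior angles meeting there.

def deficit_angle(n_triangles, interior_angle=math.pi / 3):
    """Deficit angle at a vertex shared by n flat triangles."""
    return 2 * math.pi - n_triangles * interior_angle

print(deficit_angle(6))  # 0.0 -> flat, like a plane tiled by triangles
print(deficit_angle(5))  # positive -> cone point (positive curvature)
print(deficit_angle(7))  # negative -> saddle point (negative curvature)
```

The triangles themselves are perfectly flat; all the geometric information sits in how many meet at each joint, which is what makes this an economical discrete encoding of curvature.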

So, with the geometries defined in this way: what ensemble of these should be included in the sum?

This is where causality comes into play. They again mention previous efforts which involved 4-dimensional Euclidean space, not 3+1 Lorentzian spacetime. It turns out there is no general way to relate a path integral over Euclidean geometries to one over Lorentzian geometries. So CDT needs to encode the causal Lorentzian structure right into the building blocks at the outset. If you impose appropriate causal rules, you can get four-dimensional spacetime to dominate at large scales. If this mirrors reality, it suggests that causality at sub-Planckian scales is what is responsible for the existence of 4D spacetime.

So what are the causal rules in the CDT approach? “They are simply that each spacetime appearing in the sum over geometries should have a specific form. Namely, it should be a geometric object which can be obtained by evolving a purely spatial geometry in time, in such a way that its spatial topology (the way in which space hangs together) is unchanged as a function of time (emphasis added).” The authors have used computer simulations to model the nature of this spacetime at different scales, and in these simulations the four-dimensional shape emerges (it is not at this stage an analytical result). One might naively ask why getting four dimensions to emerge is a big deal when the microscopic building blocks were also four-dimensional (3+1). But the result was far from predictable, given the complex fluctuations and divergences generated by the quantum superpositions: in previous “Euclidean” versions, the dimensionality at larger scales would vary all over the place even when the building blocks were four-dimensional.

The authors then discuss issues involving ongoing efforts to investigate other features of the spacetime model beyond dimensionality to see if they are consistent with gravity. They also discuss their hope that distinctively quantum gravitational cosmological predictions could be derived from the model.

An issue I have concerns the role of time in this model. It seems that a single time dimension is in place for all of the building blocks. This kind of global time directionality is philosophically less appealing than an approach which implements a strictly local time at the microscopic level (it also seems less consistent with relativity).

Monday, October 09, 2006

Emerging from the Noise

In “Towards Gravity from the Quantum”, Fotini Markopoulou (her homepage can be reached here if you click through on 'Faculty' then her name) describes work on a new approach to quantum gravity. This effort differs from other background independent approaches in that instead of seeking to quantize spacetime geometry, one starts with a microscopic description of a pre-spacetime quantum theory. Then, from this foundational theory, emerge “dynamically-selected excitations, that is coherent degrees of freedom which survive the microscopic evolution to dominate our scales.” The filter which enables emergence is borrowed from a process studied in quantum information theory. The plan, finally, is to define spacetime in terms of the interactions of these emergent excitations.

The pre-spacetime microscopic theory uses the formalism of quantum causal histories (“QCH” – which I introduced at the end of my prior post). QCH is “a locally finite directed network (graph) of finite-dimensional quantum systems.” There have been several ways QCH has been used to approach quantum gravity, which are reviewed in the paper. The emphasis is on this new approach which develops it as a “quantum information processor, which can be used as a pre-spacetime theory.”

In QCH (described in section 1.2 of the paper), the geometric paths followed in the graphs are constrained to be finite. Then quantum systems are associated with the graph. As I mentioned in the last post, early efforts attached Hilbert spaces to the relations/edges on the graphs. More recently, QCH uses a description of the evolution of an open quantum system called a “completely positive map” (alternatively, a “quantum channel”) to define the relations/edges between the vertices, which are (finite dimensional) Hilbert spaces and/or matrix algebra of operators acting on a Hilbert space.

So, for every edge on the graph, there is a completely positive map or quantum channel (an evolution from one vertex/Hilbert space to another).
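
For readers unfamiliar with completely positive maps, here is a minimal sketch (my own toy example, not from Markopoulou's paper) of a quantum channel in Kraus form: a phase-damping channel on a single qubit, which destroys coherence while preserving the trace. It is this kind of object, an open-system evolution from one Hilbert space to another, that QCH attaches to each edge.

```python
import numpy as np

# A completely positive, trace-preserving map ("quantum channel") in Kraus
# form: rho -> sum_k K_k rho K_k^dagger, with sum_k K_k^dagger K_k = I.
# Toy example: single-qubit phase damping with probability p (illustrative).

p = 0.3  # dephasing probability (arbitrary choice)
kraus = [
    np.sqrt(1 - p) * np.eye(2),
    np.sqrt(p) * np.diag([1.0, 0.0]),
    np.sqrt(p) * np.diag([0.0, 1.0]),
]

def apply_channel(rho, kraus_ops):
    """Evolve density matrix rho through the channel defined by kraus_ops."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

# The superposition state |+><+| loses off-diagonal coherence but keeps
# unit trace: off-diagonals shrink from 0.5 to 0.5*(1-p).
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
out = apply_channel(plus, kraus)
print(out)
print(np.trace(out))  # 1.0: trace preserved
```

The completeness condition on the Kraus operators is exactly what makes the map trace-preserving, so probability is conserved along each edge of the graph.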

I will skip over some of the details of this construction, but it looks like the other important part is to impose constraints on the evolution that preserve local causality, in the spirit of the original causal set theory. Specifically, the source set for a given set of paths maps uniquely to its range set, thus imposing local causality when the edges are viewed as causal relata.

So, we have in QCH a (relatively) simple structure of open quantum systems, which form a local causal network.

Markopoulou describes several ways this structure has been used. It can be the basis of a discrete algebraic quantum field theory (section 1.3). More to the point, another way is to use it to create a path toward quantum gravity by taking quantum superpositions of the geometries defined by the model (section 1.4). This methodology has been part of the development of spin foam models as well as Causal Dynamic Triangulations (“CDT” -- which will be the subject of my next post). To make a causal spin foam model, you adapt QCH to spin network graphs, which are used in loop quantum gravity. Then you obtain a path integral of the superpositions of all constrained members of the set.

She next discusses the problems spin foam models have had recovering spacetime at lower energy scales (she notes briefly that the CDT model has had more success due to its unique features). The background independence of these models and the difficulty of implementing their dynamics make it hard to apply the coarse-graining techniques used elsewhere to recover sensible low-energy physics.

The idea she wants to focus on is that instead of summing over quantum geometries and trying to coarse-grain directly, one can first look for long-range propagating degrees of freedom that arise from the quantum systems, and then reconstruct the geometry from these.

She borrows from quantum information processing theory the notion of a “noiseless subsystem” in quantum error correction: a subsystem protected from the noise, usually thanks to symmetries of the noise (section 1.5). The analogy is that the “noise” is simply the fundamental microscopic evolution, and the existence of a noiseless subsystem means a coherent excitation protected from the microscopic evolution. So we split the paths (quantum channels) into subsets A and B, where B is noiseless. B is an emergent subsystem (similar to the idea of a “decoherence-free” subsystem). I didn’t follow all the formalism describing noiseless subsystems, but I infer it’s a topic well discussed in quantum information theory.
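
A drastically simplified sketch of the noiseless-subsystem idea (my own toy example; the paper's construction relies on symmetries of the noise rather than this trivial split): noise acts only on subsystem A of a two-qubit system, so subsystem B's state survives the evolution untouched, playing the role of the protected, emergent degree of freedom.

```python
import numpy as np

# Minimal "noiseless subsystem" toy: random unitary noise hits qubit A only,
# leaving qubit B's reduced state exactly invariant. Far simpler than the
# symmetry-based construction in the quantum error correction literature.

rng = np.random.default_rng(0)

def random_unitary():
    """A Haar-ish random 2x2 unitary via QR decomposition."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, _ = np.linalg.qr(m)
    return q

def partial_trace_A(rho):
    """Trace out qubit A of a 4x4 density matrix (A first factor, B second)."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

rho_A = np.array([[0.5, 0.5], [0.5, 0.5]])    # |+><+|
rho_B = np.array([[0.75, 0.0], [0.0, 0.25]])  # some fixed state of B
rho = np.kron(rho_A, rho_B)                   # product state rho_A (x) rho_B

for _ in range(10):                 # noisy evolution: U_A (x) I at each step
    U = np.kron(random_unitary(), np.eye(2))
    rho = U @ rho @ U.conj().T

print(partial_trace_A(rho).real)    # still rho_B: subsystem B is noiseless
```

In the real construction the protected subsystem is picked out by symmetries of the noise rather than by acting on disjoint qubits, but the payoff is the same: degrees of freedom that the microscopic evolution cannot touch.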

Next, she investigates the idea that we have an emergent spacetime if these emergent coherent excitations behave as though they are in a spacetime. This subset of protected degrees of freedom (or coherent excitations) and their interactions will need to be invariant under Poincare transformations.

In preparation for the next section she works through some formalism to show more clearly how the sum over causal histories does include some of these coherent excitations. These turn out to be “braidings” of graph edges which are unaffected by the noise of evolution.

In section 1.6, Markopoulou describes her model whereby QCH is a pre-spacetime quantum information processor from which degrees of freedom emerge; interactions of these are theorized to be the events of our spacetime. (There are no separate gravitational degrees of freedom to be quantized).

She argues this model demonstrates a deeper form of background independence compared to other theories, since the microscopic geometric degrees of freedom don’t survive as part of the description of emergent spacetime.

The QCH quantum channels (graph edges) referred to before should now be viewed as information flows between quantum systems (vertices) with no reference to having spatio-temporal attributes at all.

Then, one analyzes the emergent coherent degrees of freedom (noiseless subsystems) and their interactions. She proposes that these can constitute an emergent Minkowskian spacetime if they are Poincare invariant at the relevant scale.

Important to this possibility is that the noiseless subsystems are not localized; they exhibit a global symmetry which allows them to be emergent at larger scales. They constitute their own “macro-locality”, which is unrelated to the original micro-locality of the QCH graphs. Markopoulou outlines the promise and possible shortcomings of this approach, and says more work is underway to develop these ideas. Importantly, she and her collaborators have not yet gotten gravity (Einstein’s equations) back out. On the other hand, when he mentioned this work in his book, Lee Smolin seemed excited by the idea that the emergent “particles” from this kind of approach might have the potential to lead to particle theory in addition to gravity (most background independent approaches to quantum gravity set aside the problem of matter fields, at least initially, in their pursuit of gravity).

She finishes with a note on time:
Just as the emergent locality has nothing to do with the fundamental micro-locality, time and causality will also be unrelated between the macro and micro levels. So, the theory “puts in” time at the micro-level (via its causality constraints), but emergent spacetime will have no preferred time slice, as required in general relativity.

I think this distinction between microscopic and macroscopic time has interesting implications for thinking about causality. A theory like Markopoulou’s suggests that while causality is fundamental at the local level, the macroscopic “laws of physics” are emergent regularities.

Wednesday, October 04, 2006

Causality First

As discussed in my last post, Lee Smolin concluded that the most promising models of quantum gravity include causality as a fundamental feature. In the next couple of posts, I’ll outline some notes from my reading regarding how this idea has been developed in several research programs. Below are brief notes on causal set theory and quantum causal histories. As always, I will make mistakes in my efforts to summarize some of the material, so please check out the papers themselves if you have interest.

Causal Sets

A very nice paper (actually lecture notes) by Rafael Sorkin was helpful in summarizing causal set theory and its application to spacetime (hat tip: Dan Christensen’s page of quantum gravity links).

As he tells the story, the idea of seeing the causal order of relativistic spacetime as its most fundamental aspect was present in the earliest days of Einstein’s theory. There were early efforts which made some headway in recovering the geometry of a flat four-dimensional spacetime from nothing more than the underlying point set and a timelike vector among points. The fact that the points in the manifold constitute a continuum proved to be an obstacle to this methodology. But of course the development of quantum mechanics gave good independent motivation to consider discrete models of spacetime geometry. In a discrete model, causal order plus the counting of the discrete volumes offers the potential of recovering spacetime geometry.

According to Sorkin: “The causal set idea is, in essence, nothing more than an attempt to combine the twin ideas of discreteness and order to produce a structure on which a theory of quantum gravity can be based.”

The basic idea of a causal set (or “causet”) is easy enough for a layperson to understand. Sorkin defines (p.6) a causal set as a locally finite ordered set: a set endowed with a binary relation possessing three properties – transitivity, irreflexivity, and local finiteness (which implies discreteness). The combination of transitivity and irreflexivity rules out causal cycles, in which a chain of relations would loop back to its starting element. The set can be depicted as a graph (with the elements as vertices and the relations as edges) or as a matrix, or it can be helpful to think of it as a “family tree”. The relation between elements is one of “precedes” or “lies to the past of”, etc.
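
The axioms are simple enough to check mechanically. A minimal sketch (my own illustration, with hypothetical element labels) representing a causet as a set of ordered pairs:

```python
# A tiny causal set as a set of ordered pairs (x, y), read "x precedes y".
# Sorkin's axioms: transitivity and irreflexivity (together ruling out
# causal cycles), plus local finiteness (automatic for a finite set).

relations = {("a", "b"), ("b", "c"), ("a", "c"), ("a", "d")}

def is_transitive(rel):
    """If x precedes y and y precedes z, x must precede z."""
    return all((x, z) in rel for (x, y1) in rel for (y2, z) in rel if y1 == y2)

def is_irreflexive(rel):
    """No element precedes itself."""
    return all(x != y for (x, y) in rel)

print(is_transitive(relations))   # True: a<b and b<c, and a<c is included
print(is_irreflexive(relations))  # True: no element precedes itself
```

Dropping the pair ("a", "c") would break transitivity, which is what forces the "family tree" picture: every indirect ancestor relation must be present in the set.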

Sorkin goes on to summarize work which looks to recover the elements of spacetime by analyzing the kinematics of a causal set. As an example, he says it is known that the length of the longest chain “provides a good measure of the proper time (geodesic length) between any two causally related elements of a causet that can be approximated by a region of Minkowski space.” He discusses how to reconstruct other constituents of Minkowski space (M4) from its causal order and volume-elements. He then talks about recovering more geometric information, such as dimensionality. This work seemed to be less definitive in its results.
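
The longest-chain estimate of proper time is easy to illustrate as well. A small sketch (my own example, with a hypothetical "diamond"-shaped causet and made-up labels) that computes the longest chain between two causally related elements:

```python
from functools import lru_cache

# Longest chain between two causally related causet elements -- the count
# Sorkin says approximates the proper time (geodesic length) between them
# when the causet approximates a region of Minkowski space.

links = {            # covering relations: x -> elements immediately after x
    "p": ["q1", "q2"],
    "q1": ["r"],
    "q2": ["s", "r"],
    "s": ["r"],
    "r": [],
}

@lru_cache(maxsize=None)
def longest_chain(x, y):
    """Links in the longest chain from x to y (-inf if y is not above x)."""
    if x == y:
        return 0
    best = max((longest_chain(z, y) for z in links[x]), default=float("-inf"))
    return 1 + best if best != float("-inf") else float("-inf")

print(longest_chain("p", "r"))  # 3: p -> q2 -> s -> r beats p -> q1 -> r
```

Note it is the longest chain, not the shortest, that plays the role of the geodesic: in Lorentzian geometry the straight timelike path between two events maximizes proper time.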

Next, he discusses routes toward causal set dynamics, and hopefully a quantum causal set dynamics. Sorkin describes an idea where you create a classical stochastic evolution of the set in a (global) time direction, and modify the results to create a quantum dynamics (this part I don’t yet understand).

Some cosmological applications are discussed, the most exciting of which (mentioned by Smolin in his book) is a correct order of magnitude prediction for the cosmological constant which emerges from the model. Given these results, Smolin expects the causal set research program to be an ongoing part of the background-independent approaches to quantum gravity.

Quantum Causal Histories

Causal Set theory is discrete, but at its roots not distinctively quantum (as I read it).

Quantum Causal Histories (QCH) is a model which welds quantum features to the causal elements. As described by Fotini Markopoulou (her home page is reached by clicking through to "faculty" then her name on this Perimeter Institute link) in this neat little 6-page review, the idea is to “quantize” the causal structure by attaching Hilbert spaces to the events of a causal set. “These can be thought of as elementary Planck-scale [quantum] systems that interact and evolve by rules that give rise to a discrete causal history.”

Actually, she discusses the fact that the finite dimensional Hilbert spaces, following the rules of quantum mechanics, would not in general respect local causality if attached to events. Instead she says one should attach them to the causal relations (edges on a graph), with operators put on the events (nodes or vertices). Then the quantum system evolution respects local causality. There is an intuition here also that an event denotes a change, and so fits with the notion of an operator. Note also that when you link quantum systems together like this, there is no global Hilbert space or wavefunction for the whole system: if these building blocks are built up into a cosmological model, there would be no wavefunction for the universe, but a collection of local ones. By the same token, there is no observer of a global quantum system outside the universe; all the observers are on the inside.

There are different ways to link the spaces up (different kinds of graphs). One way is to use spin networks, as in loop quantum gravity. When spin networks are used in a model of the causal evolution of quantum spatial geometry, the nodes of the spin network graph are the events in a causal set. Markopoulou details this notion and other ways QCH can be modeled. I should note that in later papers, the QCH structure appeared to be refined further; for instance this paper refers to substituting matrix algebras of operators for the Hilbert spaces.

This short paper concludes with references to work underway (this was a 1999 paper) to use QCH as a structure for a quantum gravity model. I will come back to QCH as part of a subsequent post with my notes on a much more recent paper by Markopoulou featuring work mentioned in Smolin’s book.