Tuesday, December 12, 2006

Gödel’s Platonism

I got around to reading Rebecca Goldstein’s brief and engaging book on Gödel -- Incompleteness: The Proof and Paradox of Kurt Gödel. The book centered on the irony that Gödel’s own philosophical interpretation of his work (which indeed may have driven his efforts to begin with) was in complete opposition to how it was most commonly interpreted by others.

Gödel was a Platonist, believing that the mind is able to make contact with absolute mathematical reality. Given that he was an attending member of the Vienna Circle in the 1920s, which was the locus of logical positivism, many assumed he was of like mind, believing there was no truth beyond what man could empirically discover. Gödel’s extreme reluctance to speak or write on his views helped make this misunderstanding possible. Indeed, the incompleteness theorems have often been co-opted by sloppy post-modernists (along with relativity theory and the uncertainty principle) in making the case for truth relativism. They would focus on the conclusion that we can’t construct formal systems (rich enough to encompass at least arithmetic) that are both complete and provably consistent, and treat this fact as revealing a limitation in our ability to reach absolute truth. Gödel believed the actual lesson was that the human mind can and does perceive truth beyond the capability of formal systems (equivalently, algorithmic computing machines).
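For reference, here is a schematic statement of the two theorems in play; this is my own paraphrase of the standard textbook formulations, not Goldstein's wording.

```latex
% Schematic statements of the incompleteness theorems, for a consistent,
% recursively axiomatizable formal system F that interprets enough arithmetic:
\begin{align*}
\text{(G1)}\quad & \exists\, G_F \ \text{ such that }\ F \nvdash G_F
  \ \text{ and }\ F \nvdash \neg G_F
  && \text{(F is incomplete)}\\
\text{(G2)}\quad & F \nvdash \mathrm{Con}(F)
  && \text{(F cannot prove its own consistency)}
\end{align*}
```

Nothing in (G1) or (G2) mentions minds or truth "in itself"; the philosophical readings sketched above are layered on top of these purely formal results.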



[UPDATE 12 January 2012:  For a review quite critical of Goldstein, see Solomon Feferman's here.  He argues that she has no basis for suggesting that Gödel's later Platonism motivated his earlier seminal work.]

To digress a moment, I just about forgot I have an old post on Gödel. This blog is ably serving one of its functions -- an external memory module. In that post I noted the consensus of experts that while the incompleteness theorems may point toward philosophical conclusions (such as thinking the mind surpasses a computer), they don’t provide any proofs after you depart their formal setting. However, one philosophical stance I said they did appear to support was the notion of an ultimate limit on “objective” knowledge. (Note I also maintained that such an observation need not lead to thorough-going relativism.) Now, in reading more about Gödel’s own views, I’m not feeling confident that assertion quite captures things.

In one of those happy coincidences, the recent update to the Online Papers in Philosophy blog maintained by Jonathan Ichikawa [UPDATE: this blog no longer exists] had a paper by eminent mathematical logician Solomon Feferman which examined one of Gödel’s rare talks (the 1951 “Gibbs” lecture). In the talk, Gödel presented the philosophical implications in terms of a disjunction thus: “Either…the human mind…infinitely surpasses the powers of any finite machine, or else there exist absolutely unsolvable Diophantine problems.” By the most reliable accounts, Gödel did indeed believe that the mind surpassed finite systems and therefore there were no (ideally) unsolvable problems, but he expressed a bit of caution in this talk by presenting the disjunction.

Feferman, in keeping with the usual deflationary mode of papers by experts on this topic, respectfully shows how the imprecision and complexity of these issues prevent one from reaching a logical proof of Gödel’s claim (or even ruling out that the disjuncts could both be true). Once again, the broad statements about the mind and its capabilities can’t be derived from the mathematical arguments which inspire them. Still, many interesting facets of these issues are illuminated in the discussion.

The Feferman paper included one (unpublished) Gödel quote which struck me as very insightful. Gödel (responding to something Turing had written) says: “…mind, in its use, is not static, but constantly developing, i.e., we understand abstract terms more and more precisely as we go on using them…though at each stage the number and precision of the abstract terms at our disposal may be finite, both…may converge toward infinity…” We are clearly finite and contingent creatures, and so the idea that we are in direct contact with Platonic truths is hard to support; but we do appear to have a gift of rationality which allows us to converge toward absolute truths. I see a connection here to the idea of modal rationalism, which I’ll explore in a future post.

One more note: that OPP update had another paper which touches on this topic, Philip Ebert’s “What Mathematical Knowledge could not be”. This is a nice survey of positions on the reality of mathematical objects; it doesn’t itself advance an argument.

Wednesday, November 29, 2006

Widening the Trail

Well, to follow up on the last post, I liked Strawson’s new paper a lot, but will take note that his “journey” took us to some places which seem pretty familiar.

His realism about experience led to a rejection of materialist monism, and thinking about how both experiential and non-experiential being can co-exist in any naturalistic way led him to consider panexperientialism (see Chalmers, 1996). He discussed the notion of inside and outside perspectives as a way to explain why reality may seem to have dual modes (Nagel 1974). He outlined the idea that the experiential could be seen as the categorical basis of reality, while the non-experiential is the dispositional/relational dimension of reality investigated by physics (Russell 1927). At this point, trying to make further sense of these ideas took him outside the arena of philosophy of mind, strictly speaking, to a discussion about how the panexperientialist view relates to the topics of causality and the composition of individuals (see Rosenberg 2004, and, of course, Whitehead 1928).

I don’t mean this comment to be too critical: I think all this is a good thing. Strawson’s work, presented in his unique style, helps widen the trail and thus bolsters the case for these ideas.

[Updated 16 March 2009 for broken links]

Friday, November 24, 2006

Strawson Continues His Journey

[UPDATE 16 March 2009: the link to the paper mentioned below is unfortunately broken -- Strawson has a new home page, but it no longer has the link; the JCS special issue is also available on amazon.]

The recent issue of JCS consisted of Galen Strawson’s paper “Realistic Monism: Why Physicalism Entails Panpsychism” as the target article, a number of critical responses from other philosophers, and finally Strawson’s reply. Recall that this was the paper which argued that realism about experience, and a rejection of brute emergence of experience from non-experiential parts, leads to panexperientialism. Except that instead of simply replying to his critics, Strawson offered extended further arguments and speculations on the problem of experience in its full metaphysical dimension with, as a bonus, a lengthy digression on Descartes (arguing that the usual “substance dualist” label is something of a caricature -- in various writings/correspondence he expressed a desire to avoid distinguishing substance from its attributes).

I thought the whole thing was a marvelous read and a great contribution: seemingly eccentric yet an honest and insightful inquiry. It is good to see him bring more attention to the arguments for panexperientialism.

I’m not going to try to do justice to the whole thing (nearly 100 pages!), but let me discuss some highlights. (Unfortunately, as far as I can see, it is not online at this time.)

The distillation of the problem is that he (and many of us) want to affirm that experiential and non-experiential truths are both fundamental, and that neither can be reduced to the other -- yet at the same time we would like to have a monism.

Strawson offers this definition of “Equal-Status Fundamental Duality monism” (ESFD monism for short):

Reality is substantially single. All reality is experiential and all reality is non-experiential. Experiential and non-experiential being exist in such a way that neither can be said to be based in or realized by or in any way asymmetrically dependent on the other (etc.)


The question is how reality can be fundamentally dual yet single. How can we develop such a view?

Before going on, I should mention a subsidiary thesis in Strawson’s thinking which was brought out in one of the commentaries, that is, a commitment to “smallism”: “All facts are (fully) determined by ultimates.” In other words, our tendency to view reductive explanations as good explanations is endorsed: the question is what sort of ultimate or ultimates can support reality (a post on this topic is here).

Also, to the extent that experience is determined to be an ultimate, Strawson doesn’t think that a separate subject of experience is required at the fundamental level (see Justin’s post at his Panexperientialism blog for more on this).

But, acknowledging the tension and possible incoherence in combining a duality and a monism in the above definition, Strawson next steps back to consider what happens if we give up on the fundamental duality. He asserts that if either experiential or non-experiential being has to “give way”, it cannot be the experiential which does so, given that it is what we know directly, at least in certain respects (Strawson also discusses the epistemology of this direct acquaintance later in the paper). If, for the purposes of our analysis, we let the non-experiential give way, we are left with the notion that the “energy-stuff that makes up the whole of reality is itself something that is experiential in every respect. The universe consists of experience…arrayed in a certain way”. To be clear, we’re not talking about passive experiential content of some sort to be perceived by somebody; rather, the active energy of nature is itself intrinsically experiential. Strawson calls this position “pure panpsychism”.

(Terminological note: for Strawson there is no technical reason to distinguish between “panpsychism” and “panexperientialism” given his view that at the micro-level, there is no distinction between experience and a subject of experience. I think panexperientialism is to be preferred for clarity: no one is attributing human-style minds to more fundamental units of nature.)

So the three options are radical eliminativism about experience, ESFD monism, or pure panpsychism. The first is a non-starter, while the second is appealing but probably incoherent as stated. Can we make pure panpsychism work?

Strawson next considers challenges facing this notion of experience as the active energy of nature. He suggests that space must supervene on this ultimate energy/activity, rather than being a container for it. Causality and the laws of nature must likewise arise from the workings of this ultimate experiential activity. The biggest challenge, though, is what is often called the combination problem, but which Strawson calls the composition problem.

The composition problem, dating to William James, is how the macro-level experience we’re acquainted with could be built from micro-level experiences.

Now, some of the commentaries on the target paper argue that the composition problem counts decisively against panpsychism. One criticism of this type argues that Strawson’s assumptions about our epistemological acquaintance with experience imply that we would be “privy to” the micro-level experiences themselves, and this is not the case. Strawson denies the need for any commitment to such a “full revelation” epistemology: we are only acquainted with some aspects of experience (partial revelation). (For a description of another critique based on the composition problem and a counter-argument, I refer you to another fine post at the Panexperientialism blog.)

Toward the end of the paper, Strawson puts forth some speculative thoughts on the composition problem and how things must be if panpsychism really describes nature. He ponders the fact that our acquaintance with experience is “from the inside”. It must then also have an “outside”. This leads to a glimmer of how a kind of ESFD monism could be true, if the inside and outside are both essentially aspects of the being of an experience. But this “outside” can’t be something ontologically distinct from the experience.

His musings turn next to issues of causation and composition. We might say the outside of an experience is its relation with other experiences, including its relation to experiences which compose it, or of which it is a part. Key to thinking about how to characterize these ideas is to stress the active rather than passive nature of experience. Rather than an atom of experience as a fundamental unit, we might speak of an “experiencing”.

He says there would be a first-person ontology intrinsic to an experiencing, but experiencings must exist in a fashion which gives rise to what we think of as third-person phenomena. These two “perspectival” realities coexist with the monistic reality that all is experience.

With regard to the composition problem, it must work something like this: “…experiencings… can be as they are to themselves…compatibly with their having causal effects on other [experiencings] and compatibly with their part in constituting other…distinct [experiencings].” And given the “Laws of Experiential Nature, whatever they are,” when one constitutes part of another the second will not have access to the inside nature of the first in the same way the first does.

To summarize:

Experiential realities may be said to function as non-experiential but experience-causing realities for other experiential realities, and to function as non-experiential but experience-constituting realities for other experiential realities. Again, it may be said that although there is no non-experiential being absolutely speaking, there is non-experiential being relatively or relationally speaking. [Emphasis original]


He says that while these thoughts are speculative, the speculation is not uncontrolled and, more importantly, not unwarranted. A commitment to the reality of experiential being, combined with a subsidiary commitment to “smallism”, leads directly to this kind of account.

Given the length of this post, I will offer some additional comments of my own in a follow up.

Friday, November 17, 2006

Modal Realism and the Cosmological Argument

It seems to follow from a certain kind of realism about metaphysical possibilities that a version of the cosmological argument goes through. The idea is that if the space of possibilities exists, then it exists necessarily. The actualized concrete events of the world are contingent and depend on the necessary space of possibilities.

There are many variations, but the cosmological argument states that the chain of events needs a necessary first cause to get started. Or else it is cast in terms of arguing that contingent things ultimately must depend on a necessary self-existent thing. In the model under consideration here contingent things (events) are actualizations of possibilities. A given event is subject to causal constraint by prior or adjacent events but is always also dependent on the space of possibilities.

There seems to be no well-motivated reason to consider an objection involving, say, an infinite chain of meta-modal spaces upon which the first-order space of possibilities depends. So the space of possibilities would be a self-existent necessary entity, and the argument goes through.

[UPDATE: 5 February 2009 -- This modal realism-inspired cosmological argument should not be confused with other arguments which go by the name "modal". Those arguments (which don’t work, IMO) try to use modal logic to imply theism: see discussion here for instance.]

Friday, November 10, 2006

Modal Realism, Modal Rationalism

The case for modal realism can be motivated in a few ways. It often begins with contemplating the everyday modal propositions we make about the world. We seem to know that things might have been different, after all, and there are different ways the world could be in the future. Some dinosaurs might have survived the meteor catastrophe. I might have tried to avoid traffic by leaving earlier this morning. It’s possible that I might stop writing this post now, and finish it later. By virtue of what can these propositions be considered true or false? Being of realist persuasion, I think there must be something in the reality outside our minds to provide truthmakers for these propositions (my prior posts on modality and modal realism can be found here).

Modal realism also seems linked to realism about causality, when the causal connection is seen as a counterfactual dependence. I might not have checked my e-mail: if I hadn’t, I would not have seen your message.
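For what it's worth, the standard way of making counterfactual dependence precise is Lewis's analysis; the following is my own schematic gloss, not anything argued in this post.

```latex
% Lewis-style counterfactual dependence between distinct actual events c and e,
% where O(x) reads "x occurs" and \boxright is the counterfactual conditional
% (the \boxright symbol comes from the stmaryrd package):
e \text{ causally depends on } c \;\iff\;
\bigl(O(c) \boxright O(e)\bigr) \,\wedge\, \bigl(\neg O(c) \boxright \neg O(e)\bigr)
```

The modal realist's point is then that the right-hand side needs truthmakers: something in reality must ground which counterfactuals hold.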

In a deflationary metaphysics where there exists one world subject to deterministic laws, all connections are necessary. Possibility and contingency would only be illusions. They would only exist in our minds. (Likewise it seems to me that the directional flow of time and causality would have no parallel in the world; necessary connections are symmetric.)

But given a naturalistic worldview, our minds arise from the same stuff as the rest of the world. Is it plausible that a world which lacks real possibility would give rise to creatures for whom the notion is indispensable?

Of course, we know strict determinism is false, given the real indeterminism present in quantum mechanics. I have argued elsewhere for an interpretation which sees the quantum states as incorporating real (although not concrete) possibilities, while measurements are concrete actualization events.

Leaving aside for now the tough problem of describing the modal space in any detail (see note below), I am intrigued by the notion that the modal propositions and modal reasoning we employ are grounded in a modal reality.

The thesis of modal rationalism, as explored by David Chalmers in section 10 of this paper, is the idea that our notion of what is conceivable (logically possible) does indeed match what is metaphysically possible. He examines and argues against proposals that these need be distinct modal spaces.
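Very roughly, and in my own schematic rendering (Chalmers qualifies the thesis far more carefully, distinguishing kinds of conceivability and possibility):

```latex
% Pure modal rationalism, schematically: ideal conceivability entails
% metaphysical possibility, so the two modal spaces coincide.
\forall p \;\bigl(\, \text{ideally conceivable}(p) \;\rightarrow\; \Diamond p \,\bigr)
```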

If I couple modal rationalism with a modal realism which posits that metaphysical possibilities really exist (outside the cranium), this creates what I think is a fascinating picture. We, along with all natural phenomena, are continually actualized from a space of possibilities: our roots in this space form the basis of our evolved faculty for modal reasoning.


Note: For a great discussion of how to flesh out the space of metaphysical possibility, and related issues including modal rationalism, I recommend Richard Chappell’s recent draft paper.  [UPDATE 25 January 2010:  here's a link to a pdf of the final paper from Chappell's website.]

Monday, October 30, 2006

McFadden's Quantum Biology

My little series of posts (see here) on quantum biology was missing a review of Johnjoe McFadden’s book of a couple of years ago, Quantum Evolution. Below, I take a look at this speculative but well-written and detailed account of how quantum effects may be responsible for distinctive features of life and mind.

McFadden is a professor of molecular genetics who wrote this book for a popular audience back in 2000. Excerpts of the book appear here (evidently with the author’s permission). McFadden begins with a discussion of what defines life. He gives a brief history beginning with Aristotle and progressing through the triumphs of reductionist biochemistry over believers in vitalism. But after discussing the famously difficult problem of providing a precise definition of life, he concludes that “directed action” is a key notion. This is something analogous to the appearance of “will” in humans or higher animals. Moreover this directed action takes place all the way down to the microscopic level within organisms. Organisms are characterized by order via directed action at scales large and small (unsurprisingly, for a book on this subject, Erwin Schrödinger’s What is Life? is quoted several times, including his statement that life is “order from order”).

Prior to presenting the core arguments for quantum effects in life, McFadden reviews evolution and DNA replication. He presents the case that quantum-tunneling effects are one of the significant sources of mutation (in itself, I think this is generally accepted). He then discusses whether this could be responsible for some of the remaining challenges in understanding the workings of DNA evolution. He mentions the very controversial theory that adaptive mutations may occur at a frequency greater than chance. He will return to this subject later in the book.

Next is a discussion of the biggest mystery of biology, the origin of life. He discusses the inability of researchers to create primordial pre-cellular replicators in the laboratory. He reviews and criticizes some of the ideas on the origin of life that have been put forward: ideas from complexity theory, models of an ‘RNA world’, and appeals to the anthropic principle.

On his way toward providing his own answer, McFadden next takes a closer look at biochemistry, showing that as you drill down into particular biological functions you find they are driven by directed movements of individual protons or electrons via the electromagnetic force. This puts us squarely in the domain of physics.

So next comes a physics overview. He does a good job discussing thermodynamics and arguing why modeling biology in thermodynamic terms cannot tell the whole story. While order can emerge via energy flow in a thermodynamic context, this happens when random behavior at the micro-level leads to macro-level order. In biology, order exists all the way down to the atomic and sub-atomic realm.

Of course, the physical theory of the atomic and sub-atomic realm is quantum mechanics (QM). McFadden presents his own very readable summary of QM, leaning heavily on the two-slit experiment as a heuristic device. His strategy is to show that quantum measurements are happening at the micro-level in living systems. He gives an example of an enzyme action that ultimately depends on a single proton, which we know must be in a superposition of states absent measurement. So, a living system must be measuring itself. His view is that the classical world depends generally on continual measurement for its manifestation. This discussion leads to the next key tool McFadden wants to use: the quantum Zeno effect (and inverse Zeno effect). This, he speculates, is what is responsible for directed action at the micro-level.
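To get a feel for the quantum Zeno effect he invokes, here is a minimal numerical sketch of my own: a generic two-level system, nothing specific to McFadden's enzyme example. Frequent projective measurement suppresses a transition that would otherwise certainly occur.

```python
import numpy as np

# Toy quantum Zeno effect: a two-level system undergoing a slow rotation away
# from |0> is repeatedly measured in the {|0>, |1>} basis. The more often we
# measure during a fixed total evolution, the more likely the system is still
# found in |0> at the end.

def survival_probability(n_measurements, total_angle=np.pi / 2):
    """Probability of ending in |0> after n equally spaced projective
    measurements, given a fixed total rotation angle."""
    theta = total_angle / n_measurements       # rotation between measurements
    p_stay_each = np.cos(theta) ** 2           # Born rule for remaining in |0>
    return p_stay_each ** n_measurements       # every measurement finds |0>

for n in (1, 2, 10, 100, 1000):
    print(f"{n:5d} measurements -> survival probability {survival_probability(n):.4f}")

# As n grows the survival probability approaches 1: frequent measurement
# freezes the transition. The inverse Zeno effect McFadden also appeals to is
# the flip side: a suitably chosen sequence of measurements can steer the
# system toward a target state rather than freeze it.
```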

With the review of QM in hand, he returns to a discussion of the origin of life and the question of how the first replicator was assembled (given the extreme improbability of it happening by chance). He theorizes that quantum superpositions could allow exploration of a large space of possibilities at the scale of an amino acid peptide chain. But the chances of making the self-replicator still seem small. However, harnessing the (inverse) Zeno effect could increase the probability. And, once you have a self-replicator, can we assume natural selection can do the rest of the job? No, there is still a big challenge here in getting a simple replicator to build the complex machinery of a cell. Moreover, in computer simulations, replicators tend to generate simpler systems, not more complex ones.

McFadden speculates that if a system on the edge of the classical frontier repeatedly fell back into quantum superposition and took advantage of the inverse quantum Zeno effect, this could have added complexity. Still, we haven’t been able to do anything like this in the lab.

And yet the case that non-trivial quantum effects are at work in living cells seems relatively more compelling (even if they are difficult or impossible to detect directly). To lend credence to the existence of these effects, one can estimate that decoherence times would be long enough for them to occur in the relevant context. Also important to note is that only coherent systems are sensitive enough to be affected by the weak electromagnetic fields which are known to exist in the cellular realm. McFadden concludes that the quantum/classical barrier exists at the sub-cellular level of biology, and that organisms are composed of “quantum cells”.

Getting back once again to the definition of life, McFadden says the cell’s ability to “capture” low entropy states to maintain order at the microscopic level via (internal) quantum measurements and the quantum Zeno effect is responsible for the distinctive directed action which characterizes life.

In the final chapters, McFadden first reprises the discussion of the role of quantum effects in DNA mutation and adaptive evolution. Then, he closes with a theory of how quantum effects in the brain may be linked to human will and consciousness. While structures in the brain (ion channels) are of the appropriate scale to invoke QM, there remains the binding problem of how activities in the warm, wet brain could be correlated across large-scale neuronal assemblies. McFadden’s solution is that coherent quantum systems are coordinated by an electromagnetic field. Indeed, his model of the EM field as a solution to the binding problem can be decoupled from the quantum biology discussion. To save space in this post, let me refer the reader to this link for a review of this idea at Conscious Entities.

On the one hand, this book consists of speculation stacked on speculation. On the other hand, each step progresses from features of physics or biochemistry that we know to be true. Between the spheres of quantum physics and the human mind lies the world of biology: I continue to look for arguments and evidence that biological systems have features that can bridge these realms. This book was a fine effort along this line.

Wednesday, October 18, 2006

Quantum Ontology and Whitehead

The current issue of the Journal of Consciousness Studies included an interview with Henry Stapp by Harald Atmanspacher (here’s my most recent post about Stapp; Atmanspacher is also a physicist interested in consciousness). I’ve been a bit skeptical in the past regarding Stapp’s specific proposals for how quantum effects are implemented in the human brain, but I mostly agree with his metaphysical views, including the connections he draws between the ontology of quantum mechanics and that of Whitehead. Below is one paragraph from the interview which I thought captured this well.

The natural ontology for quantum theory, and most particularly for relativistic quantum field theory, has close similarities to key aspects of Whitehead's process ontology. Both are built around psycho-physical events and objective tendencies (Aristotelian “potentia”, according to Heisenberg) for these events to occur. On Whitehead's view, as expressed in his Process and Reality (Whitehead 1978), reality is constituted of “actual occasions” or “actual entities”, each one of which is associated with a unique extended region in space-time, distinct from and non-overlapping with all others. Actual occasions actualize what was antecedently merely potential, but both the potential and the actual are real in an ontological sense. A key feature of actual occasions is that they are conceived as “becomings” rather than “beings” -- they are not substances such as Descartes' res extensa and res cogitans, or material and mental states: they are processes.

Thursday, October 12, 2006

Caution: Universe under Construction

Causal Dynamical Triangulations (CDT) is another research program in quantum gravity which features causality at the fundamental level. Renate Loll has a nice overview of the work she and her collaborators are pursuing. I read the paper “The Universe from Scratch” authored by Loll, Jan Ambjorn, and Jerzy Jurkiewicz. Helpfully, this particular paper was written with the intent to be accessible to those outside the field. The “claim to fame” of the CDT approach is that, in computer simulations, its microscopic quantum spacetime model exhibits four dimensions at macroscopic scales.

CDT is similar to Loop Quantum Gravity (LQG) in a couple of respects. It seeks to quantize the gravitational degrees of freedom in a background independent and non-perturbative manner. One difference from the historical development of LQG is that LQG first used spin networks to create a non-dynamical structure, and then sometime later spin-foam models were developed which used a path-integral approach to evolve the networks. CDT incorporates the path integral at the outset as its “most important theoretical tool”.

The idea behind the path integral is to create superpositions of all the “virtual” paths or configurations which the spacetime degrees of freedom (metric field variables from General Relativity) can follow as time unfolds. This sum over all the possible configurations is the quantum spacetime.

The second key idea is to constrain the set of geometries which contribute to the sum to those which implement causality. There had been earlier approaches similar to CDT referred to as “Euclidean path-integral approaches to quantum gravity” which lacked this feature and did not exhibit the right dimensionality at the macro level.

Before discussing the causal constraint specifically, Loll et al. outline the general method for how the class of “virtual” paths should be chosen. As in quantum field theory, one needs some way to constrain or “regularize” the paths, so you don’t get wildly divergent outcomes. CDT's method in the context of spacetime geometry is to use “piecewise flat geometries” which are flat except for local subspaces where curvature is concentrated. The geometry used is a “triangulated” space (or Regge geometry). These “triangular” flat structures are “glued” together, so curvature only appears at the joints (they give a 2D pictorial example to show how curvature arises when you glue these together). The motivation for this approach, the basic elements of which are not new, is that it is an economical way to build a discrete spacetime. Later, in finding a path integral, they take the short-distance cutoff to zero, which they say yields a final theory that does not depend on many of the arbitrary details which went into the construction.
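Schematically, the regularized object they construct is a sum over causal triangulations weighted by the discretized (Regge) action; this is my paraphrase of the generic formula, not a quotation from the paper.

```latex
% Schematic CDT partition function: a sum over causal triangulations T built
% from flat simplices of edge length a, weighted by the Regge form of the
% Einstein-Hilbert action; C(T) is the symmetry factor of the triangulation.
Z(a) \;=\; \sum_{\text{causal } T} \frac{1}{C(T)}\, e^{\, i S_{\mathrm{Regge}}[T]},
\qquad \text{with the continuum limit taken as } a \to 0 .
```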

So, with the geometries defined in this way: what ensemble of these should be included in the sum?

This is where causality comes into play. They again mention previous efforts which involved 4-dimensional Euclidean space, not 3+1 Lorentzian spacetime. It turns out there is no straightforward relationship between a path integral over Euclidean geometries and one over Lorentzian geometries. So CDT needs to encode the causal Lorentzian structure right into the building blocks at the outset. If you impose appropriate causal rules, you can get four-dimensional spacetime to dominate at large scales. If this mirrors reality, it suggests that causality at sub-Planckian scales is what is responsible for the existence of 4D spacetime.

So what are the causal rules in the CDT approach? “They are simply that each spacetime appearing in the sum over geometries should have a specific form. Namely, it should be a geometric object which can be obtained by evolving a purely spatial geometry in time, in such a way that its spatial topology (the way in which space hangs together) is unchanged as a function of time (emphasis added).” The authors have used computer simulations to model the nature of this spacetime at different scales, which led to the four-dimensional shape emerging (it is not at this stage an analytical result). Now, one might naively ask why it is a big deal to get four dimensions to emerge when the microscopic building blocks were themselves four-dimensional (3+1). However, the result was far from predictable given the complex fluctuations and divergences generated by the quantum superpositions: in previous “Euclidean” versions, the dimensionality at larger scales would vary all over the place even though the building blocks were 4D-spatial.
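The dimensionality probed in such simulations is typically a spectral dimension, read off from how quickly a diffusing random walker returns to its starting point. The sketch below is purely illustrative and my own: it runs the diffusion on a flat periodic lattice rather than on an actual CDT ensemble, just to show the diagnostic itself.

```python
import numpy as np

# Spectral dimension from diffusion: on a d-dimensional geometry the return
# probability of a random walker falls off as P(sigma) ~ sigma^(-d_s/2), so
#   d_s = -2 * d ln P / d ln sigma.
# Here the diffusion is evolved exactly on a flat periodic lattice, only to
# illustrate the diagnostic applied to quantum geometries in the literature.

def return_probabilities(dim, sigmas, size=31):
    """Return probability P(origin, sigma) for a nearest-neighbour walk."""
    p = np.zeros((size,) * dim)
    p[(0,) * dim] = 1.0                       # walker starts at the origin
    probs = {}
    for sigma in range(1, max(sigmas) + 1):
        # One diffusion step: hop to each of the 2*dim neighbours with equal weight.
        p = sum(np.roll(p, shift, axis=ax)
                for ax in range(dim) for shift in (1, -1)) / (2 * dim)
        if sigma in sigmas:
            probs[sigma] = p[(0,) * dim]
    return probs

sigmas = (20, 40)   # two (even) diffusion times to take a finite difference
for dim in (2, 4):
    pr = return_probabilities(dim, sigmas)
    d_s = -2 * np.log(pr[40] / pr[20]) / np.log(40 / 20)
    print(f"lattice dimension {dim}: estimated spectral dimension ~ {d_s:.2f}")
```

On a fixed flat lattice this just recovers the obvious answer; the nontrivial CDT result is that dimension estimates of this general kind, applied to the fluctuating quantum geometry, come out near four at large scales.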

The authors then discuss issues involving ongoing efforts to investigate other features of the spacetime model beyond dimensionality to see if they are consistent with gravity. They also discuss their hope that distinctively quantum gravitational cosmological predictions could be derived from the model.

An issue I have concerns the role of time in this model. It seems that a single time dimension is in place for all of the building blocks. This kind of global time directionality is philosophically less appealing compared to an approach which implements a strictly local time at the microscopic level (it also seems less consistent with relativity).

Monday, October 09, 2006

Emerging from the Noise

In “Towards Gravity from the Quantum”, Fotini Markopoulou (her homepage can be reached here if you click through on 'Faculty' then her name) describes work on a new approach to quantum gravity. This effort differs from other background independent approaches in that instead of seeking to quantize spacetime geometry, one starts with a microscopic description of a pre-spacetime quantum theory. Then, from this foundational theory, emerge “dynamically-selected excitations, that is coherent degrees of freedom which survive the microscopic evolution to dominate our scales.” The filter which enables emergence is borrowed from a process studied in quantum information theory. The plan, finally, is to define spacetime in terms of the interactions of these emergent excitations.

The pre-spacetime microscopic theory uses the formalism of quantum causal histories (“QCH” – which I introduced at the end of my prior post). QCH is “a locally finite directed network (graph) of finite-dimensional quantum systems.” There have been several ways QCH has been used to approach quantum gravity, which are reviewed in the paper. The emphasis is on this new approach which develops it as a “quantum information processor, which can be used as a pre-spacetime theory.”

In QCH (described in section 1.2 of the paper), the geometric paths followed in the graphs are constrained to be finite. Then quantum systems are associated with the graph. As I mentioned in the last post, early efforts attached Hilbert spaces to the relations/edges on the graphs. More recently, QCH uses a description of the evolution of an open quantum system called a “completely positive map” (alternatively, a “quantum channel”) to define the relations/edges between the vertices, which are (finite dimensional) Hilbert spaces and/or matrix algebra of operators acting on a Hilbert space.

So, for every edge on the graph, there is a completely positive map or quantum channel (an evolution from one vertex/Hilbert space to another).
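To make the machinery concrete, here is a toy fragment of such a structure; this is my own illustration using a textbook amplitude-damping channel, not Markopoulou's construction, just the standard Kraus-form CP-map formalism she is borrowing.

```python
import numpy as np

# Toy "quantum causal history" fragment: two vertices, each a qubit Hilbert
# space, joined by one directed edge carrying a completely positive
# trace-preserving (CPTP) map written in Kraus form. The channel here is an
# amplitude-damping channel, chosen only because it is a familiar example.

gamma = 0.3                                          # damping strength (illustrative)
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])     # Kraus operators of the edge map
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

def apply_channel(rho, kraus_ops):
    """Evolve a density matrix along an edge: rho -> sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

rho_source = np.array([[0, 0], [0, 1]], dtype=complex)   # source vertex holds |1><1|
rho_target = apply_channel(rho_source, [K0, K1])         # state assigned to the target vertex

print("state at target vertex:\n", rho_target.round(3))
print("trace preserved:", np.isclose(np.trace(rho_target).real, 1.0))
# CPTP completeness condition on the edge map: sum_i K_i^dagger K_i = identity.
print("CPTP condition:", np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2)))
```

A full QCH would be a whole directed graph of such maps, subject to the causality constraints described next.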

I will skip over some of the details of this construction, but it looks like the other important part is to impose constraints on the evolution that preserve local causality in the spirit of the original causal set theory. Specifically, a source set for a given path(s) maps uniquely to a range map, thus imposing local causality when the edges are viewed as causal relata.

So, we have in QCH a (relatively) simple structure of open quantum systems, which form a local causal network.

Markopoulou describes several ways this structure has been used. It can be the basis of a discrete algebraic quantum field theory (section 1.3). More to the point, another way is to use it to create a path toward quantum gravity by taking quantum superpositions of the geometries defined by the model (section 1.4). This methodology has been part of the development of spin foam models as well as Causal Dynamical Triangulations (“CDT” -- which will be the subject of my next post). To make a causal spin foam model, you adapt QCH to spin network graphs, which are used in loop quantum gravity. Then you obtain a path integral of the superpositions of all constrained members of the set.

She discusses next the problems spin foam models have had recovering spacetime at lower energy scales (she notes briefly that the CDT model has had more success due to its unique features). The background independence and the difficulty of implementing dynamics make it difficult to apply the coarse-graining techniques used elsewhere to recover a sensible low energy physics from these models.

The approach she wants to focus on is built on the idea that instead of summing over quantum geometries and trying to coarse-grain directly, one can first look for long-range propagating degrees of freedom that arise from the quantum systems and then look to reconstruct the geometry from these.

She borrows from quantum information processing theory the notion of a “noiseless subsystem” in quantum error correction: a subsystem protected from the noise, usually thanks to symmetries of the noise (section 1.5). The analogy is that the “noise” is simply the fundamental microscopic evolution and the existence of a noiseless subsystem means a coherent excitation protected from the microscopic evolution. So we split the paths (quantum channels) into subsets A and B, where B is noiseless. B is an emergent subsystem (similar to the idea of a “decoherence-free” subsystem). I didn’t follow all the formalism describing noiseless subsystems. But I infer it’s a topic well discussed in quantum information theory.
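A standard toy example of the underlying idea (mine, not from the paper) is a decoherence-free subspace under collective dephasing: when the same random phase hits two qubits, the single-qubit phases cancel on |01> and |10>, so a logical qubit encoded in that subspace rides out the noise while states outside it decohere.

```python
import numpy as np

# Decoherence-free (noiseless) subspace toy model: collective dephasing applies
# the same random phase to both qubits, U(phi) = exp(i*phi*Z) (x) exp(i*phi*Z).
# On |01> and |10> the two single-qubit phases cancel, so superpositions of
# them are untouched by the noise; superpositions of |00> and |11> are not.

def collective_dephasing(phi):
    u = np.diag([np.exp(1j * phi), np.exp(-1j * phi)])   # exp(i*phi*Z) on one qubit
    return np.kron(u, u)                                 # identical phase on both qubits

rng = np.random.default_rng(1)

protected = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)    # (|01> + |10>)/sqrt(2)
unprotected = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

def average_fidelity(state, n_samples=2000):
    """Average overlap with the initial state after a random collective phase."""
    total = 0.0
    for phi in rng.uniform(0, 2 * np.pi, n_samples):
        total += abs(np.vdot(state, collective_dephasing(phi) @ state)) ** 2
    return total / n_samples

print("protected (noiseless) state fidelity:", round(average_fidelity(protected), 3))    # ~1.0
print("unprotected state fidelity:          ", round(average_fidelity(unprotected), 3))  # ~0.5
```

The analogy in the paper is that the fundamental microscopic evolution plays the role of the noise, and the emergent, propagating degrees of freedom play the role of the protected subsystem.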

Next, she investigates the idea that we have an emergent spacetime if these emergent coherent excitations behave as though they are in a spacetime. This subset of protected degrees of freedom (or coherent excitations) and their interactions will need to be invariant under Poincare transformations.

In preparation for the next section she works through some formalism to show more clearly how the sum over causal histories does include some of these coherent excitations. These turn out to be “braidings” of graph edges which are unaffected by the noise of evolution.

In section 1.6, Markopoulou describes her model whereby QCH is a pre-spacetime quantum information processor from which degrees of freedom emerge; interactions of these are theorized to be the events of our spacetime. (There are no separate gravitational degrees of freedom to be quantized).

She argues this model demonstrates a deeper form of background independence compared to other theories, since the microscopic geometric degrees of freedom don’t survive as part of the description of the emergent spacetime.

The QCH quantum channels (graph edges) referred to before should now be viewed as information flows between quantum systems (vertices) with no reference to having spatio-temporal attributes at all.

Then, one analyzes the emergent coherent degrees of freedom (noiseless subsystems) and their interactions. She proposes that these can constitute an emergent Minkowskian spacetime if they are Poincare invariant at the relevant scale.

Important to this possibility is that the noiseless subsystems are not localized: they exhibit a global symmetry which allows them to be emergent at larger scales. They constitute their own “macro-locality”, which is unrelated to the original micro-locality of the QCH graphs. Markopoulou outlines the promise and possible shortcomings of this approach, and says more work is underway to develop these ideas. Importantly, she and her collaborators have not gotten gravity (Einstein’s equations) back out yet. On the other hand, when he mentioned this work in his book, Lee Smolin seemed excited by the idea that the emergent “particles” from this kind of approach might have the potential to lead to particle theory in addition to gravity (most background independent approaches to quantum gravity set aside the problem of matter fields at least initially, in their pursuit of gravity).

She finishes with a note on time:
Just as the emergent locality has nothing to do with the fundamental micro-locality, time and causality at the macro-level will also be unrelated to their micro-level counterparts. So, the theory “puts in” time at the micro-level (via its causality constraints), but the emergent spacetime will have no preferred time slice -- as required by general relativity.

I think this distinction between microscopic and macroscopic time has interesting implications for thinking about causality. It is suggestive that a theory like Markopoulou’s implies that while causality is fundamental at the local level, the macroscopic “laws of physics” are emergent regularities.


Wednesday, October 04, 2006

Causality First

As discussed in my last post, Lee Smolin concluded that the most promising models of quantum gravity include causality as a fundamental feature. In the next couple of posts, I’ll outline some notes from my reading regarding how this idea has been developed in several research programs. Below are brief notes on causal set theory and quantum causal histories. As always, I will make mistakes in my efforts to summarize some of the material, so please check out the papers themselves if you have interest.

Causal Sets

A very nice paper (actually lecture notes) by Rafael Sorkin was helpful in summarizing causal set theory and its application to spacetime (hat tip: Dan Christensen’s page of quantum gravity links).

As he tells the story, the idea of seeing the causal order of relativistic spacetime as its most fundamental aspect was present in the earliest days of Einstein’s theory. There were early efforts which made some headway in managing to recover the geometry of a flat four-dimensional spacetime from nothing more than the underlying point set and the timelike order among points. The fact that the points in the manifold constitute a continuum proved to be an obstacle to this methodology. But of course the development of quantum mechanics gave good independent motivation to consider discrete models of spacetime geometry. In a discrete model, causal order plus the counting of the discrete volumes offers the potential of recovering spacetime geometry.

According to Sorkin: “The causal set idea is, in essence, nothing more than an attempt to combine the twin ideas of discreteness and order to produce a structure on which a theory of quantum gravity can be based.”

The basic idea of a causal set (or “causet”) is easy enough for a layperson to understand. Sorkin defines (p.6) a causal set as a locally finite ordered set: A set endowed with a binary relation possessing 3 properties – transitivity, irreflexivity, and local finiteness (which implies discreteness). The combination of transitivity and irreflexivity rules out cycles where the timelike vector can loop back to its beginning. The set could be depicted as a graph (with the elements as vertices and the relations as edges) or as a matrix, or it can be helpful to think of it as a “family tree”. The relation between elements is one of “precedes” or “lies to the past of”, etc.

Sorkin goes on to summarize work which looks to recover the elements of spacetime by analyzing the kinematics of a causal set. As an example, he says it is known that the length of the longest chain “provides a good measure of the proper time (geodesic length) between any two causally related elements of a causet that can be approximated by a region of Minkowski space.” He discusses how to reconstruct other constituents of Minkowski space (M4) from its causal order and volume-elements. He then talks about recovering more geometric information, such as dimensionality. This work seemed to be less definitive in its results.
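To make the structure concrete, here is a minimal sketch of my own (not Sorkin's code or notation): a four-element "diamond" causet, a check of the defining axioms, and the longest-chain count used as the proxy for proper time.

```python
from itertools import product

# A tiny causal set ("causet"): elements with a strict partial order read as
# "lies to the past of", stored as ordered pairs (x, y) with x preceding y.
elements = {"a", "b", "c", "d"}
precedes = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("a", "d")}   # a causal "diamond"

def is_causet(elements, precedes):
    """Check the defining axioms: irreflexivity and transitivity.
    Local finiteness is automatic here because the whole set is finite."""
    irreflexive = all((x, x) not in precedes for x in elements)
    transitive = all((x, z) in precedes
                     for (x, y), (y2, z) in product(precedes, precedes) if y == y2)
    return irreflexive and transitive

def longest_chain(x, y):
    """Number of links in the longest chain from x to y; in a causet well
    approximated by Minkowski space this tracks the proper time between them."""
    if x == y:
        return 0
    best = -1
    for z in elements:
        if (x, z) in precedes and ((z, y) in precedes or z == y):
            sub = longest_chain(z, y)
            if sub >= 0:
                best = max(best, 1 + sub)
    return best

print("valid causet:", is_causet(elements, precedes))       # True
print("longest chain a -> d:", longest_chain("a", "d"))     # 2 links (a-b-d or a-c-d)
```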

Next, he discusses routes toward causal set dynamics, and hopefully a quantum causal set dynamics. Sorkin describes an idea where you create a classical stochastic evolution of the set in a (global) time direction, and modify the results to create a quantum dynamics (this part I don’t yet understand).

Some cosmological applications are discussed, the most exciting of which (mentioned by Smolin in his book) is a correct order of magnitude prediction for the cosmological constant which emerges from the model. Given these results, Smolin expects the causal set research program to be an ongoing part of the background-independent approaches to quantum gravity.

Quantum Causal Histories

Causal Set theory is discrete, but at its roots not distinctively quantum (as I read it).

Quantum Causal Histories (QCH) is a model which welds quantum features to the causal elements. As described by Fotini Markopoulou (her home page is reached by clicking through to "faculty" then her name on this Perimeter Institute link) in this neat little 6-page review, the idea is to “quantize” the causal structure by attaching Hilbert spaces to the events of a causal set. “These can be thought of as elementary Planck-scale [quantum] systems that interact and evolve by rules that give rise to a discrete causal history.”

Actually, she discusses the fact that the finite-dimensional Hilbert spaces, following the rules of quantum mechanics, would not in general respect local causality if attached to events. Instead she says one should attach them to the causal relations (edges on a graph), with operators put on the events (nodes or vertices). Then the quantum system evolution respects local causality. There is an intuition here also that an event denotes a change, and so fits with the notion of an operator. Note also that when you link quantum systems together like this, there is no global Hilbert space or wavefunction for the whole system: if these building blocks are built up into a cosmological model, there would be no wavefunction for the universe, but a collection of local ones. By the same token, there is no observer of a global quantum system outside the universe; all the observers are on the inside.

There are different ways to link the spaces up (different kinds of graphs). One way is to use the spin networks, as in loop quantum gravity. When spin networks are used in a model of the causal evolution of quantum spatial geometry, the nodes of the spin network graph are the events in a causal set. Markopoulou details this notion and other ways QCH can be modeled. I should note that in later papers, the QCH structure appeared to be refined further; for instance this paper refers to the substituting of matrix algebras of operators for the Hilbert spaces.

This short paper concludes with references to work underway (this was a 1999 paper) to use QCH as a structure for a quantum gravity model. I will come back to QCH as part of a subsequent post with my notes on a much more recent paper by Markopoulou featuring work mentioned in Smolin’s book.

Wednesday, September 27, 2006

What's New in Quantum Gravity

I read Lee Smolin’s new book, The Trouble with Physics. I’m not going to review it in detail (here is a good review; UPDATE 3 Oct.: see also the review up at Cosmic Variance), but will instead focus below on the section where he discusses some newer approaches to quantum gravity.

The largest part of the book diagnoses the reasons for the slow progress in solving the big outstanding problems in theoretical physics. (Some of my earlier posts on this topic and referencing Smolin are here, here and here). The focus is on his perception of the shortcomings of string theory and also on the “sociological” issues which have led to string theory’s dominance of the field. He also makes suggestions for improving the situation so the next generation of physicists can make better progress.

While the focus is on string theory, it is also clear that the Loop Quantum Gravity program (with which Smolin has been most identified) has also made only slow progress – if it had been more successful, this book would not exist.

While these parts of the story left me a bit depressed, I still recommend the book for those interested in the topic: Smolin is a great writer and is well positioned to speak on the issues. (The book is frequently being reviewed in conjunction with Peter Woit’s new book, which I plan to read; in the meantime I continue to follow Woit’s interesting blog.)

Amidst the gloominess, Smolin does make some optimistic comments about the development of alternative background-independent (BI) approaches to quantum gravity (Chapter 15). In BI approaches, he says, one does not “start with space, or anything moving in space.” Instead, one starts with an abstract quantum mechanical structure, then looks for spacetime to emerge at larger scales. Early attempts proceeded by quantizing Einstein’s spacetime geometry directly (in the spirit of quantizing the classical electromagnetic field). These didn’t work, with the main problem being the generation of infinities in the expressions. A more sophisticated model, that of loop quantum gravity, has created finite outcomes, but (if my understanding is right) it works by encoding the spacetime of relativity directly into a quantum geometry. The complexity which comes from including the quantum states of all the geometric degrees of freedom in the model has made it difficult to get the dynamical four-dimensional spacetime back out again. [UPDATE (29 Sept.): See a few additional notes on LQG below in the comments.]

So, now, the idea is to take the BI approach even deeper and construct a quantum “pre-spacetime” theory. Looking at these approaches, Smolin also concludes they must include causality as a fundamental feature. In relativity, the light-cones implement a causal structure: you can tell which events precede or succeed others from a given reference frame. While we usually might think in terms of spacetime imposing the causal structure, Smolin says you could turn this around and say that causality determines the spacetime geometry. In this spirit, many researchers in quantum gravity now think causality is fundamental, and must feature in the construction of a pre-spacetime theory.

Here is Smolin’s summary of his meta-thoughts on quantum gravity theories:

“The most successful approaches to quantum gravity to date combine these three basic ideas: that space is emergent, that the more fundamental description is discrete, and that this description involves causality in a fundamental way. (emphasis original)”

Note he doesn’t say time is emergent. If causality is fundamental, then some notion of time is fundamental. However, it wouldn’t be a global time dimension we’re talking about: time would be localized at the level of the building blocks.

The chapter includes a survey of a number of approaches; however, the two newer ideas which Smolin seemed most enthusiastic about were a model called Causal Dynamical Triangulations and a new take on Quantum Causal Histories which utilizes an idea from quantum information processing to make spacetime emerge from a pre-spacetime reality. I will follow up with an attempt to look a bit more closely at these models.



Wednesday, September 13, 2006

The Russellian Stance & Concluding Thoughts

(This is the final post in a series; for background, see the previous posts.) There is a well-known philosophical argument regarding the limits of our knowledge of physical objects and the relevance of this for the problem of consciousness. This argument is most prominently associated with Bertrand Russell, although there are connections with the work of many other philosophers. The argument has two parts. The first part asserts that we only know physical objects by their dispositional, relational, or extrinsic nature. Our knowledge of physical phenomena leaves untouched their categorical, non-relational, or intrinsic nature. In his book, Daniel Stoljar refers to this as the categorical argument. The second part of the argument draws a connection between the (hidden) intrinsic aspect of physical phenomena and the seemingly intrinsic quality of the phenomenal properties of first-person experience. (For other discussions online, see the “Type-F monism” section of this Chalmers paper; an earlier treatment of the view -- “o-physicalism” -- in this Stoljar paper; and, for a more extended treatment of related philosophical theories, see this SEP article by Leopold Stubenberg. Also, Chapter 2 of Gregg Rosenberg’s book employs a creative argument in this vein).

Stoljar briefly gives his own account of the categorical argument. He divides physical truths into three exhaustive categories: spatio-temporal truths, truths about secondary qualities, and truths about primary qualities (note that properties from our physical theories like mass and charge would fall into this grouping). He discusses ways these individually or in combination can be seen to fail to tell the whole story about physical objects and specifically only tell a dispositional story about physical things. While some of the details could be contentious, the categorical argument's conclusion appears plausible.

There are two ways to take this argument further. For the first, note how the terrain here overlaps the discussion of the “structure and dynamics” objection which was the topic of my earlier post (although in Stoljar’s book, it comes in a later chapter). One could argue that the categorical argument leads directly to this thesis (Interpretation 1):

1. All non-experiential truths are (and will always be) dispositional.
2. Experience concerns categorical truths.
3. Therefore, non-experiential truths cannot entail experiential truths.

Stoljar’s epistemic thesis would be false – there cannot be non-experiential truths of which we are ignorant which are relevant to experience.

I should note that for a composite system (like a human), it can be said that our experience depends on both dispositional and categorical truths rather than exclusively categorical truths; but the above argument would still hold.

Stoljar argues differently. He wants to extend the categorical argument into a "categorical ignorance hypothesis" akin to, but more specific than, his own view. The key to this is an assertion that non-experiential truths could encompass both dispositional and categorical physical truths. Just because we don’t know of any examples of non-experiential categorical truths doesn’t mean there aren’t any. Note that Stoljar wouldn’t think it relevant even if our ignorance of this latter putative type of truth was chronic. Given this assumption, we have Interpretation 2:

1. Known non-experiential truths are dispositional
2. We are ignorant of categorical truths
3. Categorical truths may be both non-experiential and relevant to experience.

The resulting “Russellian version” of Stoljar’s epistemic view does dissolve CA and KA, and thus solves the logical problem of consciousness. (Note that Stoljar does not positively recommend this version of the Russellian view, rather he is advancing the more general version of the epistemic argument and is using the Russellian version to show an example of how the more general version could be fleshed out).

Analysis

Stoljar’s thesis is difficult to defeat, because it is often easier to sow doubt about the adequacy of our knowledge than it is to defend positive assertions (like the statement above saying “all non-experiential truths are dispositional”).

In responding to Stoljar, I think the best strategy is to look more closely at how the paired terms non-experiential/experiential, dispositional/categorical, and objective/subjective are used and how they relate to each other. To do this, I will employ what I intend to be a minimal description of the world which must obtain to make sense of how these terms apply to it.

I start with the objective/subjective pair, because I think it is easiest to agree upon what these terms mean. In my last post I argued that objective and subjective truths are exhaustive of all truths. In essence, I’m saying that this is a dichotomy which describes our world. This doesn’t mean the world is described by a fundamental dualism like that of Descartes: both kinds of truth could supervene on an underlying common ground (neutral monism). On the other hand, a pluralism is implied, of course: The world contains individuated natural systems, and one type of truth obtains when a system participates in an event (subjective truth), and another type of truth obtains when such event is indirectly encountered (objective truth). There are as many “subjective” points-of-view as there are natural systems.

If you grant me this much, then the most natural interpretation of experiential truths is to identify them with subjective truths, while non-experiential truths are identified with objective truths. As discussed in the last post, such an interpretation defeats Stoljar’s thesis.

With regard to dispositional versus categorical truths, the terms are a bit more obscure, but the analysis leads to a similar result. A system’s indirect non-participatory knowledge of truths about other systems will only be of dispositional truths. We can only know them by how they interact with further entities. Another way to see this is to use the synonyms “relational” or “extrinsic” as alternative pointers to the same concept. The most straightforward interpretation would likewise identify non-dispositional (i.e. categorical) truths with those known through direct participation. The idea of a “non-experiential categorical truth” appealed to by Stoljar above has no room to fit into this picture and is not a well-motivated concept.

Concluding Thoughts

Stoljar’s book was valuable to me not only for his interesting thesis, but for how useful it was as a touchstone for reflecting on the debates in (analytic) philosophy of mind which I’ve been reading about for around 15 years now. Within this philosophical domain, I have been most persuaded by the views I’ve discussed in these posts: a version of the Russellian stance, bolstered by arguments primarily associated with Chalmers and Nagel.

But, of course, consensus is far off. I haven’t discussed the parts of Stoljar’s book where he argues against competing physicalist perspectives on the problem of consciousness (arguments one agrees with always generate less scrutiny!). But many philosophers still subscribe to these arguments.

It seems that a significant move toward consensus will probably require input from outside philosophy of mind. Even in my “minimal description of the world” outlined above I was essentially “cheating” by bringing in ontological assertions about the existence of natural systems and how they relate to each other. But I think some sort of “triangulation” is necessary to make progress. Consciousness needs to fit into an improved metaphysical portrait. As an example, this is what appealed to me about Gregg Rosenberg’s thesis (see posts here), which combined insights from an analysis of causation and the composition of natural individuals to find the right “place for consciousness” in the world. It is also the reason I continue to be very interested in the philosophical interpretation of foundations of physics.

It seems right that a deeper understanding of consciousness will go hand in hand with a deeper understanding of nature.


Friday, September 08, 2006

The Objection from Objectivity

(This is the second post in a series; please see the previous post for background). The second objection considered by Daniel Stoljar in Chapter 8 of his book employs the idea of objectivity as the “Factor X” which characterizes non-experiential truths. Stoljar paraphrases this objection, which relates to an argument prominently associated with Thomas Nagel, as saying that non-experiential truths are and will always be objective (third-person) truths, and the problem of consciousness will always re-emerge since experiential truths are subjective (first-person) truths. I should declare my antecedent bias here: I have always found this argument extremely persuasive. To invoke Nagel’s example of the bat: even once I know all the objective truths about a bat, what it is like to be the bat is a further truth.

Now the way this objection is framed (that is, non-experiential truths are those known via the objective point of view and experiential truths are defined as those known through the subjective point of view) poses a challenge for Stoljar. He cannot argue against the following statement:

1. Even if you were to know all the objective (non-experiential) truths you would still not thereby know the subjective (experiential) truths.

Stoljar notes that this statement is analytic and flows from the terms of the objection. So he will try to form a reply by arguing that the following seemingly natural inference from statement #1 actually does not follow:

2. Even if you were to know all the objective facts, there will still appear to you to be an element of contingency in the relation between the objective and subjective facts.

It is this appearance of contingency or lack of entailment that is the basis for a third statement:

3. CA and KA will continue to be forceful no matter how many objective facts we learn.


Stoljar wants to reject the inference from #1 to #2. His first strategy for doing this is to set up a parallel counterexample thus:

4. John is in pain

This, he explains, is clearly a subjective truth. And since #4 is subjective, its negation must be as well:

5. John is not in pain

Next we have:

6. If John is a number, then he is not in pain.

Note this is a necessary truth. And since the antecedent of #6:

7. John is a number

is an objective truth, we have an example where an objective truth entails a subjective truth. Putting this all in a form which mimics #1 and #2 above, we have:

8. Even if I were to know that John is a number, I would not thereby know that he is not in pain.

9. Even if I were to know that John is a number, there would still appear to me to be an element of contingency in the relation between John’s being a number and his not being in pain.

The inference from #8 to #9 is invalid, since #8 is true and #9 is false. Since this parallels #1 and #2, Stoljar says that inference is invalid as well. When I read this the first time, I found the form of the counterexample fine, but its content was so different from that of #1 and #2 that I didn’t feel its force. I should also note he says we could use the slightly less strange “If John does not exist, then he is not in pain” as a substitute for statement #6. But Stoljar senses the reader will need more help to flesh this all out.

So, continuing the discussion, Stoljar says he wants to distinguish the idea that the objective facts entail the subjective facts from the idea that understanding all the objective facts entails an understanding of the subjective ones. He introduces the idea that {if A then B} could be necessary but not synthesizable. Here’s his definition of synthesizable: “{If A then B} is synthesizable if and only if in every possible world in which S understands A, then S understands (or is in a position to understand) B – for short, if and only if understanding A entails understanding B.” Stoljar says that statement #1 above shows the subjective facts are not synthesizable from the objective facts, but that it does not entail statement #2 – the relation could still be necessary and could come to appear to us as necessary, given a future lifting of the veil of ignorance. To show that something could be not synthesizable but still appear necessary, Stoljar offers further examples such as this: if x is colored, then x is extended. Understanding what it is for x to be colored doesn’t entail understanding what it is for it to be extended, but we can see that the conditional statement indeed describes a necessary relation given our possession of all the relevant facts. Again, in reading this, I found the content of Stoljar’s examples to be different enough from that of #1 and #2 that I began to contemplate what could be faulty about the parallel.
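
To make the contrast vivid, here is a minimal rendering in modal notation (my own shorthand, not Stoljar’s formalism), where $U_S(X)$ abbreviates “S understands (or is in a position to understand) X”:

\[
\text{Necessity:}\quad \Box\,(A \rightarrow B)
\qquad\qquad
\text{Synthesizability:}\quad \Box\,\big(U_S(A) \rightarrow U_S(B)\big)
\]

The claim is that the first can hold while the second fails: the conditional may be necessary even though understanding the antecedent never puts one in a position to understand the consequent.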

For me, despite the examples, it was difficult to see how one could ever concede the non-synthesizability of the subjective facts from the objective facts and yet come to find them appearing nonetheless entailed by them. But – and here again is what makes the epistemic view slippery – Stoljar might say my feeling on this is just due to my ignorance! Can I flesh out a specific disanalogy in Stoljar’s examples which makes them lose force? I’ll give it a try.

The difference is that in the original case the two types of truths (objective and subjective) are presumed by definition to be exhaustive of all truths. In Stoljar’s examples, the additional facts which led us to see that {if A then B} could be necessary even when not synthesizable invoked other types of facts beyond the types represented by A and B. In the case of counterexample #8 and #9, I need to know about abstract and concrete objects and what kinds of objects can have pain. These facts were the “new” facts beyond just knowing everything there was to know about the antecedent “numberish” facts. But in the original objection, there are no new types of facts beyond the objective facts that could help me erase the appearance of contingency, because objective and subjective facts exhaust all facts.

I would note that I am not criticizing the distinction between a relation of necessity and one of synthesizability per se. Stoljar has used this effectively elsewhere (as in this paper). I just think it doesn’t do the job in this particular case.

Now I’d be grateful if anyone finds fault with my analysis here. For now, though, the reply Stoljar offers in this section was a rare passage in a clearly argued book which I found unconvincing.

Conclusion:

The somewhat complicated reply to the objection from objectivity was unconvincing to me. I think there is still room to believe that the evident dichotomy of all truths into subjective and objective truths remains a key obstacle to dissolving the logical problem of consciousness in favor of a materialist stance.

Wednesday, September 06, 2006

The Objection from Structure and Dynamics

This is the first in a series of three posts which take a closer look at certain sections of Daniel Stoljar’s book Ignorance and Imagination: The Epistemic Origin of the Problem of Consciousness. In his Chapter 8 he discusses and replies to 2 possible objections to his thesis, and these will be the focus of this post and the next. In another follow-up post I want to discuss his treatment of the Russellian (“neutral monist”) stance.

Backdrop:

To review, Stoljar usefully frames the logical problem of consciousness as follows. We have reason to endorse each member of an inconsistent triad of statements:

1. There are experiential truths.
2. If there are experiential truths, every experiential truth is entailed by some non-experiential truth.
3. If there are experiential truths, NOT every experiential truth is entailed by some non-experiential truth.
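
In schematic form (my own shorthand, not Stoljar’s notation), with $E$ the set of experiential truths and $N$ the set of non-experiential truths, the triad reads:

\[
(1)\ \ E \neq \varnothing
\qquad
(2)\ \ E \neq \varnothing \;\rightarrow\; \forall e \in E\ \exists n \in N\ (n \Rightarrow e)
\qquad
(3)\ \ E \neq \varnothing \;\rightarrow\; \neg\,\forall e \in E\ \exists n \in N\ (n \Rightarrow e)
\]

Given (1), the consequents of (2) and (3) contradict each other, which is why at least one member of the triad has to go.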

Briefly and roughly, experiential truths are truths about events of phenomenal and subjective character – events it is like something to undergo. Stoljar (rightly in my opinion) takes #1 above to be obvious and does not spend much time on it. We have reason to believe #2 above given the track record of explaining manifest or folk truths in terms of scientific descriptions (which, to date, all seem to coherently supervene on ground-level truths of physical theory). We might reasonably expect a correct treatment of experience in terms of non-experiential scientific truths to follow in due course. Someone holding to the philosophical position of physicalism (or materialism, to use the older term) accepts #2 and rejects #3. As an aside, I think Stoljar’s choice of using the term “non-experiential” rather than “physical” where possible is helpful, since it is always possible to get distracted in defining “physical” – although there are times when you need to revisit this.

We have reason to believe #3 above due to a number of related arguments, especially those known as the conceivability and knowledge arguments (“CA” and “KA”). For the purposes of my posts, I will presume familiarity with these; for a quick reference on these as well as for the objection to Stoljar considered below, see David Chalmers’s paper Consciousness and its Place in Nature (this is also the paper which has his useful taxonomy of positions on the problem – Stoljar’s would be a form of “Type-C” materialism); for a more recent extended treatment of CA from Chalmers, see this paper. Dualists and other critics of physicalism accept #3 and reject #2.

In his book, Stoljar considers and rejects criticisms of CA/KA in the literature with the exception of his own, which he argues is successful. His (disarmingly modest-seeming) stance is an epistemic one, arguing CA/KA fail because we are ignorant of a type of “experience-relevant non-experiential truth”. It is only this ignorance which gives CA and KA their force, and if all the facts were known, we would all reject #3 and accept #2.

In his Chapter 8, Stoljar considers some possible objections and offers replies. He gives a general form of objection (a master argument) as follows (some paraphrasing): we know we’re ignorant of many things, but we know enough to know that any relevant non-experiential truth is going to be characterized by some Factor X. As a result, the problem which currently makes #3 reasonable will always re-emerge when we consider these new truths.

Now, let me digress and admit that in similar discussions in the past I might have already given in to the temptation to stop and protest: “How can it be that experiential truths can ever be entailed by (or ‘emerge’ from) non-experiential truths? Isn’t the idea of an ‘experience-relevant non-experiential truth’ incoherent?” Similar sentiments abounded in the Galen Strawson paper I praised and linked to a short while ago (btw, Stoljar has recently posted a response to the Strawson paper). But, in terms of Stoljar’s explicitly epistemic argument, one is not entitled to this response at this point. If the view is correct, then the notion only seems incoherent because of my ignorance. So, we must soldier on and put more flesh on what it is about non-experiential truths (Factor X) which has made and will continue to make them incapable of appearing to entail experiential truths.

The Objection from “Structure and Dynamics”

The first candidate for Factor X Stoljar considers, stemming from Chalmers, invokes the idea that physical descriptions invariably characterize phenomena in terms of “structure and dynamics” (sometimes “structure and function” – see the first paper linked to above as well as Chapter 3, section 2 of The Conscious Mind). For any structural or functional explanation of consciousness, we’ll always be able to further ask why such a structure or function is accompanied by experience. Experience will not be entailed by the structural/functional explanation.

Stoljar, reading and interpreting Chalmers, says structure involves either a spatiotemporal relation or the property of playing a certain causal role – in other words what philosophers might also call dispositional or relational properties. Dynamics refers to how the system changes its dispositional or relational properties through time. So, instead of structure and dynamics, one could say “relations and dispositions”. Yet another way to put it is to say that we are referring to extrinsic properties. It is then asserted that these cannot be said to entail intrinsic properties, and experiential truths concern intrinsic properties. In replying to Chalmers, Stoljar refers to structure and dynamics, but says his reply would still hold force if we utilize these other terms.

Stoljar’s first reply to the objection is to say that while we can categorize physical descriptions in terms of non-experiential structure and dynamics, there are also experiential structure and dynamics. The experiential field has varying degrees of intensity, unity and dispositional qualities (such as a pain getting worse if I move a certain way). So, Stoljar asserts that if one knew everything about structure and dynamics, one would also know about some experiential structure and dynamics. Now, Chalmers might say that knowing about physical structure and dynamics will not help with the distinctively experiential variety of these. But why not? I think Stoljar’s response is effective up to this point.

Chalmers or his stand-in might say next, however, that experiential truths necessarily include another type of truth untouched by structure and dynamics. If so, Stoljar needs to know what this is so as not to beg the original question. A candidate might be intrinsic (non-relational, non-dispositional) properties. Stoljar next replies to this suggestion. He says that while it seems that experiences have intrinsic properties (such as the blueness of an expanse of blue sky), it is really the representation of the sky which has the intrinsic property, not the experience itself. I thought this was a weak response. It invokes a distinct and controversial argument regarding the representational nature of experience which is orthogonal to the debate at hand.

But Stoljar then offers a second reply to this revised version of the objection (to recapitulate: the idea is that experiential truths sometimes involve intrinsic properties, non-experiential truths never do, and you can’t derive intrinsic truths from extrinsic ones). He offers examples such as the following: being a husband is an extrinsic (relational) property of Jack Spratt, and being a wife is an extrinsic property of Jack's wife, but being married is an intrinsic property of them as a pair. So the intrinsic fact about the whole did indeed derive from the relational facts about the parts. One might respond that this part/whole discussion misses the point, since the claim at issue is that it is the extrinsic description of a single thing that will fail to entail the intrinsic nature of that same thing. But Stoljar seems comfortable leaving things here, thinking one can always assert a relational-parts-to-intrinsic-whole move for any complex object, and since our starting point is the experience of a complex human individual, this reply has some force. The objector has just one defense left: he could assert that the world has an intrinsic nature at its most basic indivisible level. At this point, the dialectic stops, since the idea that all physical reality has an intrinsic nature relates to the Russellian view that Stoljar treats separately (as will I).

Conclusion

I think Stoljar’s reply to the objection from structure and dynamics is successful up to a point. He pushes the objector toward arguing for an intrinsic nature at the basic level of the world, rather than for a composite individual such as a human per se.


Friday, August 11, 2006

The Facts Are Not All In

I benefited from reading Daniel Stoljar’s book, Ignorance and Imagination: The Epistemic Origin of the Problem of Consciousness. The book defends the thesis that the problem of phenomenal conscious experience in contemporary philosophy of mind is rooted in our ignorance of a type of “experience-relevant non-experiential truth”. (See the brief blurb from David Chalmers’s blog here and also visit Conscious Entities and scroll down to see the recent review there [UPDATE 22 Feb. 2007: the permalink is here]).

In defending his thesis, Stoljar does a great job reviewing and summarizing many of the debates of the last couple of decades. He invariably takes complicated arguments and simplifies their language and structure as he moves his discussion along. The book is valuable for this aspect alone.

In terms of his position, it’s a fairly modest idea – we’re ignorant about something relevant but of course he can’t say what it is! It does, however, seem a reasonable position to hold. He argues that this epistemic view of the problem is successful in undermining the conceivability and knowledge arguments which conclude that experiential truths are not all entailed by the non-experiential truths. Importantly, though, he is also arguing that none of the various philosophical/conceptual refutations of these arguments work. According to Stoljar the problem of consciousness is not one where the debate can be won “from the armchair” within the confines of philosophy of mind at the present time.

I’m sympathetic to this since I also think we need to triangulate on the problem using other considerations. At the same time, given the imperfect terms of the debate, I don’t think Stoljar is entitled to the conclusion that our ignorance is specifically of non-experiential truths. It is equally plausible, for instance, that the new truths would not be considered “non-experiential”, but would instead be truths upon which both experiential and non-experiential truths (as we currently conceive of them) supervene. He considers a view very much like this (due to Russell), but concludes (wrongly in my humble opinion) that it is best thought of as an instance of his epistemic view.

I have some more thoughts provoked by my reading which I’ll try to develop into another post, but for now let me reiterate my admiration for the book: it’s very well written and is recommended for those who follow these debates.


Tuesday, August 01, 2006

Revised Mission for Templeton?

There’s a post by Sean Carroll on Cosmic Variance from the other day noting that the Foundational Questions Institute (FQXi) had announced the recipients of its first round of grants geared toward research into deep questions of physics and cosmology. It looks like an exciting and worthy list. The sole funding (so far) for the institute has come from the John Templeton Foundation (JTF), which is well known for having as a primary goal the promoting of a reconciliation between science and religion. At the end of his comment to the post, Anthony Aguirre (associate director of FQXi) says: “…it is interesting to note that we recently found out that JTF has a new mission statement. Not sure what this signifies.”

If you go to the foundation’s home page, you now see this mission statement:

The mission of the John Templeton Foundation is to serve as a philanthropic catalyst for scientific discovery on what scientists and philosophers call the 'Big Questions.' Ranging from questions about the laws of nature to the nature of creativity and consciousness, the Foundation’s philanthropic vision is derived from Sir John’s resolute belief that rigorous research and cutting-edge scholarship is at the very heart of new discoveries and human progress.

I don’t know when this appeared (and it wasn’t heralded by a press release), but it is remarkable to note the focus on science and the complete absence of the words “religion” or “spirituality”, etc.

[UPDATE (Nov. 14, 2006): The foundation has revised the site; the mission statement above is reworded slightly but the content is the same; the older mission statement mentioned next appears to have been removed...]

Now it turns out that if you drill down into the “About the Foundation” section you will see a different mission statement, which has the references to moral and spiritual dimensions, etc., and a quote from Sir John referring to God. And if you check out the overview of what is still called the Science & Religion part of the foundation’s work, you still see the emphasis on religion. It may be that the mission statement on the home page shouldn't be taken to imply a significant change in emphasis. (Take a look at the Programs and Conferences sections for more information on what the foundation is doing -- there is also a 15-minute video).

Many, including Sean Carroll, have been concerned about accepting grants from the foundation because of suspicions it really simply wants to advance religion, specifically Christianity. I would refer you to this article by science writer John Horgan from a few months back, for background on these sorts of qualms.

The reason this is all intriguing is that the foundation has immense resources (over $1 billion), yet the idea of reconciling religion with science seems misguided. Science is of course not a worldview but a methodology for investigating the world. If you tried to inject religion into science, it wouldn't be science anymore. Perhaps JTF realizes this, and simply thinks funding scientific research into the deepest foundational issues will lead to results friendly to its worldview.

Another way to approach these issues would be more philosophical than scientific -- looking for middle ground between the science-inspired worldview of metaphysical naturalism versus religious worldviews. But I've long thought that if this was the goal, then the foundation should have supported academic philosophy more explicitly alongside physics on the one hand and theology on the other. Disciplines such as philosophy of science, philosophy of mind, and metaphysics, for instance, have (in my opinion) a crucial role to play if you want to find such common ground between worldviews. Perhaps I can be encouraged by the reference to “philosophers” in the mission statement above.

Regardless of this, if the foundation intends to continue or further increase its focus on funding of foundational research in physics and cosmology with no explicit religious "strings" attached (as appears to be the case with the FQXi funding), that would be a good thing.

Update: Congratulations to Matt Leifer (of Quantum Quandaries) for being one of the grant recipients (his comment on the grant announcement is here).

Monday, July 24, 2006

Musing on QFT Ontology

Spurred by the SEP entry on Quantum Field Theory by Meinard Kuhlmann, I spent some time attempting to draft a blog post about the philosophically problematic ontology of quantum particles and fields (see especially section 5 of the article). I decided I needed to read more on the topic before going further, however (I'd be grateful for recommendations for further reading). In reflecting on Quantum Mechanics itself, I’ve argued the merits of a dual ontology, where interaction/measurement events comprise the concrete macro-world, and the wave function provides a description of a system's natural propensities -- still “real” in an important sense, but not concrete.
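
For what it’s worth, the standard Born rule (textbook quantum mechanics, not anything specific to this dual-ontology reading) is what quantifies such propensities: the wave function assigns a probability to each possible measurement outcome,

\[
p(a_i) \;=\; \big|\langle a_i \mid \psi \rangle\big|^2 ,
\]

where $|\psi\rangle$ is the system’s state and the $|a_i\rangle$ are the eigenstates of the measured observable. On the reading sketched above, only the interaction/measurement events are concrete, while $|\psi\rangle$ encodes the propensities for them.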

But I don’t know enough at this point to flesh this out into a discussion of QFT yet. So, in addition to recommending the SEP article above, I’ll compensate by linking to this blog post by sigfpe, which puzzles about the meaning of “particles” primarily in the context of the Unruh effect (HT: Reality Conditions). I agree that the notion of a particle makes no sense in anything like the classical meaning, although there are indeed “particle” detection events.


Tuesday, July 11, 2006

Books Unblogged: Biblical Theme

I’ve read a few good books in recent times and have been meaning to mention them on the blog. Here are 3 of them.

Bart Ehrman’s book, Misquoting Jesus: The Story Behind Who Changed the Bible and Why, received extra attention due to the inclusion in the book of the story of his personal religious journey. Ehrman was an evangelical Christian as a young person but lost his faith during his progression toward becoming a leading New Testament scholar. But the main part of the book is a very well-written introduction (for laypeople) to NT textual criticism, and it is on this basis that I recommend it for those interested in the topic. His account of the detective work involved in tracking the evolution of various manuscripts and teasing out probable copying errors and deliberate scribal alterations is fascinating stuff.

Recently, the early Christian text The Gospel of Judas received a lot of attention in the media. Some of the excitement about the discovery of this text was probably misplaced, since the (likely) mid-second-century text tells us nothing at all about the historical Jesus or Judas. On the other hand, it is rightly welcomed as another valuable window into the variety of early Christian communities, and it is certainly intriguing that one such community identified with Judas and created a revisionist account wherein he is the one disciple to know Jesus’ true nature. I missed the television special but enjoyed reading the book linked above, which includes a translation and four essays. In the first essay, Rodolphe Kasser describes the perilous journey of the manuscript from discovery to publication. In the second essay Bart Ehrman gives a very basic overview of the Gospel in the context of the conventional wisdom regarding the relationship of the Gnostic gospels to the proto-orthodox church. While I like Ehrman (see above), I thought this was a fairly weak essay which ignored other interpretations: for more texture on the implications of Judas and the other Gnostic texts for our view of early Christianity, see this NY Review of Books article. In the third essay, Gregor Wurst briefly relates the gospel to the anti-heretical writings of Irenaeus (who wrote about the existence of a Gospel of Judas), and in the final essay Marvin Meyer helpfully explains the strange and complicated cosmology in the Gospel by relating it to parallels in other texts.

The next book, Harold Bloom’s Jesus and Yahweh: The Names Divine, I really only recommend for those who have found Bloom’s previous work rewarding. The book is his rumination on the “characters” of Jesus and Yahweh from a literary, cultural and religious perspective. The main reason for my caveat is that this book seemed sloppily tossed off in stream-of-consciousness mode. This made it a somewhat exasperating read for me even though I find Bloom to be an insightful thinker. For those who don’t know, Bloom is a prominent and prolific literary critic with a distinctive yet also traditional (aesthetic) perspective. His initial renown was for his book The Anxiety of Influence, which outlined his approach to criticism. I particularly enjoyed his take on The Western Canon in a subsequent book. He has expanded his attention to wider cultural and religious criticism in other books, and indeed it rings true that he would have long been ruminating on the relationship between Christianity and Judaism as the ultimate example of the “Anxiety” (fyi, Bloom writes from a culturally Jewish but not particularly religious point of view). I was interested to learn from this book that Bloom finds the Jesus of the Gospel of Mark in particular to be a compellingly “uncanny” character to rival the Yahweh of the “J” thread of what Christians call the “Old Testament”.

Tuesday, June 27, 2006

Interpreting Quantum Probability

I have been reading more about the philosophical interpretations of probability and how this topic relates to the interpretation of quantum mechanics (QM). If we assume hidden-variable theories don’t work, some notion of probability is at the core of QM.

I had previously read this SEP article on the interpretation of probability. It provides a good overview of the history of the issue among philosophers, and reveals a wide variety of ideas under past and present consideration. Among physicists, however, debate on interpreting probability seems to center broadly on two conceptions: a frequency interpretation and a Bayesian interpretation.

I thought this recent post at physics musings was a good one on this topic, and I also benefited from the links (including the one to this John Baez page). Then, a recent post at Quantum Quandaries provided a link to this paper by Marcus Appleby. In it Appleby argues persuasively for the Bayesian conception and considers the implications for interpreting QM.

The frequentist conception appeals because it is intended to be objective. If we could repeat an experiment an infinite number of times we would empirically fill out the probability distribution of outcomes. The problem is we can’t do this. The Bayesian interpretation is epistemic: it shows how, given one’s prior assumption about a probability distribution, a measurement outcome serves to update it. Appleby argues that an epistemic conception is unavoidable. He shows the problems with frequentist conceptions which try to provide an interpretation in situations with finite ensembles; here, one attempts to focus only on a pragmatically relevant finite subset of outcomes. However, this choice of subset is influenced by the context of the situation and the biases of the chooser, and thus reintroduces the subjective element.
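
For reference, the engine of the Bayesian picture is just Bayes’ rule (standard form, not quoted from Appleby’s paper):

\[
P(H \mid D) \;=\; \frac{P(D \mid H)\,P(H)}{P(D)} ,
\]

where $P(H)$ is the prior probability assigned to hypothesis $H$, $P(D \mid H)$ is the likelihood of the measurement outcome $D$ given $H$, and $P(H \mid D)$ is the updated (posterior) probability.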

Appleby discusses the propensity interpretation of QM, which places the probability as an objective property of the system being measured. He suspects this idea usually underlies the adoption of a frequentist perspective. Propensity can, however, be made consistent with the Bayesian conception, if one gives up the idea that propensity is a directly observable property.

Appleby next discusses attempts to formulate an objective version of the Bayesian conception. Can the prior probability distribution be objectively grounded? No, because at some point you have an initial assumption which is not empirical. You cannot derive a probability from a non-probabilistic empirical fact.
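
A toy illustration of this point (my own sketch in Python, not from Appleby’s paper): update a coin’s unknown bias from the same data under two different priors. The two posteriors differ, so some non-empirical prior assumption is always doing work.

    # Toy Bayesian update of a coin's bias over a grid of candidate values.
    # Same data, two different priors: the posteriors (and their means) differ,
    # illustrating that the prior is an ineliminable non-empirical input.

    def posterior_grid(prior, heads, tails, grid):
        """Bayes update over a grid of candidate biases, then normalize."""
        likelihood = [p**heads * (1 - p)**tails for p in grid]
        unnorm = [pr * lk for pr, lk in zip(prior, likelihood)]
        total = sum(unnorm)
        return [u / total for u in unnorm]

    def posterior_mean(grid, post):
        return sum(p * w for p, w in zip(grid, post))

    grid = [i / 100 for i in range(1, 100)]   # candidate biases 0.01 .. 0.99
    flat_prior = [1.0 for _ in grid]          # "uninformative" prior
    skewed_prior = [p**4 for p in grid]       # prior favoring heads-biased coins

    heads, tails = 7, 3                       # identical evidence for both observers
    post_flat = posterior_grid(flat_prior, heads, tails, grid)
    post_skewed = posterior_grid(skewed_prior, heads, tails, grid)

    print("posterior mean (flat prior):   %.3f" % posterior_mean(grid, post_flat))
    print("posterior mean (skewed prior): %.3f" % posterior_mean(grid, post_skewed))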

Now, getting back to what it all means for our worldview: since probabilities are irreducible to objective facts, and quantum mechanics describes reality, does this mean we have to give up the idea that there is an objective real world out there? Is it true, a la most summaries of the Copenhagen interpretation, that QM only describes the content of our knowledge?

In turning to this question in the last section of his paper, Appleby surprised me by bringing up the problem of qualia from the domain of the philosophy of mind. Qualia (arguably) are irreducibly first-person phenomena which do not fit into a mechanistic view of the world. A fully objective realist view of the world has no place for qualia. And yet, Appleby says, you would say the same thing about real probability or propensity, since these are irretrievably “contaminated” by subjectivity. For him, this points to the need to give up fully objective realism and accept that we need to find a fuller extension or development of a Copenhagen-style interpretation.

Not mentioned in this two-year-old paper is the Relational Interpretation of Quantum Mechanics (RQM), which can be thought of as such a generalization and extension of Copenhagen. This interpretation seems to fit best with the Bayesian interpretation of probability. For some more recent discussion of RQM, follow some of the links in the physics musings post above and also see the recent posts in this thread at PhysicsForums.