Friday, December 05, 2008

Markopoulou: Time is Fundamental, Space is Not

The Foundational Questions Institute has run an essay contest on "The Nature of Time" and received a wide variety of responses. These come from well-known physicists, other academics, and amateurs alike. Because of time constraints I've only read a few, beginning with authors I recognized (there are likely some "diamonds in the rough" if one plows through all the contributions).

Fotini Markopoulou of the Perimeter Institute, whose work I mentioned in the last post (and several older ones), wrote: "Space does not exist, so time can." She has a talent for writing clearly about these deep concepts, and I find her arguments persuasive (even if her work toward a full theory of quantum gravity still has a long road ahead). So I highly recommend the essay.

Kudos also to cosmologist and blogger Sean Carroll for his nice essay: "What if Time Really Exists" (here is the Cosmic Variance post introducing it). While I don't like some of his specific suggestions (associating time's arrow with macroscopic entropy considerations), I like the stance he takes in the essay.

For countervailing views you can read the contributions of Carlo Rovelli and Julian Barbour.

UPDATE (7 January 2009): I just found this interesting post by Scott Aaronson - "Time: Different from space" - which includes his computer science-derived insight on why time (causal structure) is fundamental.

Tuesday, November 18, 2008

Aether Makes a Comeback

The nineteenth century version of the ancient concept of the aether (or ether) was killed by the Michelson-Morley experiment and the success of Einstein’s theory of special relativity. Electromagnetic radiation needed no substance to support wave propagation. Of course we did not revert to a view of space as a void sprinkled with a few solid objects. In modern particle theory, space-time is pictured as filled with matter fields. And in general relativity, space-time is revealed as a dynamic actor, not just a backdrop. Still, space-time remains distinct from matter/energy, and is geometric, rather than substantive. It thus retains a bit of the conceptual flavor of an empty container (a related discussion on the blog is here).

I was surprised to see the number of physics papers on arxiv which invoke the concept of aether (or ether) in the context of theoretical proposals for solving outstanding problems (e.g. dark energy). For me, aether was brought to mind by certain quantum gravity research programs. These propose that the space-time of general relativity is not fundamental: it emerges (along with the matter fields of the standard model) from something more basic – an underlying network of elementary quantum systems. This underlying network is not itself defined against a spatial backdrop and lacks the usual notions of distance or locality. Both space-time geometry and matter as we know them are constituted by the quantum systems: they arise from the aether.

For an example of this kind of work, here’s the second “quantum graphity” paper from Fotini Markopoulou and colleagues (the authors do not invoke the term aether, so don’t blame them!*). The introduction does a good job of discussing the stance they are taking toward the space-time of general relativity, and places this in the context of how other quantum gravity research programs approach the issue.

* Although they do link their work to the model described in this paper: “Quantum ether: photons and electrons from a rotor model” by Levin and Wen.

{UPDATED 19 November, 2008: Minor edits; 8 December 2008: Sean Carroll at Cosmic Variance just posted about his collaboration on aether field models.}

Friday, November 07, 2008

Ruminating on Theism and Personhood

As I’ve discussed in previous posts (see list below), one can try to put a theistic spin on my concept of the necessarily existing metaphysical “megaverse”: one can identify the megaverse with “God”. To be sure, it would be a non-traditional concept of God compared with that of our Western religious traditions – it would be a variety of panentheism (or perhaps panendeism). But perhaps this is stretching things too far, and using the label “God” is simply incongruous. Certainly there may be less room for confusion in communicating the ideas if one leaves “God” out of it (I think the historical reception given to Whitehead’s process metaphysics is a cautionary tale in this respect). While there are several considerations here, I think this decision of labeling should be driven in part by the question of personhood, which seems to be an important component of most conceptions of God.

I have a longstanding skepticism about the attribution of personhood to God -- if personhood means something similar to what it means in the human context. In the context of traditional religions, I always suspected anthropomorphism was at the root of this attribution. In outlining my version of the cosmological argument for a necessarily existing entity, I disliked even using the term “necessary being”, because it has the flavor of “human being”, and thus too readily invokes personhood.

Despite this skepticism, the panentheistic version of my view would say that human characteristics arise from the same raw materials which also constitute God, so there must be some essential affinity. Given my specific opinion that first-person experience is rooted in the most fundamental level of reality, I suspect that God might well be a subject of experience – a key aspect of human personhood.

On the other hand, when discussing Timothy O’Connor’s book (here and here), I disagreed with him on whether the necessary being (NB) needed to be considered an agent (and agency seems to be another key aspect of personhood). It seemed sufficient for the NB to be an impersonal font of creation without need of intentions, purposes, or discrete top-down decision-making. My thought here is that a human being moves within a sea of other beings or systems, and his/her agency arises in this context. The NB is the source and sum of all being, and does not operate in a larger context. This seems to me to be an important difference. To use an analogy: if a human being is an actor, then God is the theater, not another actor.

So, at this point I have a mixed verdict – our human essence is derived from the NB, but our status as a finite subset of the NB’s totality makes our nature very different. Whether the concept of personhood can be stretched to cover both situations is unclear, and I think the better option is to decline to consider the NB to be a person. And this may be a good reason to resist using the label God for what I have in mind.

----------------------------------------------------------------------------------
Note: I’m even less well-read in relevant literature in this area than in other subjects I discuss: any reading suggestions in philosophy of religion are welcome.
----------------------------------------------------------------------------------

Post Series (in chronological order): A Philosophical Path to Theism?

Modal Realism and the Cosmological Argument
Exploring the Borderlands
Panentheism
Whitehead's Philosophical Theism
Logos vs. Chaos, Part One
Logos vs. Chaos, Part Two
A Necessary Being or Just a Collection?
Why the Megaverse is a Unified Entity
Is the Megaverse a Subject of Experience?


Thursday, October 30, 2008

Finally! (off-topic post)


28 years since the last Phillies championship and 25 years since the last city title in a major sport.

And what a great team. They are a well rounded, solid group of players, with stars we've seen develop over the years as well as clutch role players added recently. Great personalities who've given us wonderful entertainment and now the most satisfaction a fan can get.

Ryan Howard's victory lap -- picture by elisbrown -- flickr under creative commons license.

Wednesday, October 22, 2008

What Lies Beyond the Big Bounce

We don’t have a fully developed theory of quantum gravity yet, but there is one consequence of the theory we already know: it will banish general relativity’s space-time singularities from our conception of the universe. In particular, the idea of the big bang needs to be retired after decades of dominating professional and popular views of cosmology: the observed universe did not begin as a singularity but rather grew out of a pre-existing reality – a “big bounce”.

Martin Bojowald had a nice article in SciAm recently ("Follow the Bouncing Universe" in the print edition). Bojowald is a loop quantum gravity theorist: while loop theory has not produced an adequate theory for quantum gravity (and I think it probably won’t), it has produced formalisms that may be useful for constructing models which offer insight into the question of what will replace singularities in QG. This work goes under the rubric “loop quantum cosmology (LQC)”. I also noticed that Bojowald’s senior colleague Abhay Ashtekar has a paper out summarizing the results of work in LQC.

What intrigues me is their exploration of what the region on the other side of the big bounce might be like.

In his article, Bojowald first outlines the idea that space-time in QG is not a continuum, but rather has a fine-scale fundamental structure. These space-time “atoms” follow the rules of quantum mechanics and therefore the physics that prevails at high energies/short distances will differ from general relativity (GR). Specifically, in the loop model, a repulsive force comes into play at high energy densities, preventing singularities. In the case of the big bang, one scenario is that the initial high density state arose when a pre-existing universe collapsed (hence – a “bounce”). Bojowald describes an early, simplified model which seemed to imply that the pre-existing universe was similar to our own. However, Bojowald says his own subsequent work found that quantum effects would have dominated the immediately pre-existing world:


“So the bounce was not a brief push by a repulsive force, like the collision of billiard balls. Instead, it may have represented the emergence of our universe from an almost unfathomable quantum state – a world in highly fluctuating turmoil.”

Bojowald finishes by discussing how we might learn more about the pre-existing universe from astronomical clues.

Ashtekar’s paper discusses the same research more formally; in addition, he deals with LQC models for black holes, where again singularities are replaced by quantum regions (surprisingly to me, black holes are somewhat more difficult to model than the big bang itself). He concludes his discussion of the big bang/bounce this way: “Big bang is not the Beginning nor the big crunch the End. Quantum space-time appears to be vastly larger than what general relativity had us believe!”
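To make the repulsive-force story a little more concrete, here is a toy check of my own (not code from Bojowald or Ashtekar; units and initial conditions are made up). The LQC literature gives an effective Friedmann equation, H^2 = (8πG/3)ρ(1 − ρ/ρ_c), whose correction term shuts off expansion or collapse as the density ρ approaches a critical value ρ_c. For a massless scalar field this equation has an exact bouncing solution, which the sketch verifies numerically:

```python
# Toy check (mine), in units where 8*pi*G/3 = 1: for a massless scalar
# field (rho ~ a^-6), the LQC effective Friedmann equation
#     H^2 = rho * (1 - rho/rho_c)
# has the exact solution a(t) = rho_c**(-1/6) * (1 + 9*rho_c*t**2)**(1/6).
# The scale factor contracts, reaches a minimum at t = 0, and re-expands:
# no singularity, and the density never exceeds rho_c.
import numpy as np

rho_c = 1.0
t = np.linspace(-2.0, 2.0, 100001)
a = rho_c ** (-1 / 6) * (1 + 9 * rho_c * t ** 2) ** (1 / 6)
rho = a ** -6.0

# compare the numerical Hubble rate against the effective Friedmann equation
H = np.gradient(a, t) / a
print("max density / rho_c    :", rho.max())            # -> 1.0 (the bounce)
print("min scale factor       :", a.min())              # > 0: no singularity
print("Friedmann eq. residual :",
      np.abs(H ** 2 - rho * (1 - rho / rho_c)).max())   # ~ 0: solution checks out
```

The point of the exercise is just to see the mechanism: classically (drop the 1 − ρ/ρ_c factor) the density diverges and a → 0; with the quantum correction the collapse halts at ρ_c and reverses.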

My takeaway is that a realm of quantum possibilia extends beyond and surrounds our island of observable cosmos. The old idea of the universe as a relatively straightforward, neatly bounded space-time container must be discarded.

Friday, October 03, 2008

The Intentionality of the Single Cell

I highly recommend this paper by Tecumseh Fitch. In it he traces the distinctive intrinsic intentionality of the mind to the capabilities of the eukaryotic cell -- which he calls "Nano-Intentionality". Hat tip to this post at Conscious Entities (although Peter was less impressed than I was with the core argument).

Wednesday, September 24, 2008

Paper on Entanglement in Biology

Through a search on arXiv I found a paper by Hans Briegel and Sandu Popescu called “Entanglement and intra-molecular cooling in biological systems? – A quantum thermodynamic perspective”. It made what struck me as a straightforward case for why non-trivial entanglement is plausible in a biological context, given that living things are open thermodynamic systems. The paper’s goals are modest: it does not present experimental results but just describes some toy models which motivate the authors’ argument. The authors believe research needs to be directed towards searching for signatures of entanglement in biological systems.
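To give a flavor of the kind of toy calculation at stake (this sketch is my own illustration, not one of the models in the paper): the thermal state of two Heisenberg-coupled qubits is entangled only below a threshold temperature, which one can verify by computing Wootters' concurrence. Whether a biological environment can hold subsystems below their effective thresholds is exactly the sort of question the authors raise.

```python
# Thermal two-qubit toy model (illustrative, not from Briegel & Popescu):
# the Gibbs state of H = sx.sx + sy.sy + sz.sz (antiferromagnetic, J = 1)
# is entangled for kT below roughly 4/ln(3) ~ 3.64, as measured by
# Wootters' concurrence (which is zero iff the state is separable).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz)

def concurrence(rho):
    yy = np.kron(sy, sy)
    R = rho @ yy @ rho.conj() @ yy                      # Wootters' rho * rho~
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

vals, vecs = np.linalg.eigh(H)
for kT in (1.0, 2.0, 3.0, 4.0, 5.0):
    rho = (vecs * np.exp(-vals / kT)) @ vecs.conj().T   # Gibbs state exp(-H/kT)
    rho /= np.trace(rho).real
    print(f"kT = {kT}: concurrence = {concurrence(rho):.3f}")
```

Running this shows the entanglement decaying with temperature and vanishing entirely above the threshold -- the basic tension between thermal noise and entanglement that the paper's open-system arguments are meant to address.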

The authors do mention in passing that they think coherence on a very large scale, such as across the whole brain, is “virtually impossible”. If true, this implies that if brain processes exploit quantum effects, it is as a result of aggregation of such effects at the sub-cellular level.

As an aside, here’s the URL for the search I did on arxiv. This obviously only captures a small slice of what might be happening in this area, and specifically tilts toward the theoretical rather than the experimental side of things. My unscientific impression is that most of the papers here are authored outside the United States. Hopefully a concrete result such as last year's detection of quantum effects in photosynthesis at Berkeley will accelerate research in this area everywhere.



Friday, September 19, 2008

2008-2009 GPPC Program

Here's a link to the Greater Philadelphia Philosophy Consortium's program for this year (the GPPC is a cooperative effort by 15 area universities to sponsor philosophy conferences and other programs). There is also a regional calendar which includes some other events taking place at member schools - the great majority are open to the public.

I'm looking forward in particular to the November 22 event at Swarthmore, "New Approaches to the Mind/Body Problem," which will feature talks from the phenomenological perspective on mind from Barnard College's Taylor Carman and Oregon's Mark Johnson.

Friday, September 12, 2008

Is the Megaverse a Subject of Experience?

My thanks to Justin, whose comment and question on the last post prompted me to try to clarify some of my thinking.

Below I address his question of whether this “megaverse” I’ve discussed in the last couple of posts might be a cosmic-level subject of experience, given our shared endorsement of panexperientialism in the context of philosophy of mind. I tentatively think the answer is yes. Along the way I’ll try to better explain some of my reasoning and my use of terms.

The most common conception of the universe people have via science is that of a space-time container with matter/energy inside it. I’ve come to believe that this conception is wrong. I think it is likely that space-time itself is not a fundamental entity, but co-emerges with matter from a more fundamental level of quantum causal events. And the universe we see is only a slice of something larger (there are no boundaries – quantum gravity models imply that the big-bang was not a singularity, but arose from a pre-existing context).

So what we think of as the actual universe is an arbitrary slice of a larger reality, and it therefore doesn’t have a good claim to be a unified whole or a candidate for being a cosmic experiential subject.

What lies beyond our actual universe? Various motivations have led cosmologists as well as philosophers to propose the existence of many worlds or universes -- a multiverse. If these are completely separate worlds, then their existence would seem to have no impact on ours, but they might help explain the appearance of contingency and fine-tuning in ours. I’d note that the multiverse conception at first doesn’t seem to fit well with the idea of our universe as a cosmic subject – unless there is one such subject for every universe.

Again, though, what we call the actual world is not some space-time container with stuff inside; it is just the causally connected region or nexus we find ourselves in. So then it is wrong to think of the multiverse as a collection of distinct space-time containers. Even if we have not been in causal contact, these other parts of reality should not be thought of as completely separate realms. I’ve taken to calling this total reality the “megaverse” rather than the multiverse, given this way of thinking.

When thinking about the nature of this megaverse, I’ve connected it to my philosophical thinking on modal realism and causality. My modal realism leads me to identify the megaverse with the complete set of metaphysical possibilities (going beyond the multiverse motivations of physicists/cosmologists). My preferred model of causality leads me to see a close relationship between each actual event and the possible but not actual events which are also part of the megaverse. (Note also that because “actual” just denotes “local” in this model -- actual is an indexical term -- the distinction between actual and merely possible events is not fundamental. All events are on an even footing.)

So, I’ve given the name megaverse to this largest conception of reality, and I see it as a holistic entity given the interdependence of its constituent-events. Let me come back, then, to the question of postulating a cosmic experiential subject: if all events have an experiential aspect, then it makes sense that the holistic network of all events is also the subject of (all) experiences. I’m not sure this makes the megaverse something which has consciousness or agency in a way analogous with the human variety. This is something to think further about.

Wednesday, September 10, 2008

Why the Megaverse is a Unified Entity

To continue the discussion from the last post, I sketch below some reasons why the metaphysical “megaverse” – the sum of all actual and possible events – should be considered a unified whole rather than a “mere” collection.

First, while I realize the field of mereology has been contentious for ages, I have always favored arguments that a whole is more than the sum of its parts (composition is not identity). I have several posts dealing with this in the context of trope theory: it appears that tropes cannot be bundled without an external unifier (I discussed Bill Vallicella’s arguments on this topic here and in the second half of this post). What could unify objects? The best answer IMO comes in terms of a theory which approaches composition via causation (see the paper discussed here for why this is a promising approach).

Moving specifically to my preferred event ontology, I think that participation in a larger causal pattern serves to unify constituent events into higher-level units (event complexes). Gregg Rosenberg’s theory on causation and natural individuals serves as an example of this approach (discussed here and here). To extrapolate, we can picture a hierarchy of causal patterns culminating in the largest one of all – the megaverse.

Further, because individual micro-level events in this model are actualizations drawn from a set of possible events, they simply cannot exist in isolation. The existence of an actual event presupposes a space of possibilities. I believe quantum physics provides a posteriori evidence for this feature of reality.

There are other possible arguments for the unity of the megaverse. For instance, the megaverse serves to ground necessary truths (such as those of arithmetic and basic logic). These truths extend throughout the megaverse, providing a unifying “shape” to events.

Also, the megaverse supports consciousness. In my preferred theory of mind, causation is inherently experiential, and the unification of constituent experiences is a defining characteristic of consciousness.

Much more can and should be said, but I think the case is good for considering the necessary existent to be a unified entity.

Wednesday, September 03, 2008

A Necessary Being or Just a Collection?

In Chapter 7 of his Metaphysics (paperback, second edition), Peter Van Inwagen discusses the cosmological argument for the existence of a necessary being. He critiques traditional formulations which invoke the Principle of Sufficient Reason (PSR), and then presents some versions which rely on premises weaker than the traditional PSR. He thinks some of these versions have more merit (but none support a truly compelling argument in his view).

Below I look at one of the versions he discusses, which I found interesting and helpful for the purpose of exploring an argument I’ve been favoring. A key premise in both arguments has to do with whether the sum of everything that exists is actually an entity or being in its own right.

The first premise of the argument from Van Inwagen’s book relies on the following principle: “Every being has this feature: the fact that it exists has an explanation” (p.125). If we accept this principle then, Van Inwagen argues, it is plausible to suppose that there are no beings which are both independent and contingent. To be independent means there can be no explanation in terms of other beings: therefore, given that we demand some explanation for existence, the only answer can be that the being’s non-existence must be impossible. Its existence is necessary rather than contingent. It follows that we can construct what might be called a Monist or Pantheist cosmological argument (my labels):

Monist or Pantheist Cosmological Argument (adapted from Van Inwagen, p.125):

1. There are no independent and contingent beings (premise)
2. There is a being which is the totality of all beings: the world (premise)
3. The world is an independent being (follows from 2)
4. The world is not a contingent being (from 1 and 3)
5. The world is a necessary being (follows from 4)

I think there are good reasons to accept premise 1, but I want to discuss here the fact that even if one accepts it, many would reject the second premise. As Van Inwagen explains, a naturalist typically has no problem seeing the contingent objects in the world as mutually dependent, without any need to postulate a world-being. Traditional theists also reject premise 2, since they see the necessary being as distinct from its contingent creation. When it comes to our actual world, I also would reject premise 2: I don’t see a good reason to consider the totality of things an additional fundamental thing (I have a post related to this topic here).

The form of this argument, however, helped bring out an implied premise in my own modal realist version of the cosmological argument (first sketched here). This is the premise that the sum of all metaphysical possibilities is a kind of totality-entity in a way that the sum of all actual things is not.

Let me lay out the argument as follows (note that while I will use the term “beings” to be consistent with the discussion above, this would not normally be my choice):

Modal Realist (or Panentheistic) Cosmological Argument:

1. There are no independent and contingent beings (premise)
2. Modal Realism is correct: all possible as well as actual beings exist (premise)
3. There is a being which is the totality of all beings: let’s call this the “megaverse” (premise)
4. The megaverse is an independent being (follows from 3)
5. The megaverse is not a contingent being (from 1 and 4)
6. The megaverse is a necessary being (follows from 5)
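As an aside, the deductive skeleton shared by both arguments is trivially valid, and it is worth making that explicit, since it shows that all the philosophical weight rests on the premises. Here is a toy formalization of my own in Lean (with "Necessary" simply defined as "not contingent"; the totality premise is packaged as the existence of an independent being):

```lean
-- Toy formalization (mine) of the skeleton shared by both arguments.
-- The inference is one line; the premises carry all the weight.
section
variable {Being : Type}
variable (Independent Contingent : Being → Prop)

-- "Necessary" defined as "not contingent"
def Necessary (b : Being) : Prop := ¬ Contingent b

-- Premise 1: no being is both independent and contingent.
-- The totality premises (2-4 above) are packaged as: some being,
-- the totality ("world" or "megaverse"), exists and is independent.
theorem totality_necessary
    (p1 : ∀ b, Independent b → ¬ Contingent b)
    (totality : Being)
    (ind : Independent totality) :
    Necessary Contingent totality :=
  p1 totality ind

end
```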

So the question I want to ponder is this: is premise 3 here more defensible than the similar premise 2 in the earlier argument? I think so, but I haven’t argued directly for this before.

My preliminary thoughts go something like this.

Certainly, the megaverse is metaphysically exhaustive in a way that the actual world is not, and so has a better claim to independence and necessary existence. But could it just be a “mere” collection of possible and actual beings rather than a totality entity/being?

Recall that in David Lewis’ model of modal realism, the set of possible worlds is indeed a mere collection. For Lewis, each world is causally distinct (the actual world is simply the one we find ourselves in). In my preferred model, on the other hand, there is an intimate connection between the actual and the possible at the level of each individual event. Each causal event is an actualization of a possibility. So there is a continual closeness or adjacency between what we see as actual and what is possible. The actual “world” is a causal network of events embedded in a larger framework of possible events (more related discussion here). So the “megaverse” is not a collection of distinct worlds, but is better seen as this transcendent framework which our ever-evolving actual “world” fits into.

Still, this doesn’t seem to rule out that, at the level of events (rather than “worlds”), we could view the megaverse as a “mere” collection.

I’ve been scratching my head on this, and have some more thoughts, but I’ll save them for a follow-up post.

Tuesday, August 12, 2008

Logos vs. Chaos, Part Two

This is the second of two posts -- the first post is here.

So, according to Timothy O’Connor, there are two kinds of necessary being (NB, for short) which could provide the right kind of ultimate explanation for our contingent reality: a personal agent (Logos), or an impersonal primordial world generator (Chaos). In Chapter 4 of his book, Theism and Ultimate Explanation, he examines these options to see which provides the better explanation.

O’Connor considers several types of Chaos models. He distinguishes between single-stage (all creation at once) and multi-stage models. For single-stage models, he describes three versions: Immutable Chaos, Abundant Chaos, and Random Chaos.

In the Immutable Chaos model, the world is necessarily a product of the NB’s nature. On reflection, O’Connor finds this model hard to credit: it seems unlikely that our large, highly arbitrary and extensively structured world should need to follow from the NB’s nature. Also, the idea of a single world being a necessary product runs afoul of the need for the NB to provide a non-fully contrastive cause, as discussed earlier in the book.

In the Abundant Chaos model, many worlds are produced, including ours. Oddly, to me at least, O’Connor uses the same sort of objection he had to Immutable Chaos to argue against the plausibility of this model:

“For surely it is no explanation of why the nature of Chaos is ordered to this effect to say that it is because it is ordered to that entire range of effects, and this is one of them. Really, the situation is just made more problematic. How is it that a highly unified source could be causally ordered to just this effect, and also to just that one, and… (Emphasis original, p.94)”

This analysis seems wrong to me. If there are many worlds (perhaps an infinite number of them), then arguing from the highly particular nature of our world loses its force. It is, after all, just our local neighborhood in a great expanse. In the extreme case, one might see the NB giving rise to (or, in the panentheistic mode, being constituted by) all metaphysically possible worlds. I’ll return to this point again below when I review O’Connor’s later invocation of the fine-tuning argument as a guide to choosing the correct NB model.

The third Chaos option is labeled Random Chaos. Here, the NB generates a world from its nature utilizing something like a random number generator. O’Connor sees this as having more merit than the other Chaos options. To me, however, this didn’t seem much different in a crucial sense from the Abundant Chaos model: myriad possibilities are viable and we find ourselves in one of them.

O’Connor also considers multi-stage versions of Random Chaos. For instance, perhaps at the termination point of a world (like Big Bang/Big Crunch points) a successor world is generated with random alterations from the previous entry. He says such models may have merit although the mechanisms are somewhat obscure to him (for a multiverse proposal with this feature, see physicist Lee Smolin’s first book). I didn’t find the single-stage/multi-stage distinction very important in all of this.

At this step in his discussion, O’Connor sees the Random Chaos model as the best alternative to the Logos (or agent) model. How might one choose between them? At this point, O’Connor invokes the fine-tuning argument. He believes that the fine-tuning argument fails as a stand-alone design argument for the existence of a NB, but given independent motivation of the NB from the cosmological argument from contingency, he thinks it can help choose between the options as to the NB’s nature.

I’ll skip reviewing some of O’Connor’s discussion of the fine-tuning argument, which would be familiar to those who have spent time on it (a nice summary is on these posts at Parableman). He concludes that it does succeed in elevating the Logos option over Chaos. He thinks the particular nature of our world argues for a NB which is an agent acting on purpose-driven intentions in creating our world. I disagree with this conclusion because I believe the (independently motivated) existence of a multiverse removes the force from the fine-tuning argument. O’Connor, on the other hand, concedes that the multiverse idea dilutes the force of fine-tuning, but he doesn’t think it eliminates it.

O’Connor does say that the strongest multiverse concept (for the purpose of countering fine-tuning) is one which invokes the existence of myriad metaphysically possible worlds as opposed to the cosmological models offered to date by physicists (I agree). In evaluating whether the multiverse defeats the fine-tuning argument, he first offers the objection that positing the multiverse is less parsimonious than positing a NB designer -- however he concedes this is not obviously persuasive. Then he offers a second objection, which I found dubious, saying that a multiverse option would be but one of many “totality” possibilities, many of which may not contain intelligent life. But if the multiverse is the complete metaphysical manifold, then this objection is ill-founded. I conclude that O’Connor does not find a compelling objection to the metaphysical multiverse model as a defeater for the fine-tuning argument (discussion takes place on pages 107-108). Therefore, the fine-tuning argument fails to provide a basis for preferring Logos.

Further, we should recall the rationale we used for positing some kind of NB in the first place, which is to ground real modal truths of necessity and possibility. Given a robust modal realism (which O’Connor and I both endorse) I think we should conclude that the NB is the source for (and should perhaps even be identified with) the full manifold of metaphysical possibilities. (By the way, I don’t think it matters for this discussion whether the other possibilities are concretely realized or abstract.) So, I conclude that a “Chaos” model of the NB is the preferred model. I judge this on the grounds of parsimony, since the personalized Logos model needs to have the same metaphysical scope, but also adds extra features (purposes, intentions) which are unneeded.

For now, I will not be blogging about the last two chapters of O’Connor’s book. Chapter 5 deals with the matter of how many worlds a Logos NB would create and discusses how this applies to the problem of evil and other matters. Finally, Chapter 6 is of less purely philosophical interest, as O’Connor discusses issues in Christian theology and the problem of reconciling the God of metaphysics to the God of the Bible.

So, this concludes my discussion of O’Connor’s thought-provoking book. I will be following up, though, with some additional reflections on this topic of inferring the nature of a transcendent necessary entity from what we know about our world. Also, I understand that Peter Van Inwagen discussed this Chaos/Logos topic in his Metaphysics, so I may have some thoughts after reading the relevant passages (which I intend to do soon).

[UPDATE: 2 September 2008] In the paperback 2nd edition of Van Inwagen’s book (linked above), he concludes that the multiverse objection is decisive against the fine-tuning argument. He says: “It is the possibility of an interplay of chance and an observational selection effect that is the undoing of the teleological argument in the form in which we are considering it (p.158).” When subsequently discussing the options of Logos and Chaos, he therefore sees no reason to prefer one over the other:

“As far as our present knowledge goes (aside from any divine revelations various individuals and groups may be privy to), we have no reason to prefer either of the following two hypotheses to the other:
• This is the only cosmos, and some rational being has (or rational beings have) fine-tuned it in such a way that it is a suitable abode for life.
• This is only one among a vast number of cosmoi (some of which are -- a statistical certainty -- suitable abodes for life) (p.161).”

To be sure, he does not give any credence to my argument that the chaos option is preferable to logos on grounds of parsimony.

I might also mention some context for those who haven’t read the book: Van Inwagen, while being one of our most prominent philosophers who is also a theist, finds none of the traditional philosophical arguments for theism (ontological, cosmological and teleological) compelling.

Monday, August 04, 2008

Logos vs. Chaos, Part One

Timothy O’Connor has written an interesting book full of meaty metaphysics (plus some theology): Theism and Ultimate Explanation: The Necessary Shape of Contingency. In this post I will focus on the sections of the book which were of greatest interest to me, but I enjoyed reading the whole thing. The central goal of the book is to present an up-to-date cosmological argument from contingency for the existence of a transcendent necessary being.

Like O’Connor, I’m interested in pursuing the quest for ultimate metaphysical explanation, and I agree with him that it is the apparent contingent nature of our world which demands an answer (“why this?”).

Unlike O’Connor, however, I don’t come to this quest as a traditional theist. I’ve long thought that the purposes and activities of a personal god cannot provide an explanation without raising equivalently difficult questions. So, an intriguing part of the book is where O’Connor discusses and compares two models of a transcendent necessary being: the personal and the impersonal. Or as he labels them: Logos and Chaos.

Before getting to this point, O’Connor lays the groundwork. In the first two chapters of the book, he discusses the case for modal realism and sketches a theory of modal epistemology. I agree with him that our concepts of possibility and necessity are so fundamental to explanation and reasoning (as well as to everyday life) that they are grounded in real modal truths. With regard to his account of how we come to know such truths, I found the discussion interesting, but tentative, and will leave this aside for now.

In Chapter 3, we begin to look at where the quest for explanation will take us. If the contingency of the world is acknowledged, and the necessities we come to know (if imperfectly) are real, what explains this situation? After commenting on the tendency for many philosophers to dismiss this sort of question as unanswerable, O’Connor turns his attention to the options on offer (I would note that Graham Oppy, in his review, disagrees with O’Connor in that he counts “contingent world as brute fact” as one of the options).

First, O’Connor discusses the common naturalist option which asserts that there is an infinite causal chain of contingent things, and that no further explanation is needed. O’Connor says even if the infinite causal chain is the right model, we can still coherently ask for an explanation of why THIS chain of things, and not another. The rejoinder is that a fully contrastive explanation of why this chain is as it is, if it existed, would convert the contingent chain to a necessary one. But O’Connor responds by saying that we can still seek a complete explanation, even if it is a non-fully-contrastive one. He says the only way to do this will be to ground contingent reality in a transcendent-cause explanation that is non-contrastive.

But how can a necessary being (NB, for short) provide the right kind of explanation? The necessity of the cause would seem to “prove too much” and convert the contingent chain to necessary status. Indeed, this is a standard critique of the Leibnizian version of the cosmological argument, which features the Principle of Sufficient Reason (PSR). O’Connor concedes the effectiveness of this critique, and points out that his version will not invoke the PSR. But isn’t the PSR or something like it implied by the desire for an explanation of the contingent world?

O’Connor says he can avoid this as he isn’t offering a contrastive explanation (e.g. why this world instead of any other?). He says he can do this by invoking an explanation based on a model of the NB as an agent:

"In the context of an agent who exercises a capacity to freely act for a purpose, explanation is grounded in an internal similarity relation of the content of the prior purpose to that of the effective intention. To understand why an intention is freely generated, one need only identify its reasons-bearing content. This contrasts of course, with a mechanistic model of intentional action on which an agent’s purposes or desires or beliefs explain the choice, or formation of an intention, solely in terms of an external, causal relationship to it. But it is readily understandable in its own terms. (p.83)"

So the NB-agent’s creative activity of forming an intention, based on reasons (which is then satisfied by the creation of a contingent world), need not be seen as a necessary sequence of steps:

"And it would be a confusion to suppose that we needed a further explanation of the generation of the intention – for that is just the agent’s exercise of control over his state of intention and its product. (p.83, emphasis original)"

But still we want to ask: why intend contingent world C instead of C*? Well, we can’t ask this question, because an answer would provide for C being inevitable. But that doesn’t mean we haven’t given an explanation at all, according to O’Connor. The buck stops with the (contingent) intentions of the NB-agent.

To me, this seems a difficult stance. A naturalist might ask why we can’t stop the explanation at an earlier stage (before invoking the NB); O’Connor wants to stop the discussion at a later stage – but either way we eventually reach a “conversation-stopper”. Do the extra steps add enough value to prefer this NB-agent view, especially given all the (controversial) extra apparatus of intentions, purposes, activity, etc.?

Well, we need to recall the NB view’s main advantage, which is that it helps underwrite the reality of modal truths (there exist both contingent and necessary aspects to reality). But, do we really need the agent apparatus to provide this advantage?

To his credit, O’Connor sees the existence of another alternative (or set of alternatives). He starts by noting that, in addition to his agent model, there are many instances where we accept causal explanations even when they are not fully contrastive ones. Specifically, we do this in cases of “indeterministic mechanistic causal processes in the natural world, and the kinds of scientific explanations that may be given for them.” He gives the example of a disease which, when untreated, leads to a debilitating outcome 27% of the time. If the indeterminism here is fundamentally irreducible, we still have a good explanation for the outcome (the untreated disease) even if not a fully contrastive one (we can’t say why the outcome manifested itself this particular time).

So, the question becomes: could the necessary being be an impersonal “fount” of creative possibilities, which generates one or more contingent worlds?

{End of part one}

Note: I’m skipping over some sections along the way – for instance I’m leaving aside the discussion of the Spinozan view here (could our single world be the necessary being?). Also, O’Connor mentions in passing that it is hard to rule out a model of multiple NBs -- although there seems to be no good argument for it either.


Friday, August 01, 2008

Conway and Kochen vs. Determinism

John Conway and Simon Kochen are out with a paper called "The Strong Free Will Theorem" (HT), updating and "strengthening" their earlier paper (discussed here). Recall that the theorem begins with axioms which, while idealized, flow from accepted aspects of quantum theory and relativity, and then concludes that if humans are assumed to be free in setting up experiments, then particles have the same kind of freedom in selecting among experimental outcomes. The theorem also serves as another argument toward ruling out hidden-variable interpretations of QM.

This paper presents a "stronger" version of the theorem, by showing it still works if one of the axioms is loosened, but otherwise the thrust is unchanged. In keeping with the earlier paper, though, the authors add to the formal argument some provocative philosophical comments, which I enjoy. Here's how the paper concludes:

"Although...determinism may formally be shown to be consistent, there is no longer any evidence that supports it, in view of the fact that classical physics has been superceded by quantum mechanics, a non-deterministic theory. The import of the free will theorem is that it is not only current quantum theory, but the world itself that is non-deterministic, so that no future theory can return us to a clockwork universe."

Thursday, July 24, 2008

3 Links

I'm currently reading Theism and Ultimate Explanation: The Necessary Shape of Contingency by Timothy O'Connor. I'm very interested in the cosmological argument from contingency, and this book is an up-to-date take on that and related metaphysical issues. I hope to have a post on this at some point but in the meantime here is a review from a naturalist's perspective by Graham Oppy (HT: sideblog at FQI).

I really enjoyed this insightful cartoon posted at Cosmic Variance along with the comments by Sean Carroll (the original source for the cartoon is here). No offense to cosmologists, but for purely philosophical reasons I think it is best to identify the actual world with the observable or causally connected universe (not that I think that's all there is, but because the regions we assume exist beyond the observable have a different ontological as well as epistemological status - see a related post here).

Finally, I want to post a friendly link to the discussion forum at Panendeism.org. Panendeism, as I understand it, is meant to be like Panentheism, but with the "deism" label stressing that this is a worldview arrived at through reason, without reliance on authority or revelation.


Tuesday, July 08, 2008

Sir John Templeton Dead at 95

Sir John was one of the great investors and philanthropists (here is the press release from the John Templeton Foundation site). I had the opportunity to hear him speak a couple of times and I conversed personally with him for a few minutes once (about interest rates and currency markets) and he was extremely insightful.

The foundation’s work, a big part of which looks to bridge religion and science, is sometimes controversial (for more see this old post), but it has funded many worthy efforts, including for example the recent seeding of the Foundational Questions Institute. I hope they continue to fund this kind of scientific research. In terms of critique, my personal wish would be that they divert some funding from religion and theology to non-religiously motivated study of metaphysics and philosophy of science. In any case, I think our progress on the big questions will be the better for the foundation’s existence. So, thanks to Sir John and to those who work at implementing his vision.


Friday, June 27, 2008

Reduce Everything to Space-time?

[UPDATED 25 Sept.2009: Fixed Links]
I want to quickly comment on an interesting post by Justin at Panexperientialism. In it he reviews a book by Freya Mathews (called The Ecological Self) and also discusses a draft paper by Jonathan Schaffer (Spacetime the One Substance). Please check out his post, which discusses many aspects of Mathews’ ideas in particular beyond what I’m picking up on here (I have not read the book).

Both Mathews and Schaffer advocate a monistic metaphysical view where matter is effectively reduced to space-time.

I agree with these authors that the dual scheme of {space-time container plus material objects} must be rejected, but I think they are slightly off-track in wanting to reduce the properties of matter fields to space-time (at least space-time as anything like we currently conceive of it).

These brief comments focus on the relationship of this idea to the work of theoretical physicists. Mathews acknowledges that an early attempt to derive this reduction from general relativity failed (Wheeler’s Geometrodynamics), but still likes the metaphysical vision for philosophical reasons. In his paper Schaffer argues toward a similar goal, but along the way I think he overstates the degree to which GR and (especially) quantum field theory as we know them are congenial to this vision. QFT has matter fields housed in a separate space-time container. In GR the matter and space-time are dynamically intertwined, but the fact that you can model the geometry while leaving out matter shows that they remain distinct.

In some ways the quest for a theory of quantum gravity can (should?) be viewed as a quest for a monistic theory which is rid of the dual scheme. I continue to try to follow the different theories as a layperson to see how they come down on this issue.

String theory: originally an extension of QFT which retained the feature of having fields on a background space-time. Has evolved in many ways over the years and maybe can overcome this starting point (?).

Loop Quantum Gravity and Causal Dynamical Triangulations: these seek to formulate a quantum version of space-time with the promise of integrating matter into the picture later. I’m not sure if this promised integration would be more monistic than GR.

Causal Sets; Quantum Causal Histories/Geometrogenesis; Internal Relativity; Quantum Computing and Condensed Matter-based approaches: these programs seem best on this question as they try to specify a monistic underlying micro-theory from which space-time and matter fields as we know them may simultaneously emerge.

I would note that if the latter sort of approach works, it doesn’t support Schaffer’s advocacy of priority monism (see my previous post on this topic). The underlying network would not be a very coherent whole, but a fairly ill-behaved evolving pluralism of micro-events. Even though Schaffer wants to overcome the container/object scheme, his view of space-time as the holistic fundamental object still has a bit of a hangover from the container idea in my opinion.

Tuesday, June 24, 2008

CDT in Scientific American

Surprising and promising results have not come too frequently in quantum gravity research, but the Causal Dynamical Triangulations program led by Renate Loll, Jan Ambjørn and Jerzy Jurkiewicz had an exciting moment in 2004. A computer simulation showed that a space-time model with the right dimensionality arose from a path integral superposition of fairly generic microscopic geometric building blocks. The team has kept up a steady stream of research investigating and seeking to extend this result, and now they have published a popular article in the latest Scientific American.

I recommend the article, if one has access to it. I also discussed (as best I could) the basics of the CDT approach in this earlier post, so I won’t repeat all that here. Also, I coincidentally had just read a recent paper by the team which showed how they have generated not just the right dimensionality, but also specifically find a de Sitter universe in a simulation.
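For a layperson's feel of how "dimensionality" is read off such a simulation: a standard diagnostic is the spectral dimension, extracted from how quickly a random walker diffusing on the geometry returns to its starting point. Here is a toy version on an ordinary flat 2-D lattice (my own sketch, not the CDT code), where the estimate should come out near 2:

```python
# Toy illustration of the spectral-dimension diagnostic (mine, on a flat
# 2-D grid -- not the CDT simulation itself). Diffusion on a d-dimensional
# geometry has return probability P(t) ~ t^(-d/2), so d can be estimated
# as -2 times the slope of log P versus log t.
import numpy as np

rng = np.random.default_rng(1)
steps, walkers = 200, 20000
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

pos = np.zeros((walkers, 2), dtype=int)
returns = np.zeros(steps)
for t in range(steps):
    pos += moves[rng.integers(0, 4, size=walkers)]
    returns[t] = np.mean(np.all(pos == 0, axis=1))   # fraction back at origin

# fit on even steps only (odd-step returns are impossible on this lattice)
ts = np.arange(2, steps + 1, 2)
p = returns[ts - 1]
slope = np.polyfit(np.log(ts[p > 0]), np.log(p[p > 0]), 1)[0]
print("estimated spectral dimension:", -2 * slope)   # close to 2
```

In CDT the same walk is run over the superposed triangulated geometries, and the headline 2004 result was that the large-scale estimate came out close to the observed four dimensions.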

The thing I find most interesting about CDT is that it may give some evidence that selecting asymmetric time and causality as fundamental features is important in quantum gravity (a caveat is that their simulation uses globally synchronized time, and I wonder if they can relax this assumption).

One reason to be cautious is that CDT at this point only deals with space-time, not matter. Like in Loop Quantum Gravity, there is an expectation that matter fields can be coupled to the theory later on. This is in contrast to research by Fotini Markopoulou and Olaf Dreyer (see posts here and here), who think that it is the matter fields which are to emerge from a micro-quantum substrate, and that space-time geometry is to be inferred from the matter. If this works, it seems conceptually more appealing, since you’ve dealt with both space-time and matter at once. (See also this recent FQXi article on Markopoulou and Dreyer).

One last interesting aspect which I hadn’t thought much about before was discussed in the recent paper. This is the fact that CDT (and I assume other “emergence-style” programs) needs to be investigated by computer simulation, rather than by deriving a specific analytical result through mathematical formalism. This essentially means giving up on the traditional idea of a “theory of everything” which can be written down in a set of equations. The CDT team doesn’t see this as a weakness, and cites condensed matter theory (see also this post) as an example of a field where emergent behaviors are profitably studied without the possibility of precise description at the micro-level. They also invoke the idea of “self-organizing” complex systems. Here’s a quote:

“Think of quantum gravity as a strongly coupled system of a very large number of microscopic constituents, which by its nature is largely inaccessible to analytic pen-and-paper methods. This is no reason for despair, but a common situation in many complex systems of theoretical interest in physics, biology and elsewhere, and merely calls for a dedicated set of technical tools and conceptual notions.”

Wednesday, June 11, 2008

Sensitivity to a Single Photon

Patrick Suppes and Jose Acacio de Barros wrote a paper called Quantum Mechanics and the Brain (HT: Clark’s sideblog). Obviously the title was irresistible to me. However, it turns out that the title is a bit misleading. The paper has a few paragraphs discussing previous proposals regarding the role QM might play in the brain and briefly gives the authors’ views on these -- frankly without saying anything new. The paper then shifts to what is its main focus, which is to highlight an interesting instance where there is a good indication that biological systems do exploit quantum level phenomena. This is the animal eye’s demonstrated ability to react to a very faint light signal: specifically the ability to detect and respond to a handful of photons, and probably a single photon.

The paper describes previous experimental results with insects and animals. Sensitivity to single photons was inferred using statistical methods in some of these tests (some of the older references in the paper were also discussed in this Usenet posting from 1996). The authors then outline a proposal for future experiments to confirm the results by utilizing the technological ability we now have to shoot single photons.
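The statistical logic is worth a quick illustration (the parameters below are made up for the sketch, not taken from the paper). If a dim flash delivers photons whose absorbed count is Poisson-distributed, and the eye reports "seen" whenever at least k photons are absorbed, then the shape of the frequency-of-seeing curve as flash intensity varies depends on k. Fitting that curve is how the classic experiments inferred detection thresholds of just a few photons, without any single-photon source:

```python
# Frequency-of-seeing sketch (illustrative numbers, not from the paper):
# P(seen) = P(absorbed >= k), where absorbed count ~ Poisson(n * q).
from scipy.stats import poisson

q = 0.06                          # assumed fraction of flash photons absorbed
thresholds = (1, 2, 4)            # candidate detection thresholds k
for n in (25, 50, 100, 200, 400): # mean photons per flash reaching the eye
    row = [poisson.sf(k - 1, n * q) for k in thresholds]  # sf(k-1) = P(X >= k)
    print(f"n = {n:3d}:",
          "  ".join(f"k={k}: {p:.3f}" for k, p in zip(thresholds, row)))
```

The steepness of the resulting curve discriminates between candidate values of k: a one-photon threshold gives a shallow curve, higher thresholds give progressively sharper ones. Shooting single photons directly, as the authors propose, would replace this indirect inference with a direct test.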

This result would demonstrate that organisms are sensitive to the quantum realm. While we remain far from understanding the role quantum effects might play in conscious experience, this is another step toward exploring the subject.

Friday, June 06, 2008

A Russell-Style Objection to Many-Worlds

In the standard formulation of Quantum Mechanics, there are two processes: in the absence of a measurement, a system described by a wave function evolves continuously and deterministically; when a measurement is made, the system instantaneously takes on a specific value according to the property being measured (often called a “collapse”). Everett’s Relative State formulation of Quantum Mechanics, the most famous version of which is called the Many-Worlds Interpretation, seeks to drop the collapse process from the theory. The interpretation then struggles to explain why we have the experience of measurement outcomes that we do (e.g. all outcomes are still happening but in many different worlds).
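For concreteness, here is a minimal numerical sketch of the two processes for a single qubit (my own illustration; the Hamiltonian and measurement basis are arbitrary choices). Process one is the continuous unitary evolution; process two is the probabilistic projection that the Everett program seeks to eliminate:

```python
# The two processes of textbook QM, for one qubit (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)
psi = np.array([1.0, 0.0], dtype=complex)        # start in |0>

# Process 1: continuous, deterministic unitary evolution, U = exp(-i H t)
H = np.array([[0, 1], [1, 0]], dtype=complex)    # toy Hamiltonian (sigma_x)
vals, vecs = np.linalg.eigh(H)
U = vecs @ np.diag(np.exp(-1j * vals * (np.pi / 4))) @ vecs.conj().T
psi = U @ psi                                    # smooth wave-function evolution

# Process 2: measurement in the computational basis -- a discontinuous,
# probabilistic jump ("collapse") to an eigenstate with Born-rule weights
probs = np.abs(psi) ** 2
probs /= probs.sum()
outcome = rng.choice(2, p=probs)
psi = np.zeros(2, dtype=complex)
psi[outcome] = 1.0
print("outcome:", outcome, "| post-measurement state:", psi)
```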

My common sense objection to the interpretation has always been based on the roots of the theory in scientific experimentation. QM was formulated to explain the phenomena we observe in the microscopic realm, including, obviously, the outcomes of measurements! It seems perverse to drop the process describing measurement.

Why then is the interpretation fairly popular? It is because the collapse process seems mysterious and “unphysical”, while the evolution of quantum systems in the absence of collapse is mathematically well behaved and, despite the many new complications and nuances, remains closer in spirit to the traditional dynamics of “matter in motion”.

I don’t know for certain whether Bertrand Russell ever wrote about Many-Worlds (it’s doubtful given the chronology – he died in 1970 and Everett’s 1957 work wasn’t widely discussed until the 1970s), but he made a careful philosophical study of physics (and all contemporary science) and I think his work helps clarify my objection to the interpretation.

Recall that Russell’s project was to analyze the data and methods of science and how they relate to human experience. He described how the realms of physical theory and experience can be brought closer together in an ontology of events and their causal relations.

Here are some quotes from Russell’s Human Knowledge (which I’m reading now, inspired by Carey Carlson’s references to it in his book):

Here’s the quote that first made me think of Many-Worlds: “Mathematical physics contains such a superstructure of theory that its basis in observation is obscured.” (p.41)

We need to remember: “…a datum for physics is something abstracted from a system of correlated psychological data.” (p.59)

And: “There is here a peculiarity: physics never mentions percepts [experiences] except when it speaks of empirical verification of laws; but if its laws are not concerned with percepts, how can percepts verify them?” (p.219)

The answer is that it doesn’t make sense to see the entities described in physics as something disjoint with the events of first-person experience. Science can be seen as describing a causal network which connects up in consistent ways with experiential events (think of how the human experiences of telescopic images provide the raw material and ongoing testing for astronomical and cosmological theories). Therefore, reality is best viewed as a web of events consisting of the directly experienced and the indirectly inferred, the latter of which are the usual target of physical theories.

In quantum physics, the wave function is derived as a description of reality which connects consistently with experiential observations. It has this in common with other theories of physics. Of course with quantum physics there is always a twist. The twist is that, unlike the entities of other physical theories, the wave function can itself only be viewed as a fully objective physical entity if we pretend that the measurement events don’t exist! Here we are unavoidably given a choice to elevate the reality of the directly experienced events (measurements) or the inferred physical entity (the wave function). The events must take priority since they are the part of reality which is not known through inference. In fact quantum measurement events are the best candidates to be the raw material for a consistent ontology of the concrete world.

Given all this, I think the best interpretation of quantum mechanics is a version of the relational or perspectivist approaches. In this interpretation, a relational network of measurement events (or interactions) constitutes the concrete world. All quantum systems interact (perform measurements upon each other) with no ontological distinction between the macroscopic and microscopic realms (or the human and non-human). The wave function in this interpretation describes a system’s propensities for interaction outcomes from the perspective of a particular measuring system.

Quantum Foundations Series (some highlights from posts with the Quantum Physics label):

The Duality at the Heart of Physics
I discuss how the two processes of QM are unavoidably interdependent.

Free Will All the Way Down
A paper describes the relationship between freedom at the human level and indeterminism at the micro-level.

Interpreting Quantum Probability
Is the probabilistic aspect of QM objectively real or epistemological?

The Limit of the Bayesian Interpretation
Why the Bayesian (subjective) interpretation of quantum probability is preferred but is still incomplete.

Wigner’s Friend and Perspectivist Quantum Theory
A superior interpretation of QM by Paul Merriam.

Merriam’s Quantum Relativity
How Merriam addresses the “preferred basis” problem for the perspectivist interpretation.

Tuesday, May 20, 2008

Should a Russellian be a Panpsychist?

Emmett Holman has an interesting article in the latest Journal of Consciousness Studies that’s also very timely in light of the discussion in the prior post. Surveying several approaches to a Russellian theory of mind (RTM, to use his abbreviation), Holman observes that most have been panpsychist, but at least one is presented as a version of physicalism, and some are neutral monist interpretations. Holman takes a look at the landscape to see what the advantages of competing approaches might be. He doesn’t come to a strong conclusion, but thinks that it’s not clear that a panpsychist approach should be preferred.

Setting the stage, Holman outlines the basis for interest in RTM: “The advertising for RTM is that it constitutes just the insight needed to break (what many see as) the current impasse on the mind-body problem.” He explains in particular that RTM is a response to dissatisfaction with “mainstream” physicalism, defined as the stance that “the mental supervenes on the physical as the physical is characterized by physical theory” (emphasis original).

According to RTM, the problem is that physical theory characterizes the entities in its domain strictly extrinsically, in terms of causal, functional and other relations in which they stand to each other and to our experience. Nowhere does physical theory inform us as to the intrinsic nature of the entities; and it can only be in virtue of this intrinsic nature that the entities have the causal powers or dispositional properties they exhibit. The question arises: can we know what this intrinsic nature is?

Well, the Russellian notes that conscious experience acquaints us with intrinsic phenomenal properties. Taking this point further, one can adopt several possible views of the intrinsic nature of basic entities.

1. First, a monistic panpsychism posits that this intrinsic nature is essentially mental or phenomenal, and thus the mental dimension of human experience derives from the inherent mental quality of all nature.

2. A Russellian version of physicalism might say that the mental emerges or otherwise is a consequence of a configuration of physical (defined here minimally as non-mental) intrinsic properties.

3. Finally, a neutral monist would say the mental derives from basic intrinsic properties that themselves are neither mental nor physical.

Looking critically first at the second (physicalist) option, Holman notes the objection that it might be seen to leave an epistemic gap just as large as the conventional mental/physical gap. Well, perhaps not, since deriving mental intrinsic properties from a combination of non-mental intrinsic properties may be seen as superior to trying to get them from non-mental extrinsic ones.

Still, isn’t the panpsychist option superior? Here, however, Holman looks critically at another “gap” faced by this option. Since the idea of an elementary particle having mental states like ours seems ridiculous, advocates of panpsychist theories typically speak of the basic intrinsic properties as “proto-phenomenal” or “proto-experiential”, etc. Elementary entities are sometimes said to have a “low-level” version of experience or mentality, or to have properties that are in some way just analogous to human mentality. As long as we lack an account of how the simpler properties give rise to the more robust ones, a troubling gap persists here.

How about a neutral monism? Holman sketches his own version of how a neutral monist approach to the nature of these intrinsic properties might go. “The relevant fundamental intrinsic property [would be] a determinable property which is itself neither experiential nor non-experiential” (emphasis original). Its determinates can manifest degrees of experientiality along a zero-to-one scale.

Before pursuing this line of thought further, Holman circles back to see if any panpsychist accounts introduced concepts more amenable to coming in degrees than simply the concepts of the phenomenal, the experiential, or the like. One concept he sees which may have promise is the notion of “subjective unity”, which he finds, among other places, in Gregg Rosenberg’s work. Conscious states differ from the rest of nature in that they have (quoting Rosenberg) a “complex but not composite” character. Conscious states can be very rich and diverse (including various sensory modalities) yet they have a special kind of unity which binds them together. They can’t be simply decomposed into parts the way physical systems can be. The components can only exist as part of the unified state.

If this idea has merit, then maybe subjective unity can be “scaled down” gradually or in degrees in a way that appears more intelligible than was the case for the notions of mentality or experience as such. In fact this could perhaps support a neutral monism in the following sense: the most basic entity considered alone has no experiential value. But once multiple entities are considered, they can be participants in a subjective unity with an attendant degree of experientiality.

Holman doesn’t take this past a sketch, and he also considers some other variations very briefly. He concludes that more work needs to be done on Russellian theories and that it is not yet clear which avenue is superior. But he maintains that nothing in his survey shows that panpsychist versions are inherently to be preferred. At the same time, he doesn’t think much of the “physicalist” option either. Holman’s sympathies appear to lie with neutral monism, but he awaits a more fully developed theory to consider here.

Postscript 1: Like many others who have written about this subject, Holman draws his main references to Russell himself from 1927’s Analysis of Matter. I note that Carey Carlson’s book relies much more on 1948’s Human Knowledge, which I have not read. I wonder if Carlson’s much greater emphasis on the importance of viewing RTM through the lens of an event ontology vs. an object ontology can be linked at least in part to this fact. I’ll have to get the book.
{UPDATE 14 June 2008: Having read both books -- the event ontology is firmly in place in both. The basic framework did not change from 1927 to 1948. To pick up the neutral monism from Russell without picking up the event ontology on which it rests is simply to miss too much of the point.}

Postscript 2: Note that Holman’s lone example of the “physicalist” version of RTM is from an older paper by Daniel Stoljar. Stoljar’s position in his more recent book (Ignorance and Imagination) has evolved and he doesn’t specifically champion this version of RTM. His endorsement of physicalism in the book relies on the epistemic argument that we just don’t know enough to say that the non-mental couldn’t be the supervenience base for the mental. The unknown “experience-relevant non-experiential facts” could be intrinsic, extrinsic or both. I have a series of posts on this book under the label Stoljar.

Tuesday, May 13, 2008

Russell and Whitehead Solved the Problem

I was struck by Carey R. Carlson’s opening lines in the preface to his book, The Mind-Body Problem and Its Solution:

“The mind-body problem demands a description of how the mental and physical parts of the world go together to make up the whole. The problem was solved around 1927 by Bertrand Russell and Alfred North Whitehead.”

(1927 saw the publication of Russell’s Analysis of Matter, and was the year of Whitehead’s Gifford lectures which formed the basis of his Process and Reality.)

Since I think the Russellian stance on the mind-body problem is superior to the traditional options of dualism and materialism and also think Whitehead’s speculative process metaphysics was far ahead of its time, I was excited to see this passage and curious as to how well Carlson would back it up over the course of a short book. After reading the book I can say I think he did an excellent job in showing how the ideas from these thinkers can be put together into a compelling argument for a more coherent view of the world: that of a causal network of events which share a character which naturally underpins what we characterize as the mental and physical.

Carlson is an independent thinker and writer from Minneapolis who studied philosophy some years ago and concluded Russell and Whitehead had it right. He says he was perplexed that this seemed unappreciated. While he ended up pursuing a career outside philosophy, he was finally able to put his views on the matter down on paper (he published this book in 2004). He and I corresponded via e-mail recently, and that led to my interest in reading his work.

Both Russell and Whitehead explained why you cannot identify the world with our mathematical descriptions of it: you leave out the intrinsic qualitative character of the world we know via experience. Both philosophers showed, in somewhat different ways, that what we think of as mental events and physical events can both fit into a picture of a causal network, whereas our usual intuition of the world as a spatial container holding static objects or substance won’t work – whether one posits one kind of object or two.

Carlson’s outstanding contribution is to carefully describe what this ontology of causal relations can do: it can describe space-time and all that’s in it while also accommodating mental events. He then shows how scientific theory really is an elucidation of a causal web and how it must actually fit into our network of experiences in order to be formulated. This leads to the final postulate that all nature has a sentient character, and that this best explains how mind and world are unified. While I was already sold on this idea, I think Carlson’s book may convince other readers of the merits of a panexperientialist solution to the mind-body problem inspired by sound philosophy of science.

All in all, this book does credit to its ambitious title. Along the way, it is also a fine exposition of some of the work of two of our greatest twentieth century thinkers.

My chapter-by-chapter notes and comments follow below.

---------------------------------------------------------------------------------
In the first chapter, he lays out the problem. Like others, he locates the “hard” problem for materialism in the raw feeling of being or sentience. He speaks of “the sensorium” of unified experience. If materialism or physicalism sees the mathematically characterized world as all that exists, then it will neither include nor explain sentience – that which is given in phenomenal experience. The mind-body problem could be called the “phenomenology-physics problem”, he says (p.13).

Chapter 2 deepens this discussion, taking a closer look at the history of physics. Even with the advent of modern physics, taking quantum field theory for example, there seems to be a left-over tendency to conceive the world as physical matter or energy existing in a container of physical space. There certainly is no role for mental features in such a worldview.

After touching very briefly on some historical philosophical perspectives in Chapter 3, Carlson starts to move toward the solution in Chapter 4 with a review of Russell’s analysis of the world in terms of structure and relations (and specifically causal structure and relations). Here, Carlson pulls relevant passages from Russell’s Human Knowledge (a book I had not read – I had read the Analysis of Matter). Structure is defined as a pattern of relations. Relations relate terms of a class (which is just any definite set of entities). Russell discusses a variety of types of relations: dyadic and higher-order, symmetric/asymmetric, transitive/intransitive. Relata, relations and structure together form a complex, which is a whole or a fact. This approach can be seen as applying to pure mathematics, and it can also be seen as characterizing the phenomena of the world, as presented to us. In physics, we apply mathematics to real world phenomena. But one can’t identify the phenomena with the mathematical description. Here’s a quote from Russell: “There is, however, a very definite limit to the process of turning physics into logic and mathematics; it is set by the fact that physics is an empirical science, depending for its credibility upon relations to our perceptive experiences.”
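As a toy illustration of this vocabulary (my own encoding, not anything drawn from Russell or Carlson), a structure can be represented as a class of terms plus a pattern of relations over them, with properties like symmetry and transitivity checked mechanically:

```python
# Toy encoding of Russell's vocabulary: a "class" of relata plus a
# dyadic relation over them constitute a simple structure.
terms = {"a", "b", "c"}
precedes = {("a", "b"), ("b", "c"), ("a", "c")}  # a dyadic relation

def is_symmetric(rel):
    """A relation is symmetric if every pair holds in both directions."""
    return all((y, x) in rel for (x, y) in rel)

def is_transitive(rel):
    """A relation is transitive if chained pairs imply the direct pair."""
    return all((x, z) in rel
               for (x, y) in rel
               for (w, z) in rel
               if y == w)

print(is_symmetric(precedes))   # False -- 'precedes' runs one way only
print(is_transitive(precedes))  # True -- before-ness chains together
```

The point is only that the mathematical content of a structure is exhausted by which relations hold among which terms -- which is exactly why the description cannot be identified with the phenomena it describes.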

While Russell’s philosophy of science is obviously not unknown, commentators I have read don’t always stress the role of relations in his analysis. The fact that the world is made of relations (rather than the static objects or “stuff” of our intuitions) is a key point for Carlson, which will also provide the link to the work of Whitehead (Carlson notes with interest the history here: while Russell and Whitehead were famous for their collaboration on the Principia Mathematica, that seemed to be the end of their formal work together – yet the “fingerprints” of their shared work on the Principia can be seen in their similar later critiques of the role of logic and mathematics in science).

In Chapter 5, Carlson argues that the most basic feature of physics, space-time, can be analyzed in terms of causal structure. Here he gives credit to the role the theory of special relativity played in showing the way forward: in it we can see how causal relations could be responsible for both spatial and temporal order. Since causal relations are relations in time, it is the temporal ordering of events which is fundamental, while the spatial is derived.

Since “real” causation is an asymmetric relation, Carlson shows how we can use arrows to graphically illustrate causal relations. One can see how “space-like” and “time-like” relations between events arise from a causal structure. He then shows how a “particle” could be defined as a recurring pattern of activity -- a certain geometric pattern repeated along a time-like route in a larger causal lattice. One of Carlson’s interesting ideas is that since different numbers of arrows could connect two given events, one gets a notion of relative temporal frequency. This allows for the introduction of the concept of energy into a causal network of events.
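Here is a rough sketch of the arrow-diagram idea (a toy construction of my own, not code from the book): treat events as nodes of a directed graph, call a pair of events time-like when a directed path connects them and space-like otherwise, and let the count of arrows along a route stand in for the notion of relative temporal frequency:

```python
from collections import deque

# Toy causal lattice: events are nodes, causal relations are arrows.
causal_arrows = {
    "e1": ["e2", "e3"],  # e1 causally precedes e2 and e3
    "e2": ["e4"],
    "e3": ["e4"],
    "e4": [],
}

def arrow_steps(src, dst, graph):
    """Breadth-first search: fewest arrows along a directed path from
    src to dst, or None if no such (time-like) route exists."""
    queue, seen = deque([(src, 0)]), {src}
    while queue:
        node, steps = queue.popleft()
        if node == dst:
            return steps
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + 1))
    return None

def relation(x, y, graph):
    """Classify a pair of events by the existence of a directed path."""
    if arrow_steps(x, y, graph) is not None or arrow_steps(y, x, graph) is not None:
        return "time-like"
    return "space-like"  # neither event can influence the other

print(relation("e1", "e4", causal_arrows))     # time-like
print(relation("e2", "e3", causal_arrows))     # space-like
print(arrow_steps("e1", "e4", causal_arrows))  # 2 -- a crude arrow count
```

On this picture a “particle” would be a sub-pattern that recurs as one walks such routes, and more recurrences per stretch of route means a higher frequency -- which is where the analogy to energy gets its foothold.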

With these examples, we can see how the notion of the physical world (traditionally seen as a spatial container plus stuff inside it) can be replaced by a network of time-like causal relations alone. This is consistent with a basic notion of what science does: it looks at the world and analyzes what comes before what. Carlson notes that progress in physics eventually arrived at quantum events – discrete events which “do not admit of further before-and-after analysis” (p.71). These elementary events and the causal connections among them constitute the world. So the physical world is not space plus what it contains, but rather time-like causal relations only. And this picture is one which allows for the solution to the mind-body problem.

In Chapter 6, Carlson argues that mental events can be located in the causal network in the same way as “physical” events. Our experience can be seen as participation in a sequence of time-ordered mental events. So, do mental and physical events “interact”? Interactionist dualism is often seen as an incoherent perspective on the mind-body problem. But we’re not talking about static substances. Russell reminds us that mental events are unavoidably part of the causal network which describes the world, since the structure of physics has to empirically connect to the sensations of the physicist. Mental and physical events are interspersed in the causal network. Now, our human experiences seem very different from the entities of micro-physics, but when you break it all down, both can be situated in a causal network. “The distinction between mental and physical now hinges upon the assumption that mental events are characterized by sensory qualities, while physical events are not” (p.87).

Carlson works hard in this chapter to break down our intuitions, and it’s difficult to do the discussion justice in this summary. But I think he succeeds in showing how an event ontology can situate the mental as well as the physical in one world-web. Some questions arise here: what are the base-level indivisible quantum events connected by the arrows of causal relations, such that they can form the basis for everything? What is the quantum event’s intrinsic nature?

Heading toward an answer to this is the discussion in Chapter 7. While in Chapter 6 Carlson tries to show how mental events can be situated in a world we think of as physical, in Chapter 7 he (again leveraging Russell) shows that we really do need to invert this. All physical theorizing and experimentation takes place within the context of mental events. This shows there really is no ground for a deep ontological divide between the two kinds of events. Science gives us nothing but causal structure. We know some causal events have a mental character, and the physical events are only known via the mental.

Russell “stops” at this point. He was agnostic about whether the intrinsic character of the physical (non-directly experienced) events was also of mental or sentient character in any way. He said we just don’t know. Phenomenal character is the only kind of intrinsic quality we’re familiar with, but we can’t know the intrinsic character of the rest. Given this stance, some have cast Russell’s theory as a form of neutral monism.

But Carlson wants to go further, so he states that his final chapter (no. 8) “belongs to Whitehead”. Whitehead’s critique of materialism was very similar to Russell’s (see, for example, the first several chapters of his Science and the Modern World). But in Process and Reality, he was much more ambitious. Among other things in that work, he goes ahead and postulates that all events have a sentient character. Carlson endorses the idea that it is both fitting and simpler to posit this, rather than to assume there is some other unknown form of intrinsic content. Further, under this assumption, the world just makes more sense in terms of unifying the human experience with the rest of the world. In Whitehead’s theory, the causal structure of all events is grounded and ordered in the same way.

Carlson does a good job in just a few pages of summarizing some of Whitehead’s ideas. This is difficult because of all of Whitehead’s invented terminology and his hard-to-penetrate prose “style”. Whitehead’s treatment of causation is richer than what Carlson has discussed so far, including the transition from subjective to objective poles for each event, and the inclusion of purpose and self-determination as a causal factor in setting the course of events. He also discusses Whitehead’s treatment of the unactualized possibilities (eternal objects) which are presupposed by the view that the temporal process establishes the contingent facts of the world. Finally, he discusses Whitehead’s theory of how events (occasions) are organized into enduring societies and how the parts work to satisfy the determination of the whole -- a template for describing the human mind.

I don’t have very much critical to say about the book except as it relates to my thinking about what might be added to expand the discussion. Events and causation are the primitives in the theory. Until the final chapter on Whitehead, how causation works is not described. We can probably improve on Whitehead in this area (Gregg Rosenberg’s theory would be one possibility to consider). Related to this would be drawing more connections to an interpretation of quantum mechanics, to better describe what is going on in an elementary quantum event and what the status is of the wavefunction associated with a quantum system. Also, we need to someday improve on Whitehead with regard to the combination or composition problem – explaining how micro-events clump up into macro-events like the ones making up human experience.


Thursday, May 01, 2008

Cephalopod Consciousness Update

An article entitled “Cephalopod consciousness: Behavioral evidence” by Jennifer A. Mather has appeared in the latest issue of the journal Consciousness and Cognition. In it, she looks at a variety of studies to assess the evidence that cephalopods have a form of primary (as opposed to higher-order) consciousness. This is intended as a follow-up on the previous paper on animal consciousness by Seth, Baars and Edelman published in the journal in 2005 (looking at indications of consciousness from the perspective of Bernard Baars’ “global workspace theory”). I had briefly blogged about this article and others on animal consciousness in old posts here and here.

Cephalopods are particularly interesting. First, they’re just cool. More importantly, the convergent evolution of mind in a lineage so distant from the human tells us something, I think, about how the building blocks of consciousness are embedded deeply in the fabric of the natural world.

Setting aside higher-order reflective self-consciousness and language-possession, humans likely share with some animals a primary or core consciousness tied to their active engagement with the world. With vertebrates, the symmetries in neurological structure are one aid to assessing the potential for such a shared trait. With cephalopods this is more difficult, but certain brain anatomy-behavior linkages can be explored, and beyond that the full range of behaviors can be surveyed for clues. Mather conducts a meta-analysis of a long list of cephalopod studies to search for this evidence. She finds several areas which offer good support for the existence of primary consciousness in these animals.

With regard to brain anatomy-behavior linkages, she finds similarities in lateralization of functions (which may be linked to consciousness): this is demonstrated in the eye use of octopi in particular, but another example is found in the skin displays of squid.

Sleep pattern is another clue to consciousness in animals, and there are similarities found between cephalopods and mammals here.

The developmental aspect of mind is explored: cuttlefish develop improved memory as their brains develop, with some parallel to the development of young mammals and human infants.

Learning is another general area explored in many studies and Mather looks at these in some depth. Parallels to “higher” vertebrates are found. Another section explores the sense of self and self-monitoring. Also of interest is that octopi have demonstrated personality differences, and squid have shown intriguing signs of primitive language capability in their skin displays.

All this would suggest cephalopods are good candidates for animals which possess primary consciousness.

UPDATE: I see there was good blog commentary on an earlier online version of this work here and here.

Monday, April 28, 2008

Common Sense: Both Wrong and Right

When it comes to our “folk” intuitions about the conscious self and free will, they can be wrong on the surface but still correct on a deeper level. I was reading Peter’s post on Conscious Entities about a neuroscientific paper on decision-making and conscious awareness (see additional discussion here). As in Libet’s work, the study’s authors appear to show that brain activity indicative of decision-making precedes the subject’s awareness of making the choice. This again suggests the folk intuition that free will takes place at the level of reflective conscious awareness is flawed.

I concur with most of the commenters who expressed the view that this outcome has little bearing on the question of determinism and freedom at the metaphysical level. The human brain/body system is very complex, and our higher-order introspective awareness is a fragile construct embedded in this much larger context. And a simplistic division of the mind into the conscious and the unconscious doesn’t do justice to the gradations of awareness.

When it comes to free will, our most fundamental theory of the natural world is indeterministic -- which makes me wonder why we are even still debating determinism. There is no principle of quantum physics that states that indeterminism magically vanishes at macroscopic scales (the fact that we don’t observe macroscopic superpositions is not evidence that the large-scale world is described by classical physics). We may not be free in the way we think, but I think we are correct in viewing the future as open and believing that the sequence of natural events in which we participate is undetermined.

But does this imply that events are “just random”? Free choice on the part of a participating system would look like randomness from a third-person perspective. For a formal argument that there is a linkage between microscopic and human freedom, see this old post on the “Free-Will Theorem”.

[Update 4 June 2008: I changed the title of the post to the one I meant to have in the first place, swapping the order of 'right' and 'wrong']