Why is reality mysterious? Why should difficult questions persist for so long despite the successes of physical science?
An answer to these meta-questions may lie in the concept of a phase transition. As discussed in prior posts (like this recent one), a school of quantum gravity research has arisen which explores the idea that the visible cosmos (of matter bound in space-time geometry) arises at lower energies from a more fundamental quantum world. This more fundamental level is usually characterized by a network of quantum systems, subject to a directional causal arrow, but otherwise connected in a highly non-local fashion (little or no recognizable spatial geometry).
This model is inspired by the myriad examples of phase transitions observed in nature, and particularly those in the field of condensed matter physics (which utilizes the toolkit of quantum field theory to describe the phenomena). Superconductors, superfluids, etc. display remarkable emergent features which arise under certain pressure/temperature conditions.
Picture our familiar physical cosmos as a portion of reality which condensed into a “classical” phase, but retained subtleties in its nature which reflected its pre-transition roots. If this analogy works, then it would explain our situation: while classical explanations usually work well, some phenomena defy such analysis because their foundations go deeper. This could be the case, for instance, for the arrow of time and for conscious experience itself.
Wednesday, December 23, 2009
Thursday, December 10, 2009
2 Ontology Papers
1. I discovered on the web the writings of Ian Thompson. He is a physicist by career but someone who is also well versed in philosophy and has been setting out his own stance on things metaphysical. In “Power and Substance” he says some things I thought made a lot of sense. He likes the idea that dispositional properties (or powers) are ontologically essential. He sets out an argument that dispositions could be taken to constitute a substance rather than as properties. I am intrigued by this argument, but agnostic about it on first reading. What I really liked (hopefully not just because I agree) is his take on how a dispositions-based ontology comports with the picture of quantum physics (section 6).
Friday, December 04, 2009
This Just Doesn’t Ring True
Karen Armstrong is a prolific writer about religion, and stakes out a conciliatory position in contemporary debates between "new" atheists and theists. I have not read any of her books and I don’t claim any expertise on the subject matter! But nonetheless her take on religious history as presented in this op-ed piece bothered me. Perhaps someone more familiar with her work and/or with history can set me straight.
Here are excerpts:
In the past, many of the most influential Jewish, Christian and Muslim thinkers understood that what we call "God" is merely a symbol that points beyond itself to an indescribable transcendence…
Monday, November 30, 2009
String Theorist Turns to Emergent Gravity Approach
Inspired by phase transitions displayed in condensed matter physics, Petr Hořava has constructed a model in which the time dimension is decoupled from space at high energies/short distances, while the space-time characteristic of relativity (and Lorentz invariance) emerges at low energies/long distances. The model, a quantum field theory, is renormalizable in a way GR itself is not. The paper, “Quantum Gravity at a Lifshitz Point,” sets out the theory.
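For orientation, here is the anisotropic “Lifshitz” scaling that gives the model its name (a standard textbook-style summary of Hořava’s proposal, not a quotation from the paper):

```latex
\mathbf{x} \to b\,\mathbf{x}, \qquad t \to b^{z}\,t,
\qquad z = 3 \ \text{at short distances}, \qquad z \to 1 \ \text{at long distances}
```

Treating time and space with different scaling dimensions at high energies softens the graviton propagator’s ultraviolet behavior, which is what opens the door to power-counting renormalizability; recovering $z = 1$ at low energies restores the relativistic scaling of space and time.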
Friday, November 20, 2009
Re-defining Where We Live
(A short rant for a Friday)
If you’re like me, you learned the following story.
The universe, or cosmos, consists of a four-dimensional space-time continuum which contains matter and energy. It all began with a big bang singularity: time as well as space started then, so it doesn’t make sense to ask what happened “before”. The universe probably extends beyond what is observable, but the same physical laws prevail everywhere. Nothing exists outside the universe.
Every statement in that paragraph is likely wrong.
Tuesday, November 10, 2009
Spinoza: Are Finite Things Contingent After All?
I’ve never studied Spinoza in any depth, so have been ignorant of the many subtleties/ambiguities in his work and the competing interpretations put forward by scholars on many points. I enjoyed reading two articles by Samuel Newlands: first, his SEP article “Spinoza’s Modal Metaphysics”, and the recent preprint, “Another Kind of Spinozistic Monism.” The latter paper argues that the various forms of metaphysical dependence employed by Spinoza (causation, inherence in, following from, etc.) are each connected to a broad notion of conceptual dependence, which is the key to understanding their role. This paper, interesting in its own right, helped me understand Newlands’ work in the modal metaphysics article, which is directly about the aspect of Spinoza I’m most interested in at the moment.
Spinoza is widely seen as a necessitarian (everything which exists does so necessarily), and for good reason: he has passages which explicitly affirm this. But evidently if you dig deeper there is a subtlety with regard to the status of “finite modes”, the ontological category which would include everyday concrete things. (Please note the use of “modes” here is unrelated to the “modal” dimensions of necessity and contingency). The one substance (God) and its infinite modes are absolutely necessary, but finite modes don’t follow from God in the same way as infinite modes. On the one hand, Spinoza affirms that God created everything, but elsewhere he specifies that finite modes follow only from other finite modes. What does this mean for the modal status of finite things (and how can he affirm both of these statements?)
Monday, October 26, 2009
Logicomix!
I loved Logicomix, the graphic novel of ideas written by Apostolos Doxiadis and Christos H. Papadimitriou and illustrated by Alecos Papadatos and Annie Di Donna (here are reviews from the NYT and Guardian). It dramatizes the quest of Bertrand Russell and some of his contemporaries to build a certain basis for knowledge, beginning with the project of providing a complete and consistent logical foundation for mathematics (or at least arithmetic). It was a quest tinged with tragic overtones, as the effort itself led to the uncovering of its own impossibility (culminating with Gödel’s theorems). The authors play up a second kind of dramatic theme as well, dwelling on the specter of madness as it haunted figures in turn-of-the-20th-century mathematics: the book depicts Cantor’s insanity, Frege’s paranoia, and Russell’s fear of the inheritable madness in his family.
I guess I rate Logicomix highly in part just because it was such a nice surprise that it exists! I’m not sure where it would rank if there were 10 graphic novels dramatizing historical intellectual or scientific quests. If I were to offer criticism, I would say first that it has an excess of framing devices: the story is delivered by Russell via reminiscences at a 1939 lecture; then, an outer frame consists of intrusions by the authors themselves as they debate how to present the story, and then at the end digress into a discussion of Aeschylus’ Oresteia (!). Also, I think the madness theme is too forced (e.g. by having Russell seek out Cantor without knowing he was committed to an asylum – Russell never met him).
The ideas themselves are presented accurately, I think, although not explored in great depth: the focus is more on storytelling. However, a key conclusion is delivered correctly IMO: reality outruns its abstract description (the map should not be mistaken for the territory). Both Russell and his Principia Mathematica collaborator Alfred North Whitehead would separately critique metaphysical materialism in the 1920s, developing this theme.
Tuesday, October 20, 2009
GPPC Calendar: Update
Work is underway to get the GPPC website updated (a new webmaster was needed); in the meantime, below is some information for the two fall 2009 conferences.
The Greater Philadelphia Philosophy Consortium is a cooperative effort of 15 area university philosophy departments, which puts on 3 topical philosophy conferences, an annual undergraduate philosophy conference, and a public issues forum on a topic of interest to philosophers and folks at large. All conferences are free and open to the public.
1. "Instrumental Reasoning: A Conversation with John Broome"
Sunday, November 1, 2009, 1pm to 5pm
Clayton Hall, University of Delaware, Newark, DE
This conference, organized by Mark Greene at Delaware, has its own home page here.
2. GPPC Symposium, “The Medical Humanities”
Saturday, November 14, 2009, 1:00 to 5:30 P.M., followed immediately by a reception until 6:00 P.M.
Temple University School of Medicine, 3500 N. Broad Street, Philadelphia. Room 105 & Stone Commons
SPEAKERS:
Sherwin Nuland, M.D., Yale University,
“Half a Millennium of Artists Portraying
Diseases and Healers: 1500-2000”
Hilde Lindemann, Ph.D., Michigan State University,
“Caring and Coercion: What Counts as Autonomy
at the End of Life?”
Rebecca Kukla, Ph.D., University of South Florida,
“Paper is Complete – Author TBD: The Death of the Author in Contemporary Biomedical Research”
Scott Burris, J.D., Temple University,
“When ‘Ethics’ Becomes ‘Law’”
CHAIRS:
Miriam Solomon, Temple University (contact for more info.)
W. Mark Goodwin, Rowan University
Monday, October 12, 2009
3 Links: Math and Physics
[UPDATE 13 October 2009: Edited for typos and clarity]
First, since I’m on the record as a skeptic regarding the existence of actual or concrete infinities, I’m on the lookout for discussion of this topic. Here’s a talk given by mathematician Edward Nelson (hat tip: Not Even Wrong). In it, he expresses deep skepticism not only (in passing) regarding the idea of an actual physical infinity, but also (very controversially) on the concept as used in mathematics itself. I wouldn’t think skepticism about the former need have anything to do with the latter (and I’m certainly no mathematician), but I thought this was interesting reading.
Second, I enjoyed reading this (lengthy) overview of quantum gravity research by R. P. Woodard. The main focus of the paper is a “pedagogical explanation” of just why the techniques used in creating quantum versions of classical theories didn’t work when it came to general relativity (I found this helpful even if one can't follow all the formalisms). There is also a short section on the state of current research. Woodard makes this comment in the section discussing Causal Dynamical Triangulations (p.67): “…exact calculations are likely to be unattainable for quantum gravity, so the most fruitful way of questioning perturbation theory [i.e. the QFT method which is also the basis of original string theory – Steve] is to develop better approximation techniques.” The idea of finding a theory of everything (TOE) consisting of a set of equations with exact solutions looks like it is not going to happen. Finding a well-motivated approximate description of the ultra-high energy regime from which GR and QFT matter fields co-emerge at lower energies is probably the way things will go (I fearlessly predict).
Lastly, here’s a link which is just plain cool. Experimental physicists have been trying to place larger and larger molecules in quantum superposition: here’s a proposal for designing an experiment which could achieve this for a virus. Hat tip goes to the Physics and Cake blog.
Friday, October 02, 2009
Power Holism
This makes a nice follow up to my recent reading of C.B. Martin’s book: I found this paper, “Puzzling Powers: The Problem of Fit” by Neil E. Williams. In it, Williams identifies and seeks to address a trouble spot encountered by Martin and other advocates of a powers/dispositions-based ontology.
To start, Williams describes three key features of the ontology. First: the powers are intrinsic properties of their bearers. They don’t need external or relational connections for support. Second, the manifestations which they are capable of producing are essential features of the power: those potential manifestations make a power what it is. Third, the actual manifestations occur as a result of reciprocity. Martin in particular stressed the mutual nature of powers working together to produce manifestations, noting different pairings or combinations of powers will lead to different outcomes.
Given these three features, Williams sees “a problem of fit.” He says: “Stated briefly, the problem is that powers have to work together when they produce manifestations (reciprocity), but as they are not relations (intrinsicality), and they cannot change with the circumstances (essentialism), the fact that they are causally harmonious is without explanation.” Powers fit together to produce mutual manifestation, but the fact that they fit is not accounted for by the three features they possess. He uses a jigsaw puzzle analogy to suggest that, as it stands, there’s no reason to expect a fit to occur without adding more to the story.
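To make the tension vivid, here is a toy sketch (entirely my own illustration, not Williams’ formalism; the classes and the solubility/solvency example are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Power:
    """Feature 1 (intrinsicality): a power is a property of its bearer,
    needing no external relations. Feature 2 (essentialism): it is
    individuated by a fixed map from reciprocal partners to the
    manifestations it is 'for'."""
    name: str
    manifestations: tuple  # immutable pairs of (partner_name, outcome)

    def manifest_with(self, partner):
        """Feature 3 (reciprocity): manifestation requires a partner power."""
        for partner_name, outcome in self.manifestations:
            if partner_name == partner.name:
                return outcome
        return None  # no mutual manifestation: the powers fail to "fit"

# Two powers whose essential maps happen to mesh -- but nothing *in*
# either power, taken intrinsically, explains why they mesh. That
# unexplained coordination is the "problem of fit".
solubility = Power("solubility", (("solvency", "dissolving"),))
solvency = Power("solvency", (("solubility", "dissolving"),))

print(solubility.manifest_with(solvency))  # -> dissolving
```

The point of the sketch is that the meshing of the two manifestation maps is simply stipulated in the data; each `Power` is self-contained, so the harmony between them sits outside anything the ontology has so far posited.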
So, what to do? Williams doesn’t think it’s advantageous to drop one of the three features he began with. With regard to essentialism, he does mention an alternate idea that the potential manifestations essential to powers are not determinate but are instead “TBD” (to be determined) via hooking up with other powers. But he still sees a gap with this idea: what makes the particular determinate manifestation finally occur?
So, instead, Williams puts forward a solution: “power holism”. The nature of powers is determined holistically: “the specific, determinate nature of each power (that is, the set of manifestations a power is for and the precise partners required for those manifestations) depends on the specific, determinate nature of other powers with which it is arranged in a system of powers.”
He notes we also need to discuss the bigger picture of “…what kind of world allows for or provides for the fit that power holism bestows.” How do the powers get their holism on, in other words?
He looks at three ways one might address this. First, we could just posit holistic coordination as a brute addition to ontology. But this isn’t very satisfying. Second, he suggests one might look to a platonic account: collaboration between powers takes place in the platonic realm. Third, one might retain naturalism, but suggest a form of monism which could support the coordinated fit. For instance, if all powers ultimately ontologically depended on the prior existence of the whole world (a la Jonathan Schaffer), then this shared basis could explain the harmonious fit. Williams is sympathetic to this option but says any of the choices might be worthy of consideration.
A very good paper. I liked the appeal to holism and (although Williams doesn’t use this terminology) non-local connections. In my opinion, though, in order to really nail it down, some indeterminism needs to be added to the model. Fit (correlations) can be explained by the non-local connections, but powers need to be seen as capable of more than one outcome per partner—an irreducible indeterminism only resolved when the manifestation (event) occurs. This would then nicely comport with quantum mechanics.
(I see Williams also has a paper co-authored with Andrea Borghini advocating a single-world modal actualism, explained using a powers ontology. I’ll have to check that out – my last post on that topic was here).
Friday, September 25, 2009
First Gunk, now Junk: Infinite Chains of Metaphysical Explanation
I recently read a few philosophy papers which share a common theme. They advocate the idea that there may be no foundational or basic metaphysical level of reality (whether monistic or pluralistic), and that therefore one should (or at least can) embrace infinite chains of metaphysical explanation.
Einar Duenger Bohn, in the draft paper “Must There Be a Top Level?” notes that many philosophers have discussed the conceivability of “gunk”, which is infinitely divisible stuff (a world is gunky if everything in it has, mereologically speaking, proper parts). Bohn thinks that “junk” is also conceivable, where everything is a proper part of something else.
The term junk is drawn from the paper “Monism: The Priority of the Whole” by Jonathan Schaffer (here’s a post which discussed an earlier draft of this paper). Schaffer argued against the conceivability of junk as part of a larger set of arguments in favor of the monistic whole as the foundational entity in a world. (To fill out the glossary, Bohn introduces the adjective “hunky” to describe a world both gunky and junky).
Schaffer says that in discussing possible worlds, a “junky world” makes no sense, since “world” refers to a single entity. Junk has no upward “cut-off” point to contain it within a world. Bohn objects that there is no need to constrain the term world as singular: maybe it can refer to a set or some other plural entity, which allows for junk.
But is junk really conceivable? Bohn offers thought experiments which put junk on a par with gunk with regard to conceivability (imagine each atom in a world is itself a world: now take this idea both upward and downward). What’s wrong with a “hunky” world extending without foundation in both directions?
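For the record, the three definitions can be stated compactly in mereological notation (my own gloss, writing $x < y$ for “$x$ is a proper part of $y$”, not a formulation quoted from Bohn’s paper):

```latex
\text{Gunky:} \quad \forall x\,\exists y\,(y < x) \qquad \text{(everything has a proper part)} \\
\text{Junky:} \quad \forall x\,\exists y\,(x < y) \qquad \text{(everything is a proper part of something)} \\
\text{Hunky:} \quad \text{both gunky and junky}
```

Stated this way, the symmetry between the two notions is hard to miss: junkiness is just gunkiness with the parthood relation read in the other direction.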
Schaffer’s paper is also the target of Matteo Morganti’s just published “Ontological Priority, Fundamentality and Monism” (hat tip; no draft available online that I can see). Morganti sees no compelling argument for a foundational level of reality vs. the alternative of “metaphysical infinitism”. He also takes issue with Ross Cameron’s conclusion that we should postulate a fundamental level for methodological reasons if we are to have any satisfying metaphysical explanations (I discussed the relevant Cameron paper in this post).
A third paper, by Francesco Orilia, makes an analogous argument in a different philosophical “thread”: he asks whether facts (or states of affairs), which bind an object to its attribute (or relate multiple objects), are basic ontological entities (as in Armstrong), or whether they give rise to an infinite regress of binding relations (Bradley’s regress). Orilia thinks we should accept “fact infinitism” as a live option. (For a very deep dive on this see Bill Vallicella's thoughts and dialogue with Orilia here and here).
So is there a foundational entity or entities or not? Do you really have an explanation when you invoke an infinite chain? There’s a lot to digest here, and I plan on rereading this set of papers and others. My intuition has always been that there must be a fundamental level of reality, but an argument is needed here, not an intuition. I think we can reject gunky/junky worlds. I will go ahead and sketch my speculative argument below (a previous gunk-inspired post with a good comment thread is here).
First, though, let me note that I’m very happy to continue to find members of a new generation of philosophers who are taking up “meaty” metaphysics as a primary focus. Philosophers like Schaffer and Cameron are doing serious and innovative metaphysics, and are provoking responses from other young philosophers (definition of young = anyone younger than me): great stuff.
My tentative position is this (what follows assumes modal realism). When we employ our (remarkable) ability to conceive of infinities we are in danger of making a particular error. As background, I endorse conceivability as a guide to possibility, and specifically think that our concept of what is logically possible maps to what is metaphysically possible. I think this total space of metaphysical possibilities is infinite, and this is what grounds our concept of the infinite. Now, philosophers usually refer to this space of possibilities specifically as the space of possible worlds (I think this is rooted in methodological usefulness). However, our attempt to conceive of what is possible in a particular world may actually import content which does not fit in one world. We need to be careful about how we define a world.
Consider the actual world. Following David Lewis in one respect (but not in others), I think the best sense of “actual” to use is as an indexical. If I take the actual world to be the causally connected region centered on my point of view, then, based on a posteriori reasons, the actual world should arguably be considered finite: quantum physics as applied to space-time implies no infinite divisibility, and causal connections only extend so far. If all possible worlds are likewise defined as centered causally connected patches, I hypothesize that all “worlds” should be considered likewise finite.
Let me pause here for a moment. Many (most?) thinkers assume that a world is an isolated, bounded spatio-temporal object of arbitrary size. But I assert there actually isn’t a good reason to think there is some clean definition of a world boundary beyond this concept of a causally connected region or patch. (Please note I’m not saying physicists/cosmologists need to adopt my definition of a world or universe; this is just to help make sense of a philosophical problem.) If one asserts a world contains a concrete infinity, then one has exceeded the boundary for what can fit in a world. This is the error.
Now, let’s go back and consider the question of whether there is a foundational “level” of reality using this model of an infinite space of metaphysical possibilities broken down into finite worlds. When considering a world, its “parts” have a good claim for being basic given indivisibility. Since a world can have an irregular and changing boundary as events move into or out of causal contact with the center, the “whole” of the world seems to have less of a claim to be foundational.
On the other hand, let’s consider all of reality, i.e., the total space of metaphysical possibilities (all possible worlds). In this case, there exists an infinite number of ways to parse it, and the parts no longer seem to have as good a grip on being basic. Here it is the total space which is the ultimate source of the reality individual worlds take part in. So in the bigger picture, the monistic whole takes priority, and has the best overall claim to be the foundational entity.
Einar Duenger Bohn, in the draft paper “Must There Be a Top Level?” notes that many philosophers have discussed the conceivability of “gunk”, which is infinitely divisible stuff (a world is gunky if everything in it has, mereologically speaking, proper parts). Bohn thinks that “junk” is also conceivable, where everything is a proper part of something else.
The term junk is drawn from the paper “Monism: The Priority of the Whole” by Jonathan Schaffer (here’s a post which discussed an earlier draft of this paper). Schaffer argued against the conceivability of junk as part of a larger set of arguments in favor of the monistic whole as the foundational entity in a world. (To fill out the glossary, Bohn introduces the adjective “hunky” to describe a world both gunky and junky).
Schaffer says that in discussing possible worlds, a “junky world” makes no sense, since “world” refers to a single entity. Junk has no upward “cut-off” point to contain it within a world. Bohn objects that there is no need to constrain the term world as singular: maybe it can refer to a set or some other plural entity, which allows for junk.
But is junk really conceivable? Bohn offers thought experiments which put junk on a par with gunk with regard to conceivability (imagine each atom in a world is itself a world: now take this idea both upward and downward). What’s wrong with a “hunky” world extending without foundation in both directions?
Schaffer’s paper is also the target of Matteo Morganti’s just published “Ontological Priority, Fundamentality and Monism” (hat tip; no draft available online that I can see). Morganti sees no compelling argument for a foundational level of reality vs. the alternative of “metaphysical infinitism”. He also takes issue with Ross Cameron’s conclusion that we should postulate a fundamental level for methodological reasons if we are to have any satisfying metaphysical explanations (I discussed the relevant Cameron paper in this post).
A third paper, by Francesco Orilia, makes an analogous argument in a different philosophical argument “thread”: he compares whether facts (or states of affairs), which bind an object with its attribute (or relate multiple objects), are basic ontological entities (as in Armstrong), or whether they give rise to an infinite regress of binding relations (Bradley’s regress). Orilia thinks we should accept “fact infinitism” as a live option. (For a very deep dive on this see Bill Vallicella's thoughts and dialogue with Orilia here and here).
So is there a foundational entity or entities or not? Do you really have an explanation when you invoke an infinite chain? There’s a lot to digest here, and I plan on rereading this set of papers and others. My intuition has always been that there must be a fundamental level of reality, but an argument is needed here, not an intuition. I think we can reject gunky/junky worlds. I will go ahead and sketch my speculative argument below (a previous gunk-inspired post with a good comment thread is here).
First, though, let me note that I’m very happy to continue to find members of a new generation of philosophers who are taking up “meaty” metaphysics as a primary focus. Philosophers like Schaffer and Cameron are doing serious and innovative metaphysics, and are provoking responses from other young philosophers (definition of young = anyone younger than me): great stuff.
My tentative position is this (what follows assumes modal realism). When we employ our (remarkable) ability to conceive of infinities we are in danger of making a particular error. As background, I endorse conceivability as a guide to possibility, and specifically think that our concept of what is logically possible maps to what is metaphysically possible. I think this total space of metaphysical possibilities is infinite and this is what grounds our concept of the infinite. Now, philosophers usually refer to this space of possibilities specifically as the space of possible worlds (I think this is rooted in methodological usefulness). However, our attempt to conceive of what is possible in a particular world may be actually importing content which may not fit in one world. We need to be careful about how we define a world.
Consider the actual world. Following David Lewis in one respect (but not in others), I think the best sense of “actual” to use is as an indexical. If I take the actual world to be the causally connected region centered on my point of view, then, based on a posteriori reasons, the actual world should arguably be considered finite: quantum physics as applied to space-time implies no infinite divisibility, and causal connections only extend so far. If all possible worlds are likewise defined as centered causally connected patches, I hypothesize that all “worlds” should be considered likewise finite.
Let me pause here for a moment. Many (most?) thinkers assume that a world is an isolated, bounded spatio-temporal object of arbitrary size. But I assert there actually isn’t a good reason to think there is some clean definition of a world boundary beyond this concept of a causally connected region or patch. (Please note I’m not saying physicists/cosmologists need to adopt my definition of a world or universe, this is just to help make sense of a philosophical problem.) If one asserts a world contains a concrete infinity, then one has exceeded the boundary for what can fit in a world. This is the error.
Now, let’s go back and consider the question of whether there is a foundational “level” of reality using this model of an infinite space of metaphysical possibilities broken down into finite worlds. When considering a world, its “parts” have a good claim to being basic, given their indivisibility. Since a world can have an irregular and changing boundary as events move into or out of causal contact with the center, the “whole” of the world seems to have less of a claim to be foundational.
On the other hand, let’s consider all of reality, i.e., the total space of metaphysical possibilities (all possible worlds). In this case, there exists an infinite number of ways to parse it, and the parts no longer seem to have as good a grip on being basic. Here it is the total space which is the ultimate source of the reality individual worlds take part in. So in the bigger picture, the monistic whole takes priority, and has the best overall claim to be the foundational entity.
Monday, September 21, 2009
Notes on C.B. Martin’s The Mind in Nature
Philosopher C.B. Martin died last year and left us a great book. The Mind in Nature summarizes his philosophy and its applications to mind, causality and more. For background see Paul Snowdon’s obituary and a brief note on the book by Gualtiero Piccinini; here is a very good draft review by Jessica Wilson. It was not an easy book for me to read but I found it very rewarding. Martin proposes an ontology featuring dispositions (sometimes referred to as powers). Notably for him, dispositions are inherently qualitative, and are also capable of producing myriad manifestations depending on the context.
I have previously read the work of philosophers who feature dispositional properties/powers as a basic element in their ontologies. (Martin has an early section recounting arguments for the irreducibility of dispositions– see the Wilson review for more on this). An attraction of these proposals is their potential to support a theory of real causality and of intentionality – topics mainstream analytic philosophy has trouble with IMO (see my series on George Molnar for more). John Heil adds the twist (along with Martin) that we could consider dispositions to also be qualities, which helps solve an additional aspect of the mind/body problem. Despite these accomplishments I have continued to view disposition-based accounts as falling a bit short when it came to mind and also to modality (see this recent post). Martin’s distinctive account takes another step toward addressing these issues.
Dispositions Offer Possibilities
I think the key to Martin’s ontology is that his dispositions don’t just point to one kind of manifestation, they are prolific. He says that whatever the fundamental elements of nature are, they have multiple internal properties which are dispositions to potentially an infinite number of different manifestations depending on context. Different “reciprocal disposition partners” will give you different manifestations. At any point, what exists is a “…manifestation tip of a disposition iceberg (p.9)”.
One doesn’t need “possible worlds”, since dispositions and their “projectivity” of actual and non-actual manifestations with various partners constitute a web or “power-net (p.29)”. In fact actual readinesses exist for infinite potential (Wilson points out this assertion is not backed up by an argument), so you don’t need any further grounding for possibility. (Upon reading this my own thought was why couldn’t Martin just redefine possible worlds as possible power-nets?). I note Martin himself isn’t vehement in asserting he’s definitely provided a full and adequate grounding for modality, compared to his tone when defending other aspects of his view.
Emergence/Reduction
For Martin, the creative power of dispositions means you don’t need “levels of reality” or the notion of supervenience. He sees all reality as a continuum “going from the many directional readiness of the quark, most of which will never be manifested, to the capacities and dispositions for many representations of some English speaker, most of which also will never be manifested (p.29)”. (Note that in contrast to the mainstream of analytic philosophy, language is not primary: the ontology comes first). The basic ontology is rich enough that emergence occurs through processes at the one level of reality; emergence is not a concept implying or requiring multiple levels.
Causality and Events
The manifestation of a disposition is the basic unit of causality. At one point (Ch.5), Martin stresses that causality in his view is not a temporally separated thing. The reciprocal partnering of dispositions creates a mutual manifestation; the model is of a “cause-effect” rather than cause-then-effect. (I would say, in other words, an event.) Martin doesn’t much address the problem of time and its perceived unidirectional flow. He takes the space-time of Einstein to be a substratum carrying the dispositional properties.
Mind
With regard to mind, Martin stresses that dispositions (which are inherently directional) give you intentionality at a level below what we think of as “mental”. He also gives an account of how a natural system utilizing representation comes into being; he is inspired by results in neurobiology which he takes to show that “vegetative” (i.e. unconscious) systems of the brain effectively utilize representations already. So intentionality and representation are not distinctive hallmarks of human consciousness.
And neither are qualities. Martin first says dispositions cannot be identified with structural properties (such a view leads to an empty “Pythagoreanism” in his view). Even structureless elementary units (quarks or whatever) have multiple internal dispositions. And there is no need for purely qualitative, non-dispositional (categorical) properties. It is dispositions through-and-through and these dispositions are themselves simultaneously qualities (Wilson’s review also discusses criticism of this dual-aspect idea citing Armstrong).
So what does distinguish the mental? Here Martin offers a model which says the difference between mental and non-mental lies in the kind of qualitative material used in a representation. If the material of use is appropriately sensory, we get consciousness. I’m giving this model a bit of short shrift here in my notes (chs. 13–15), but I was a bit disappointed by it. Given Martin’s emphasis on gradualism, i.e. having nature exploit its base level qualities to give rise to all phenomena, I found his account of mind to be ad hoc. Since we’re making some brute assumptions in our ontology anyway, I would prefer to just couple the experiential aspect of reality to the qualitative aspect at the base level as two views of the same thing (possessed by each mutual manifestation). Then the emergence of human consciousness would be likewise gradual and not wholly dependent on functional criteria. (Wilson has a somewhat similar critique in the last part of her review).
Propensities and Quantum Physics
I was happy to see that Martin tries to grapple (briefly) with quantum physics (in his chapter 6). In a section called “Dispositions and Quantum Theory”, Martin discusses quantum theory and the interpretational question of assigning ontological status to the wave function vs. the measurement events. He thinks a system as described by the wave function could be taken as a propensity, which is a somewhat different idea than a disposition, which points to a specific manifestation. There seems to be a metaphysical gap here (corresponding to the problem of the “collapse of the wave function”). He suggests that a “dispositional flutter” or oscillation might cause the appearance of irreducible probability. He says the flutter could be a consequence of practical limits on detection, or an intrinsic random oscillation. Unfortunately, I think we know this kind of interpretation of QM doesn’t hold up.
I thought it would have been consistent with his views if he had identified his dispositions as wave function-type propensities and then mapped his “mutual manifestation” event to the quantum measurement event. There still is an element of apparent mystery or gap surrounding the triggering of events, but I think we have to accept that this is just a brute feature of the world, given QM.
Thursday, September 10, 2009
Stenger vs. Quantum Gods, Part Two
As I mentioned at the outset of the last post, Victor Stenger’s second goal in his book Quantum Gods is to critically examine “quantum theology”. This refers to attempts to rework traditional notions of God’s role as creator and/or intervening agent given modern physics. My review of this part of the book is below.
The Demise of Classical Deism
Trying to accommodate belief in God with a scientific worldview is not a new endeavor of course, and Stenger includes a discussion of “enlightenment deism” in the book. Newtonian physics provided a good foundation for the view that God created and planned the universe, but doesn’t further intervene (the clockwork universe). The theory of natural selection strengthened the case for deism vs. theism by weakening the perceived need for special divine action in the biological world. Quantum mechanics, however, made enlightenment deism untenable by introducing irreducible indeterminism; it appears God could not have ensured his planned outcomes in an essentially chancy world.
What About Emergence?
Stenger has a good discussion of the topics of complexity, chaos and emergent phenomena. Some have argued that emergence introduces “something more” into the makeup of the world beyond the basic physical entities. Some theologians think emergence may leave an opening for God to act in the world. Reviewing a few examples of emergent behavior (e.g. thermodynamics and fluid mechanics), Stenger argues all are cases of what he labels “material reductive” emergence. His main point is this: he says the fact that macro-principles cannot be deduced from microphysics is true and notable, but they nevertheless follow from, and thus are implicit in, the microphysics. The demonstration of this comes with the increasing power of computers to simulate emergent phenomena from the micro-facts: nothing “extra” is needed to do this. There is not a reasonable basis for “top-down” causality between levels of nature based on the phenomenon of emergence. Thus no opening is left here for theology to improvise a story about divine action.
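Stenger’s point about simulation can be made vivid with a toy model (my own illustrative sketch, not an example from the book): an ensemble of particles, each obeying only a micro-rule (“step left or right at random”), collectively exhibits a macro-regularity — the ensemble variance grows linearly with time, a diffusion law — even though nothing about diffusion or time-scaling appears anywhere in the micro-rule.

```python
import random

def random_walk_ensemble(n_particles=2000, n_steps=100, seed=1):
    """Micro-rule: each particle independently steps +1 or -1.
    No macro-level law is written anywhere in this rule."""
    rng = random.Random(seed)
    positions = [0] * n_particles
    variances = []
    for _ in range(n_steps):
        for i in range(n_particles):
            positions[i] += rng.choice((-1, 1))
        mean = sum(positions) / n_particles
        var = sum((x - mean) ** 2 for x in positions) / n_particles
        variances.append(var)
    return variances

variances = random_walk_ensemble()
# The "emergent" macro-regularity: after t steps the variance is
# roughly t (a diffusion law), so these ratios both come out near 1.0.
print(variances[24] / 25, variances[99] / 100)
```

The diffusion law was not deducible by inspecting one particle’s rule in isolation, yet the simulation recovers it from the micro-facts alone, with nothing “extra” added — which is exactly the sense of “material reductive” emergence Stenger has in mind.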
I have found this topic to be difficult in the past, but I think Stenger’s position is well defended. My opinion is that the only really persuasive instance of ontological emergence is the crystallization of actual outcomes from the set of possibilities represented in a quantum state. (But while I don’t see a good case for further levels of ontological emergence or top-down causality once we’re dealing with larger (decohered) macroscopic systems, it is still highly suggestive that so many remarkable phenomena manage to be implicit in the micro-realm.)
Special Divine Action via QM?
In his Chapter 14 (“Where Can God Act”), Stenger reviews attempts to locate divine action in the world of modern physics. He briefly summarizes a number of articles from a Vatican/Center for Theology and the Natural Sciences (CTNS) multi-volume book series which address this and related issues.
Some authors think quantum indeterminism allows scope for subtle interventions, which, when combined with chaotic amplification, might lead to undetectable yet significant effects. Stenger first notes a common observation about such interventions: it seems that this kind of action is “God acting against God,” since the advocates of this idea also view God as the creator of the universe and its physical laws. Also, given the attribute of omnipotence, why should God restrict himself in this way? And it’s not clear this would work anyway. Stenger points out the time lag involved with chaotic amplification: given continued micro-chanciness, how can God be sure he’ll get the macroscopic effect he wants? If he acts to guarantee the outcome, then we’re back to wondering why he doesn’t just use his power to orchestrate all micro-processes to begin with. Then, of course, we wouldn’t have indeterminism after all. (And this causes the additional problem of undermining the case for indeterminism-based free will, which is also advocated by some theologians).
I’m obviously oversimplifying the discussion in this summary, but I’ve read a number of these accounts over the years (many from my subscription to CTNS’ Theology and Science) and I agree with Stenger that attempts to implement special divine action with modern scientific tools seem fraught with difficulties.
Quantum Deism
In Chapter 15 (“The God Who Plays Dice”), Stenger notes that the problems with models of divine action do not rule out the possibility of a deist creator God. Perhaps God endowed the universe with indeterministic law as part of his creative plan. Of course, then, there was no guarantee humans would arise – so we couldn’t have been a part of a special plan, could we?
Stephen Jay Gould used to argue that evolution lacks any necessary directedness toward complexity or sophistication: if we “replayed the tape”, we might get a completely different outcome with no guarantees of intelligent life. Stenger notes that Simon Conway Morris is a recent advocate of a contrary view, however. Conway Morris argues, based on observed cases of convergent evolution, that humankind (or something very close) was inevitable. Is he right?
Stenger quotes Elliott Sober’s critique of Conway Morris, which centers on the fact that one can’t show the probability of evolutionary events: we only know the numerator, not the denominator (we unfortunately cannot run additional trials). Stenger adds that since out of the millions of species on earth, most are microbial, “intelligence would not seem to be very high at all on the universe’s agenda.”
Still, we could have a deist God if we accept that God created a cosmos with lots of chancy potential, and was willing to let the chips fall where they may. Such a God, of course, isn’t very attractive to those who yearn for a more traditional deity.
Is there an adequate basis for believing in the existence of (at least) this new kind of deistic God? Stenger doesn’t think so. He spends his last chapter (“Nothingism”) exploring his favorite ideas for a naturalistic account of the universe’s origin. I’m going to skip over these speculative ideas in this review. I think in the coming years work in cosmology and quantum gravity research will be offering new scenarios for how the observable universe arose from a pre-existing context (see for instance here). The point for Stenger is that if a naturalistic account of the universe’s origin is available, then we don’t need a deist God either. And he would answer the question: “who created the laws” by responding that laws are human inventions to describe regularities we observe. This is also a very defensible position.
What about the Multiverse?
Stenger doesn’t devote significant space to the idea of the multiverse apart from his brief section on the Many-Worlds Interpretation. But in addition to interpretations of QM, an increasing number of physical and cosmological theories motivate the possibility of a multiverse. There are also independent philosophical reasons for postulating that our universe is a subset of a larger reality.
I agree with most of Stenger’s criticisms of the various conceptions of God. However, the multiverse is the one conceptual place where I see the potential for a naturalistic worldview to make contact with a notion of God (albeit one which is non-traditional and impersonal): a transcendent and creative entity of which we are but a small part.
Tuesday, September 08, 2009
Stenger vs. Quantum Gods, Part One
I read Quantum Gods: Creation, Chaos, and the Search for Cosmic Consciousness, by Victor Stenger. Stenger is a physicist who is in the popular-book writing business with a focus on the science vs. religion debate. I found the book to be thought-provoking and worth reading, although I have a couple of substantial criticisms.
The motivation behind the book is a good one. Stenger’s most recent prior book (he has written quite a few) was God: The Failed Hypothesis -- an entry in the recent mini-boom of books by materialist-atheists criticizing traditional Judeo-Christian-Islamic religion. (I didn’t read this book, having already read Dawkins, Dennett and Harris in the genre). Quantum Gods is intended as a sequel which moves beyond traditional religion to criticize some newer and less traditional ideas about God and spirituality.
Stenger spends the first chapter looking at surveys on belief (including this Pew study), and notes they at least suggest that a substantial number of self-identified Christians have non-traditional ideas about God (in particular indicating beliefs more reminiscent of deism than theism). Also, he suspects that a significant portion of “unaffiliated” respondents have replaced traditional religion with various spiritual and/or paranormal beliefs (someone is buying lots of books by new age-type authors with these themes.)
With this as backdrop, Stenger explains that one strand which ties together some newer theistic/deistic ideas as well as the new age spiritual ones is their efforts to incorporate or accommodate modern physics (hence “quantum gods”). He identifies, then, two targets: the first, which he labels “quantum spirituality”, is the group of new age-type ideas which invoke quantum physics to support ideas about personal spiritual powers and/or cosmic consciousness; the second target, called “quantum theology”, is a set of attempts to accommodate God’s putative role as creator or intervening agent with modern science.
Before proceeding with more detail I need to mention one annoyance I had with the book: it felt somewhat padded to get to book-length. At 263 pages, about half (roughly the middle half) consists of encyclopedia entry-style sections on topics in the history of science. The content is unobjectionable, but this wouldn’t be the book you’d choose to read to learn about classical physics, relativity, or the history of quantum mechanics (QM) and the standard model of particle physics.
I thought the best parts of the book were the late chapters criticizing “quantum theology” – i.e. some modern theological ideas about accommodating religion and science. I just wish he had included a lengthier survey of these. A common objection to the work of the “new atheists” is that they haven’t grappled with more sophisticated modern theology but only criticize traditional beliefs (which are, of course, those held by the large majority of laypeople). Stenger makes a good first effort to fill this gap, I thought. I will say a bit more about this section in a follow-up post. Below I will address Stenger’s criticism of new-agers and “quantum spirituality”, which I didn’t think was as effective.
Mind over Matter?
Let me quickly say that one of the reasons the chapters criticizing “quantum spirituality” aren’t as satisfying is not Stenger’s fault. Some of the ideas in books like those of Deepak Chopra and Gary Zukav, and movies like “What the Bleep Do We Know” just aren’t serious. Inspired by an apparent link between observation and the outcome of quantum measurements, a major claim of these folks is that quantum physics shows that human beings “create their own reality”: thus one can heal illnesses and become wealthy (or perhaps do some yogic flying) through the power of one’s mind. I personally think the details of QM could be implicated in the eventual scientific explanations of life, mind and freedom. However, to the extent that extrapolations from QM are used to portray humans as capable of paranormal powers, we cross into crackpot territory, and there’s not a lot more one can say (Stenger provides a section in Ch. 12 reviewing paranormal claims).
My main problem with Stenger’s discussion, though, is that his own opinion about the interpretation of quantum mechanics is highly idiosyncratic: he tries to hew as close as humanly possible to the worldview of classical materialism. As keen as he is to refute the quackery, he is just as critical of the idea that the interpretation of QM might have any relevance to explaining the relation of mind to nature. He is also dismissive of the common notion that the ontology of QM or quantum field theory is suggestive of a more holistic cosmos than classical materialism. Here he is on less firm ground, in my opinion.
Saying No to Holism
With regard to holism, Stenger devotes a couple of sections to Fritjof Capra, one of the progenitors of the quantum new-age publishing genre with his book, The Tao of Physics (1975). In the book Capra stressed connections between QM and eastern philosophies, both of which, he said, emphasize interconnectedness and process. Stenger points out that Capra was inspired by his work on S-matrix/bootstrap theory, a research program proposed as an alternative to quantum field theory; since that program didn’t pan out, Stenger argues, Capra's case is undercut. Stenger also has a brief discussion of quantum luminary David Bohm, who late in his career also espoused a holistic philosophy inspired by his work on the interpretation of QM (for instance in his book The Undivided Universe, written with Basil Hiley). Stenger notes that Bohm’s “hidden variable” view is not widely embraced, and implies this undermines Bohm's philosophy.
It seems to me that most interpretations of QM and QFT, not just Capra's and Bohm's, can reasonably be taken to imply a more non-locally connected and hence holistic view of reality compared to the classical picture (what, if any, implications this has for the world beyond particle physics is a separate discussion). Prior to measurement, "particles" are spread out in space-time and systems demonstrate entanglement. The wave aspect of reality has a holistic element. However, it turns out Stenger doesn’t want to concede even something this modest.
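The holism at issue here can be made concrete with a textbook toy case (my illustration, not Stenger's): for a two-particle Bell state, the joint state is completely specified, yet neither particle considered on its own carries any definite information. The whole really is more than the sum of its parts.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): the joint two-particle
# state is pure (fully specified), yet each particle taken alone is
# maximally uncertain -- a minimal example of quantum holism.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)   # amplitudes for |00> and |11>

rho = np.outer(phi, phi.conj())    # joint density matrix (a pure state)

# Trace out particle B to get particle A's state on its own:
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(np.round(rho_A, 3))  # 0.5 * identity: the part alone tells you nothing
```

The reduced state `rho_A` comes out as the maximally mixed state, even though the joint state is as fully determined as quantum mechanics allows.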
Forget About Waves!
We learn more about Stenger’s own views in Chapters 12 and 13, where he discusses the interpretation of QM. He gives some brief sketches of Copenhagen, hidden-variables, and many-worlds interpretations. His own view, surprisingly, begins with a claim that there is no wave-particle duality; there are only particles. He has a section entitled “The Fictional Wave Function”. He says the description of the quantum state is an abstract mathematical entity. He spends a paragraph downplaying the Schrödinger representation of QM, preferring Heisenberg’s and Dirac’s formulations, which lack the wave function (he does remind the reader that they all give equivalent results).
So, if only particles exist, and no ontological status is given to the quantum state, how do we interpret experimental results? Discussing a single-electron double-slit experiment, Stenger’s idea, inspired by Feynman’s diagrams, is that the single particle, traveling backward and forward in time, can produce the interference pattern. I ask: how can it do this with no guidance from a wave? (Also interesting to me is that Stenger throughout the book is quick to dismiss anything that might imply violation of special relativity, but he’s OK with time-reversal). That’s the end of the discussion, so I can only infer that Stenger thinks QM can be successfully interpreted as a particle-only ontology with time-reversal. I’d never heard this view before and it isn’t defended at any length (it may be given more discussion in a previous book).
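For comparison, here is the standard wave-based account of the double-slit pattern that Stenger wants to do without: add the complex amplitudes for the two paths and square the magnitude. (A toy numerical sketch with arbitrary illustrative parameters, not a model of any particular experiment.)

```python
import numpy as np

# Standard wave account of the two-slit pattern: the amplitude at each
# screen position is the sum of contributions from the two slits, and
# the detection probability is the squared magnitude of that sum.
# (All parameters are arbitrary illustrative choices.)
wavelength = 1.0
d = 5.0           # slit separation
L = 100.0         # distance from slits to screen
x = np.linspace(-30, 30, 601)        # screen positions

r1 = np.sqrt(L**2 + (x - d / 2)**2)  # path length from slit 1
r2 = np.sqrt(L**2 + (x + d / 2)**2)  # path length from slit 2
amp = np.exp(2j * np.pi * r1 / wavelength) + np.exp(2j * np.pi * r2 / wavelength)
intensity = np.abs(amp)**2           # alternating bright/dark fringes

print(round(intensity.max(), 2), round(intensity.min(), 3))
```

The pattern swings between roughly four times a single slit's intensity and nearly zero; it is exactly this wave-interference structure that a particle-only ontology has to recover by other means.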
Can QM Explain Consciousness?
What about mind? Stenger does mention in passing that von Neumann and Wigner suspected mind was involved in the phenomenon of quantum measurement. Beyond that he mentions the Penrose/Hameroff proposal for microtubule-based quantum coherence in the brain. He dismisses this with the standard reference to Max Tegmark’s 1999 paper calculating decoherence scales. I personally think Hameroff’s speculations are unlikely to be true, but recently it has been established that quantum coherence can be maintained and exploited by biological systems (see also here). While it still seems unlikely that anything as large as neuronal assemblies can maintain coherence, the science of tracing the role of quantum effects throughout biology is only getting started. I think it is quite plausible that distinctive features of life and mind may yet have at least a partial (non-trivial) quantum-mechanically-based explanation.
So, my main question for Stenger is this: why throw out the baby with the bathwater? We don’t live in a classical world, and we should continue to probe the implications of living in a quantum world wherever it takes us. Let’s not let the fact that New-age folks are making unsupported claims close our minds to the possibilities.
Tuesday, August 11, 2009
An Improved Supervenience Base?
Brian Weatherson (home page, blog) has authored an SEP article on David Lewis. In it he ably takes on the difficult task of summarizing in a few dozen pages the work of one of the most prominent and productive philosophers of the latter part of the twentieth century.
One section addressed something I had idly wondered about Lewis. As Weatherson discusses in his section 5, entitled “Humean Supervenience”, much of Lewis's work involved reduction: whether the subject was the mind, language, laws of nature, or causation (modality was a rather spectacular exception), he presented arguments that all the truths about a world ultimately supervene on a set of (perfectly natural) properties and relations in that world. He then argued that these, in turn, are intrinsic properties of point-sized objects and spatiotemporal relations. (The “Humean” aspect of this is that he follows the spirit of Hume’s “dictum” that there are no necessary relations between distinct entities.)
What I had wondered was: did Lewis ever acknowledge that his picture of locally distributed point-sized objects in a space-time container was a classical view, inconsistent with modern physics?
Those who challenged this aspect of Lewis’ work over the years argued that his supervenience base is incapable of supporting some of the truths about the world, and so his attempts at reduction fail. However, most critics haven’t quibbled with the assumptions about physics; rather, they argue that some additional higher-level properties and/or relations need to be added to explain the world. Even many materialist philosophers think additional brute metaphysical structure beyond the basic physical entities is required (this debate was the subject of this old post); some philosophers think necessary connections are needed to explain causation; dualists will see a need for fundamental mental properties and perhaps psycho-physical laws.
But, Weatherson notes that a second way to argue against Lewis’ project is indeed to point out that modern physics is inconsistent with Lewis’ notions. At a minimum, he says, quantum physics seems to need non-spatiotemporal relations to explain Bell’s theorem.
Weatherson explains that Lewis was indeed aware that his picture was inconsistent with quantum physics, but still thought it was extremely valuable to defend the thesis regardless of that fact. Weatherson explains the idea as follows: to the extent his picture of physics is wrong, it is because physics has more content in it (e.g. entanglement relations) rather than less. So, if Lewis can successfully defend reduction of various folk theories on his terms, it will remain a valuable accomplishment.
That makes sense, but I still wonder if explicitly reckoning with quantum physics and its interpretation couldn’t helpfully reframe these philosophical debates. There is more to the shift from classical physics to quantum physics than just adding some non-local connections. There is the quantum measurement event itself, which I see as an ineliminable new addition to the basic constituents of nature. I think that mind and causation might be the targets of successful reduction given an improved supervenience base which featured a network of measurement events in its basic ontology. (Also, one neat thing about taking this approach is that we could hold on to Hume’s dictum. He may not have known about QM, but I think the indeterminism involved in measurement events is consistent with his insight.)
Monday, July 20, 2009
Kriegel on Animal Rights, Part Two
(Note: part one of this post is here)
In section 5 of his paper, Uriah Kriegel outlines a proposal for a different, non-consequentialist, ethical framework for animal rights. He starts by saying he admires the part of Kantian (deontological) moral philosophy known as the “humanity formula”: one should treat another’s humanity (or one’s own) as an end in itself, not as a means to an end. To extend this approach, he suggests substituting “conscious creatures” for “humanity”.
With some alterations (including what he calls a “virtue-ethical” twist), the formula becomes:
“One should have the stable, dominating disposition to treat conscious creatures as ends in themselves and not merely as means to other ends.”
The “stable, dominating disposition” phraseology allows us to avoid absolutism in the formula. Including “merely” allows for cases where an animal might be treated in some fashion as a means as well as an end in itself. Kriegel notes that much more could be said about all of the nuances here, but he moves on to give examples of how this kind of formula would yield different results compared to a consequentialist formula. Importantly, they differ on whether it is right to kill or exploit an animal if it could be done painlessly. By assigning conscious animals intrinsic moral worth, one expands the animal rights case (for those animals considered conscious).
I liked this proposal very much, since I have long been attracted to the idea of assigning intrinsic moral status on the basis of consciousness (although I came to the idea from a different direction – see an old post on this here and other posts with the Morals tag). And while I certainly don’t read a lot of philosophical papers on this topic, I hadn’t seen this proposed before in the literature.
In the last section of the paper, Kriegel returns to the risk that we err in our empirical assessment of the presence of consciousness in various animals. Here he proposes that, given the uncertainty involved, we should assign probabilities to the presence of consciousness. Further we should use a function which maps this probability to an even more generous probability that a given animal should be afforded rights. While all of the numbers we might assign are going to be imprecise, by being cautious about what we know and building in a cushion in our formulas, we can reduce mistakes while pressing the circle of moral generosity outward.
My own view is that in our judgments we should explicitly consider that consciousness comes in degrees, rather than being an all-or-nothing phenomenon. But working with Kriegel’s probabilistic formulas, one could easily get to the same answers in most cases.
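Kriegel's two-step scheme -- estimate a probability that a creature is conscious, then map that through a more generous function before assigning rights -- can be sketched in a few lines. He specifies no particular function; the square-root map below is my own hypothetical stand-in, chosen only because it always meets or exceeds the input probability and so builds in the cushion he describes.

```python
# A minimal sketch of Kriegel's cautionary two-step scheme. The
# square-root map is a hypothetical choice of "generous" function:
# sqrt(p) >= p for all p in [0, 1], so it always errs on the side of
# extending moral consideration.

def rights_weight(p_conscious: float) -> float:
    """Map an estimated probability of consciousness to a more
    generous weight for according moral rights."""
    if not 0.0 <= p_conscious <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    return p_conscious ** 0.5

# e.g. a 25% credence in consciousness yields a 50% rights weight
print(rights_weight(0.25))
```

Any concave function fixing 0 and 1 would serve the same purpose; the point is structural, that uncertainty about consciousness should widen, not narrow, the circle of moral generosity.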
Wednesday, July 15, 2009
Kriegel’s “Animal Rights and Conscious Experience”
(Note: this is half of what will be a two-part post. UPDATE 20 July 2009: the second part is here)
Uriah Kriegel, assistant professor of philosophy at the University of Arizona, has posted this draft paper which seeks to advance the discussion on animal rights in light of progress in the study of consciousness. The most important part of the paper, in my view, is his formulation of how to explicitly make the (likely) presence of conscious experience the key component of a framework for assigning moral status to animals.
Before getting to his own proposal, Kriegel considers how the most influential consequentialist approach to animal rights (prominently associated with Peter Singer) suffers from a lack of an up-to-date analysis of the science of consciousness. This consequentialist framework, which emphasizes maximizing pleasure and minimizing pain, typically doesn’t incorporate a sophisticated understanding of exactly which animals might have conscious experience of pleasure and pain, as opposed to an unconscious functional analogue of these feelings. The former are presumably the intended recipient of moral consideration, not the latter.
Kriegel uses his own work on (human) consciousness to show how the view of animal rights would be influenced by a more detailed account of consciousness. Kriegel starts with his own proposal for locating the neural correlate of consciousness (NCC) in humans. This proposal, which he calls the cross-order integration hypothesis (COI), is detailed in a paper published in the journal Consciousness and Cognition. Then he argues that a comparison of brain structures between animals and humans should inform a view of whether animals share the NCC, and hence are conscious.
Without getting into all the details here, Kriegel’s COI theory implicates activity in cortical structures, presumed to be involved in higher order monitoring work, as a necessary part of the NCC. Because our mammal cousins share these structures while non-mammals lack them, this would provide evidence that the former are conscious, but the latter are not. In considering the consequentialist approach to animal rights sketched above, this analysis would serve to help define the circle of moral consideration: in this case, mammals are on the inside, other animals are on the outside (“looking in”).
While Kriegel has some confidence in his approach, he clearly notes in the paper that his moral stance will be tempered by the uncertainty around the empirical conclusions underlying such a moral calculus (his preference is to couch any moral formulas incorporating the empirical claims in terms of probability). And I should stress that the general point on methodology has value even if one’s theory of the NCC differs: just plug in your own preferred NCC and try to judge how far it extends into the animal kingdom. For myself, while I like Kriegel’s “top-down” approach to the question of locating the NCC (i.e. beginning with a theory before you look at the experimental data – see an old post on Kriegel’s philosophy of mind here), I don’t have a high level of confidence in his (or any other) specific NCC proposal yet. The field is still in its early stages.
Still, even if one gains confidence in identifying the NCC in humans, there is a very big step to cross when considering the consciousness of animals, which Kriegel may underestimate. Because brains and bodies have demonstrated a robust ability to adaptively implement analogous functions using distinct structures, I would be very cautious in assuming that the work performed by our late (evolutionary) vintage neural structures couldn’t be implemented differently in animals. Even cephalopods, whose nervous systems have little in common with ours, show behaviors very suggestive of consciousness. Reinforcing this concern is my own philosophical bias to see consciousness as something which comes in degrees. An animal without a cortex could have an attenuated version of consciousness, as opposed to a lack of consciousness.
After the lengthy discussion of the how his kind of empirical analysis can help in evaluating the consequentialist ethical approach, Kriegel shifts gears and discusses a different non-consequentialist framework for the problem.
(To be continued.)
Uriah Kriegel, assistant professor of philosophy at the University of Arizona, has posted this draft paper which seeks to advance the discussion on animal rights in light of progress in the study of consciousness. The most important part of the paper, in my view, is his formulation of how to explicitly make the (likely) presence of conscious experience the key component of a framework for assigning moral status to animals.
Before getting to his own proposal, Kriegel considers how the most influential consequentialist approach to animal rights (prominently associated with Peter Singer) suffers from a lack of an up-to-date analysis of the science of consciousness. This consequentialist framework, which emphasizes maximizing pleasure and minimizing pain, typically doesn’t incorporate a sophisticated understanding of exactly which animals might have conscious experience of pleasure and pain, as opposed to an unconscious functional analogue of these feelings. The former are presumably the intended recipient of moral consideration, not the latter.
Kriegel uses his own work on (human) consciousness to show how the view of animal rights would be influenced by a more detailed account of consciousness. Kriegel starts with his own proposal for locating the neural correlate of consciousness (NCC) in humans. This proposal, which he calls the cross-order integration hypothesis (COI), is detailed in a paper published in the journal Consciousness and Cognition. Then he argues that a comparison of brain structures between animals and humans should inform a view of whether animals share the NCC, and hence are conscious.
Without getting into all the details here, Kriegel’s COI theory implicates activity in cortical structures, presumed to be involved in higher-order monitoring, as a necessary part of the NCC. Because our mammal cousins share these structures while non-mammals lack them, this would provide evidence that the former are conscious but the latter are not. Applied to the consequentialist approach to animal rights sketched above, this analysis would help define the circle of moral consideration: in this case, mammals are on the inside, other animals are on the outside (“looking in”).
While Kriegel has some confidence in his approach, he clearly notes in the paper that his moral stance will be tempered by the uncertainty around the empirical conclusions underlying such a moral calculus (his preference is to couch any moral formulas incorporating the empirical claims in terms of probability). And I should stress that the general point on methodology has value even if one’s theory of the NCC differs: just plug in your own preferred NCC and try to judge how far it extends into the animal kingdom. For myself, while I like Kriegel’s “top-down” approach to the question of locating the NCC (i.e. beginning with a theory before you look at the experimental data – see an old post on Kriegel’s philosophy of mind here), I don’t have a high level of confidence in his (or any other) specific NCC proposal yet. The field is still in its early stages.
Still, even if one gains confidence in identifying the NCC in humans, there is a very big step to take when considering the consciousness of animals, one Kriegel may underestimate. Because brains and bodies have demonstrated a robust ability to adaptively implement analogous functions using distinct structures, I would be very cautious in assuming that the work performed by our neural structures of late evolutionary vintage couldn’t be implemented differently in animals. Even cephalopods, whose nervous systems have little in common with ours, show behaviors very suggestive of consciousness. Reinforcing this concern is my own philosophical bias toward seeing consciousness as something that comes in degrees: an animal without a cortex could have an attenuated version of consciousness rather than no consciousness at all.
After the lengthy discussion of how this kind of empirical analysis can help in evaluating the consequentialist ethical approach, Kriegel shifts gears and discusses a different, non-consequentialist framework for the problem.
(To be continued.)
Tuesday, June 30, 2009
Contemplating the Deeply Strange
I watched this bloggingheads.tv dialogue between Robert Wright, whose latest book (The Evolution of God) I reviewed in the prior post, and Tyler Cowen, economics professor and prolific blogger at Marginal Revolution. They discussed Wright’s book but also expanded somewhat on each of their own views. One thing both men have in common is that they are non-believers, but take the challenge of responding to religious impulses seriously. I wanted to highlight here a good point made by Cowen. (The relevant parts of the dialogue span a couple of minutes beginning at 38 minutes in, then a few more starting at the 47 minute mark.)
Paraphrasing, Cowen says most non-believers should think more about religion and should specifically take the design/fine-tuning argument seriously: contemplation of this often leads to the concept of a complicated multiverse. He says we need to consider that our common-sense view of the world is wrong, and that there is room for a “deep strangeness” in reality. He mentions quantum mechanics and says one has to come to terms with a reality that seems absurd. He says his alternative to believing in God turns out to be believing something strange as well.
I would add that the multiverse concept, in particular, is not only strange but, if embraced, commits one to acknowledging something transcendent (a reality far beyond our observable universe). Investigations of our world have led to several ways to motivate the multiverse: in addition to the fine-tuning argument and the interpretation of QM, there are extensions of specific cosmological theories (eternal inflation, the string theory “landscape”), and there is the modal realism of philosophers. If a non-believer is motivated to explore deeper explanations of reality, he or she will almost surely end up somewhere well beyond a common-sense starting point.
Once you rule out the supernatural entities and interventions of traditional theism, there’s still a lot of hard work to do to explain what’s given to us: and the journey may lead to places one didn’t intend to go.
Tuesday, June 23, 2009
The Evolution of God, by Robert Wright
Robert Wright is an intellectually curious journalist and a fine writer whose previous books I enjoyed (these days he is also editor-in-chief of bloggingheads.tv). In 1994’s The Moral Animal, he summarized and popularized ideas from the burgeoning field of evolutionary psychology. In 2001’s Nonzero: the Logic of Human Destiny, he examined how game theory helps make sense of the development of human nature and human societies. In addition to presenting a digest of interesting research in these books, he laid out his own prism for viewing these topics: while our values can be explained entirely as natural phenomena, there is evidence of historical progress in the moral dimension of human affairs which seems to possibly point to something more transcendent. Wright’s new book, The Evolution of God, fits right into this theme: he explores the character of religion through history, and, by marshalling summaries of scholarly work, shows how religious ideas developed in response to changing social and political circumstances. The explanations make no appeal to the supernatural. But, as in his other books, Wright sees progress (however haphazard and intermittent) in the moral dimension of religion through time, which leads him to speculate that this phenomenon actually points to the existence of something worthy of being named divine.
The bulk of the book is an interesting run through research findings from anthropology, archaeology and textual analysis on the topic of historical religious ideas and practices. The tour begins with a look at hunter-gatherer style animism and the role of gods and religion in tribal cultures. Religious ideas in these milieus are seen to fit into and lubricate social organization. Then Wright examines the development of the various pantheons of gods in civilizations beginning with Mesopotamia and Egypt: the theme is how religion changed hand in hand with conquests, trade, and internal politics. In all cases there seems to be a plausible explanation of religious ideas (and the lineup of gods acknowledged) changing in concert with the “facts on the ground”.
This sets things up for the larger sections on the Abrahamic traditions. Wright, drawing on scholarly research, looks at the development of the religion of Israel from polytheistic roots, to monolatry (veneration of only one god), to the beginnings of true monotheism. The relationship of the tribe/nation of Israel to its neighbors through episodes of conquest, defeat, vassalship, and exile is seen as the prime driver of these developments. Christianity gets its turn in the barrel, as Wright sets out to show that its incorporation of the ideals of love and ethnic fellowship (and their backward projection onto the figure of Jesus) are best seen as an outgrowth of the mission to the gentiles in the context of a multi-cultural Roman empire. The expression of ideas in the Koran (including the fluctuations from tolerance to intolerance regarding infidels) is likewise tracked to the stages in the earthly career of Muhammad and his followers. (Eastern religions are left mostly undiscussed, but it is a long book as it is*).
The general theme is reinforced throughout: changes in religious ideas track earthly events. As nations make war, their gods intone contempt for non-believers. As empires digest conquests, they co-opt the gods of their new subjects. More positively, as societies enter into non-zero sum relationships with a wider circle of neighbors, their gods become more universal and more supportive of a broader moral vision.
I enjoyed these parts of the book and raced through 400 pages quickly. I found most of the conclusions very plausible. I predict that one group of people who will have some problems with this material are the actual scholars who work in these fields: to them Wright’s account will surely seem superficial and far too quick to seize on conclusions which are the subject of significant debate and controversy. I think The Moral Animal could be criticized for confidently presenting plausible-sounding conclusions from evolutionary psychology that look shakier today. In any case, however, Wright has his eyes on the big picture, and the thrust of the ideas here was persuasive to me even if some of the details are in error.
In the last fifty pages of the book Wright presents his own thoughts on what it all means (some of these have of course been broached in previous chapters). Here the terrain became more difficult and I found some of his discussion to be repetitive. But watching him grapple with the ideas was nonetheless thought-provoking. First off (repeating the theme from Nonzero), Wright argues that with the passage of time, humans have expanded their circle of moral consideration, and that this constitutes an arrow of moral progress through history. I think he is on firm ground here: as we include more and more people (first extended family, then tribe, then nation, then international trading partners) in our field of empathy, we can see this properly as progress. However, I think it’s difficult to tie this to the main topic of the book: can we point to the evolution of our ideas regarding gods or God (more loving, less vengeful), and say that this adds anything to the story of moral progress? He does show that religion mirrors the state of play in the moral calculations of nations and peoples. But his analysis doesn’t provide evidence that religion drives moral progress – it seems to mainly reflect it.
In the final section (the Afterword), Wright proposes that the existence of an historical arrow of moral progress might be evidence for an objective moral order which transcends nature. He argues that even if the traditional idea of a personal God seems highly implausible given naturalism, it might nonetheless point (however imperfectly) towards truth. Maybe there is some kind of Logos of divine origin present in the temporal unfolding of human events. He draws an analogy between a traditional religion’s imperfect conception of God and a physicist’s imperfect conception of an electron (it’s not like a particle or a wave or anything else we can picture, yet we know it’s there from its manifestations).
Wright doesn’t commit to a conclusion that God exists, but he clearly wants to embrace the idea that our moral progress points to something transcendent. His arguments for this position aren’t strong, however, consisting as they do of analogies and a repeated appeal that something special must be going on to account for the moral axis of human affairs. I don’t think many traditional materialist-atheists will be convinced.
This is unfortunate because I think his intuition is sound. I think that any naturalist worldview needs to be expansive enough to account for first person experience and the meaning and values which arise from our engagement with the world. A purely third-person materialist description is incomplete. What’s missing from Wright’s argument (and it would take another book, of course) is a convincing metaphysical story of how this all fits together. Specifically, we need to account for how these first-person truths are rooted in the fundamental (pre-biological) fabric of nature -- a main goal of this blog. If we can do this, then we’ll build a foundation for Wright’s further story of how biological natural selection and cultural dynamics have shaped humanity.
Still, I admire Wright’s contribution in these books. And in particular I find his vision of moral progress to be inspiring. As he says, the forces of interconnectedness and globalization in today’s world offer the possibility that we can expand our circle of moral concern to finally cover the planet. As I’ve drafted this review over the past week, millions of people from all corners of the world have been morally identifying with protestors in Iran via the internet. A vision of progress toward a world of peace is right there before us.
----------------------------------------------------------------------------------
* I should also mention that there is a good appendix which examines the roots of our religious impulses in evolutionary prehistory. He doubts we’ll find a “god gene” but discusses how different aspects of human and primate nature set the stage for religious impulses.
Tuesday, June 09, 2009
Physics Links and Notes
Here are three interesting things I read recently.
1. Lee Smolin has an article titled “The unique universe” in physicsworld (hat tip: Not Even Wrong). It covers some of the same ground as the video I had earlier posted here. In it he argues against some ideas which have been recently popular among physicists when considering the shape of the next fundamental theory of cosmology. First, many now argue that our universe is just one of a vast or infinite number of others: the multiverse. Also, it is argued that the fundamental theory will be timeless, since these physicists see our experience of the flow of time as an emergent, local phenomenon. This leaves us with a vision of a timeless and static multiverse.
Smolin says advocates of this vision are led by mistaken reasoning. One problem arises when physicists take the essentially Newtonian schema we use to evaluate systems within the universe (deterministic laws + initial conditions) and try to apply it to the entire cosmos. This leads them to try to describe a process for selecting our universe from a landscape of many universes (anthropically or otherwise). Smolin argues that we would do better to explore theories which take time to be fundamental, and where laws can vary in a process of cosmic evolution.
2. Many people are optimistic that an information-theoretic perspective will lead to new insights in exploring the foundations of quantum mechanics, and this multi-authored paper, called “A new physical principle: Information Causality”, is an interesting effort in this regard. Information causality is, according to the authors, a principle which helps pick out QM from a space of possible theories which, like QM, feature entangled correlations but allow no faster-than-light signaling. While the principle seems simple when stated (“communication of m classical bits causes information gain of at most m bits”), the fact that other (hypothetical) theories featuring strong correlations fail to satisfy it is notable. Hat tip goes to this post by the Quantum Pontiff, which has some helpful discussion.
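To make the “space of possible theories” concrete, here is a small sketch (my own illustration, not from the paper) comparing the CHSH correlation value for three kinds of two-party boxes: a local classical strategy (bounded by 2), quantum mechanics at Tsirelson’s bound (2√2), and the super-strong Popescu-Rohrlich box (4), which respects no-signaling but violates information causality.

```python
import math

# CHSH combination S = E(0,0) + E(0,1) + E(1,0) - E(1,1), where E(x, y)
# is the correlation of Alice's and Bob's +/-1 outputs for settings x, y.
def chsh(E):
    return E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)

# Local deterministic box: both parties always output +1, so E = +1
# everywhere. No local strategy can exceed S = 2.
classical = lambda x, y: 1

# Quantum box at Tsirelson's bound: optimal measurements on an entangled
# pair give |E| = cos(pi/4) for each setting pair, with the sign flipped
# at (1, 1), yielding S = 2*sqrt(2).
quantum = lambda x, y: (1 if (x, y) != (1, 1) else -1) * math.cos(math.pi / 4)

# Popescu-Rohrlich box: outputs satisfy a XOR b = x AND y exactly, so
# E = +1 except E(1,1) = -1, giving the algebraic maximum S = 4.
pr_box = lambda x, y: 1 if (x, y) != (1, 1) else -1

print(chsh(classical))  # 2
print(chsh(quantum))    # 2.828... (= 2*sqrt(2))
print(chsh(pr_box))     # 4
```

The interesting point the authors make is that no-signaling alone permits everything up to S = 4; information causality is a candidate principle that cuts the space down to the quantum value.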
3. I had come across the essay “Free will, undecidability, and the problem of time in quantum gravity” by Rodolfo Gambini, which was submitted to the FQXi contest (see here), but I didn’t immediately catch on to his arguments. Now, having reviewed two papers on arXiv by Gambini and colleagues (here and here), I have a better idea what his program is. The key starting point is this: the mathematics of quantum mechanics treats time as an external, infinitely divisible classical variable; Gambini et al. think that fundamental limitations on the practical measurement of time within the physical world have implications for how we should interpret the problem of quantum measurement. For instance, if we look at decoherence theory, we see that a quantum superposition involving a system, a measuring device and the environment can evolve such that the degrees of freedom responsible for interference are dispersed. But decoherence itself says nothing about a measurement taking place -- the system is still evolving unitarily. Gambini et al. argue that a point comes where no possible mechanism is available to tell whether a measurement outcome (or event) has taken place. They think this undecidability threshold can be seen as the marker for when an event has occurred. (Then, in the essay, Gambini waxes philosophical and speculates that this undecidability between evolution and collapse might create space for free will.) A thread about this at physicsforums is here.
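The decoherence point can be seen in a toy model (my own illustration, not a calculation from Gambini’s papers; the exponential dephasing factor is an illustrative assumption): environmental dephasing suppresses a qubit’s off-diagonal density-matrix elements, which carry interference, while the outcome probabilities on the diagonal never change. No outcome is ever selected.

```python
import numpy as np

# A qubit prepared in the superposition (|0> + |1>)/sqrt(2):
rho = 0.5 * np.array([[1, 1],
                      [1, 1]], dtype=float)

# Dephasing by the environment damps the off-diagonal ("coherence")
# terms by exp(-gamma * t) while leaving the populations untouched.
gamma = 1.0
for t in [0.0, 1.0, 5.0]:
    decay = np.exp(-gamma * t)
    rho_t = rho.copy()
    rho_t[0, 1] *= decay
    rho_t[1, 0] *= decay
    print(f"t={t}: populations={np.diag(rho_t)}, coherence={rho_t[0, 1]:.4f}")

# The coherence shrinks toward zero, but the diagonal probabilities stay
# at 0.5/0.5 throughout: the state never "chooses" an outcome. This is
# the sense in which decoherence alone leaves the measurement problem
# open, which is where the undecidability argument enters.
```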
I liked reading Gambini’s papers, but I think the calculations regarding the undecidability point are controversial, given that a full explanation would require a theory of quantum gravity. And if my preferred approach to QG is right -- where time is a fundamental aspect of a pre-gravity microscopic quantum theory, and the particles and space-time geometry of current theories are emergent regularities -- then I suspect that the constraints on describing a physical clock would not arise in the same way as they do here.
Monday, June 01, 2009
5th Blogiversary Post
Thank you to everyone who has visited. I’m extremely grateful to those who have commented: the discussions that have taken place have been very valuable.
(Brief retrospective meta-blogging follows.)
Actually, I “cheated” at the beginning. The first ten posts from June 2004 were drawn from an essay I had previously written and distributed to some friends and family members, which I called (tongue-in-cheek) “Steve’s Guide to Reality”. After a little while, though, the blog started to show some life, becoming more interactive (thanks to Justin and Peter for early comments). While the equilibrium frequency of posting turned out to be pretty low, it has held fairly steady.
I’m happy with the results. In addition to the comment and e-mail dialogues the blog has engendered, it has helped organize and record my thoughts much more effectively than the previous battered-notebook approach. Also, blogging has obviously facilitated taking advantage of the explosion of resources available to read and link to on the internet. I’m indebted to all the scholars, students and fellow laypeople who blog or otherwise post their work online.
Blogging is, then, a valuable tool which is helping me toward a goal: the development of a metaphysical worldview. This has been my number one “hobby” since I was a teenager, and I think I’ve made more progress in these last five years than ever before.
Thanks again.
Tuesday, May 19, 2009
The Free Will of Fruit Flies
I liked this short essay in Nature by Martin Heisenberg on free will (HT). Heisenberg is a neurobiologist (and the son of Werner), and his perspective is shaped by the work he’s done on more primitive organisms – in the essay he talks about bacteria and also the fruit flies (the famous Drosophila) which have been the subject of his own work. In short, the combination of microscopic randomness (ultimately sourced from the quantum realm) and an adaptive self-directedness (primitive intentionality -- see also here) constitutes freedom.
In the human case, the discussion is obscured by the focus on the will found via introspective self-consciousness and its relation to our actions. But introspection is an extreme latecomer in evolution. Heisenberg suggests we have a kind of real freedom, shared with many other organisms, irrespective of whether our introspective picture is accurate or flawed. He says: “I maintain that we need not be conscious of our decision-making to be free. What matters is that our actions are self-generated…Why should an action become free from one moment to the next simply because we reflect upon it?”
Wednesday, May 13, 2009
John P. Meier: A Marginal Jew Volume 4
While it's off the main topic of the blog, I've long been interested in research into the history of early Christianity, and have a few posts in the archives on books and articles I've read. I have a series of posts about the first three volumes of A Marginal Jew -- John P. Meier's opus on the historical Jesus (here, here, and here). Now the fourth volume is out, and I'm unlikely to get to it for some time. It happens there is a great summary and review at Loren Rossen's blog, the busybody; so I will happily outsource the work this time.
On a related topic, April DeConick has an interesting series of posts on the beginnings of Christianity going at her blog (first post in series here). I had a post reviewing her book on the Gospel of Judas here.
Monday, May 11, 2009
Fine Idea on the Measurement Problem
Following up on the topic of the last post, I found this paper by Arthur Fine, called “Measurement and Quantum Silence”. It is a clearly written summary of Fine’s approach to the quantum measurement problem (which I was introduced to by the work of Mauricio Suárez).
The term “quantum silence” refers to the Copenhagen interpretation-motivated suggestion that we just shouldn’t ask about the reality of the quantum system in superposition: there is no need “to talk about the value of an observable unless the state of the system is an eigenstate, or a mixture of eigenstates, of the observable in question.” We prepare a quantum system and calculate that its pure state evolves according to the Schrödinger equation. The problem is that when we measure the system it behaves “as if” what was evolving was only a mixture of the eigenstates of the particular observable we’re measuring. Fine introduces an interesting perspective on this, which is that the measurement process involves an information loss. It certainly seems that the rest of the information contained in the evolving pure state vanishes or becomes irrelevant when a measurement takes place.
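The pure-state/mixture distinction at work here can be shown in a few lines (my own sketch, not an example from Fine's paper): the superposition (|0> + |1>)/√2 and the 50/50 mixture of |0> and |1> give identical statistics for a z-basis measurement, but the mixture has discarded exactly the phase information that shows up when you measure in a different basis.

```python
import math

# Two ways to assign a state to a qubit (my illustration): the pure
# superposition (|0> + |1>)/sqrt(2), and the 50/50 mixture of |0> and |1>.
# Density matrices written as 2x2 nested lists (real entries suffice here).

pure    = [[0.5, 0.5], [0.5, 0.5]]   # |+><+|; off-diagonals carry phase info
mixture = [[0.5, 0.0], [0.0, 0.5]]   # classical coin flip over |0> and |1>

def prob(rho, v):
    """Outcome probability <v|rho|v> for a real measurement direction v."""
    a, b = v
    return a * (rho[0][0] * a + rho[0][1] * b) + b * (rho[1][0] * a + rho[1][1] * b)

z0 = (1.0, 0.0)                                 # z-basis outcome |0>
x_plus = (1 / math.sqrt(2), 1 / math.sqrt(2))   # x-basis outcome |+>

print(prob(pure, z0), prob(mixture, z0))          # 0.5 vs 0.5 -- identical
print(prob(pure, x_plus), prob(mixture, x_plus))  # 1.0 vs 0.5 -- they differ

# For the measured (z) observable the two assignments are indistinguishable;
# the "lost" information is exactly what the x-basis statistics would reveal.
```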
Fine reviews other proposals (hidden variable and GRW-type collapse theories) and assesses how they deal with this loss of information, and he finds that while they try to replace the “brute” measurement process with a physical process meant to be more explicable, they don’t really explain the loss of information. And perhaps this can’t be done.
So, Fine makes his own proposal, which I described in the last post (although in this particular paper Fine doesn’t label the idea as “selective interactions”). Since the system behaves “as if” it were an evolving mixture rather than a pure state, why not assume that this is actually what happens? Rather than looking to explain the loss of information which takes place in a measurement “farther down the line”, he proposes that the preparation actually replaces the full state with the mixture in advance.
Here’s how he motivates the idea: “There is a physical rationale for this procedure. It is that in making a measurement we do not interact with all the variables of the measured object. We only observe the particular aspect of the object that corresponds to the variable being measured….” The information that is lost pertains to aspects of the object to which the measuring device does not respond. The “aspects” here are what Suárez adopts as “propensities” in his work.
So, what to think? Well, in a nice aside in this paper, Fine credits philosopher of science Heinz Post* for suggesting a “conservation law” for problems in quantum theory: if you resolve one aspect of quantum strangeness, this tends to just shift it elsewhere. In Fine’s case, he addresses the puzzle of the measurement process with an account which says that the way we prepare a system alters the system from a pure state to a mixture prior to evolution. It’s an interesting idea and the pursuit of it offers additional illumination of the landscape of quantum mechanics. But in my case I’m happy to have collapse as an additional natural process which describes physical interactions between systems – I don’t feel a need to shift the strangeness in the way Fine suggests. And I don't think interpreting quantum systems as bearers of propensities turns on whether you adopt Fine's proposal.
*I would provide a link for mentioning Heinz Post, but very little comes up in an internet search. He was a professor at Chelsea College (which later merged into King's College, London). You get the impression he was an influential colleague and mentor, but published hardly at all.
Thursday, April 30, 2009
Suárez on Quantum Propensities
Interpreting dispositional or power properties as propensities seems to me to be a very promising avenue for ontology. This is because theories employing powers (the ones I’ve seen) don’t get the modal structure of the world correct: by assuming that powers entail their manifestations, they fail to provide truthmakers for possibilities. Taking powers to be probabilistically manifested propensities solves this problem. It also bolsters a realist account of causality: actualizing propensities into specific outcomes gives causation some “real work” to do.
Finally, propensities can serve as a link between a philosopher’s ontology and the interpretation of quantum mechanics.* I was happy to find recently (via Philpapers) that propensities have a champion in Mauricio Suárez, a philosopher of science at Complutense University of Madrid. In several papers he has explored and advocated the propensity approach to understanding the properties of quantum systems. He also has pointed out the value of propensities to the dispositional/power property approach to ontology.
Popper and propensities
Propensity theory seems to be a relatively neglected topic these days. The work of Karl Popper may be one reason for this. Given Popper’s stature, the fact that his propensity interpretation of probability is widely regarded as a failure is discouraging. But Suárez makes a strong case that the problems with Popper’s theory are the result of his emphasis on interpreting quantum probabilities specifically, and also due to some particular assumptions which can be set aside or modified.
Popper wanted to interpret quantum probabilities using propensities, and he also thought propensities could be used to interpret probability in an objective manner generally. This effort has been roundly criticized. An important criticism is that known as Humphreys' paradox (after Paul Humphreys, see this paper). Humphreys pointed out that the asymmetric causal nature of propensities made them inconsistent with the symmetric character of conditional probability. But this paradox is only a problem for propensity interpretations of probability. When it comes to quantum theory, Suárez makes a wise point when he says that what we want to do is interpret quantum mechanics, not quantum probabilities. The probabilities observed in experiments would be explained by our account of quantum mechanics. This “clicked” for me: in my own reading of papers on interpreting quantum probability, I had found that the arguments tended to point toward subjective or Bayesian interpretations (see posts here and here), but this work didn’t seem to help one progress toward a satisfactory ontological interpretation of the physics. Perhaps it is better to interpret the ontology first.
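A toy calculation (mine, with invented numbers -- not from Humphreys or Suárez) shows the mismatch: Bayes' theorem mechanically yields the inverse conditional probability, but there is no sensible reading of that inverse as a causally directed propensity.

```python
# A toy version of the mismatch behind Humphreys' paradox (my illustration).
# Suppose a source has a propensity 0.9 to trigger a detector, the detector
# also fires spontaneously 5% of the time, and the source fires on half of
# all trials.

p_s = 0.5            # P(source fires)
p_d_given_s = 0.9    # forward propensity: source -> detection
p_d_given_not_s = 0.05

# Total probability of a detection:
p_d = p_s * p_d_given_s + (1 - p_s) * p_d_given_not_s

# Bayes' theorem gives the inverse conditional without complaint:
p_s_given_d = p_s * p_d_given_s / p_d
print(round(p_s_given_d, 4))

# This number is a perfectly well-defined probability, but it makes no
# sense as a "propensity" of the detection to bring about the earlier
# firing of the source: propensities are causally directed, conditional
# probabilities are not.
```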
With regard to some of Popper’s other assumptions about the nature of quantum propensities, Suárez explains in the paper “On Quantum Propensities: Two Arguments Revisited” how two other criticisms of Popper’s view may be avoided by a revised account of propensities – specifically Suárez’ “selective propensities” proposal.
Selective Propensities
In addition to the paper mentioned above, Suárez has two other papers which discuss his selective propensities approach. In “Quantum Propensities”, he looks at some other historical attempts to employ propensities to interpret QM, and then contrasts his own proposal. 2004’s “Quantum Selections, Propensities, and the Problem of Measurement” develops the approach in the most detail, showing how it builds on Arthur Fine’s “selective interactions” solution to the quantum measurement problem.
Suárez’ approach is new to me and I’m still trying to understand it (I had not been exposed to Fine’s work before either). It seems that in the selective propensity interpretation, a quantum system possesses a number of dispositional properties coinciding with the observables we measure in experiments involving particles. These properties manifest themselves consistent with the probability distributions we observe in QM. We assert that in a measurement one interacts only with the property of the system selected. The interpretation then says that to explain the result, we can employ a mixed state of that property’s eigenstates to describe the initial preparation, rather than plugging in the full quantum state of the system. (The full quantum state encompasses all of the system’s properties.) We can still interpret the interference effects which result in some experimental setups as due to the interplay among the system’s various properties consistent with the full state superposition of the system.
This seems to imply that the description of the initial state of the system (setting up either a mixed state over one observable or the full state) is altered by how we set up the experiment. This seems strange at first glance, but I guess there’s always going to be something strange when you’re working with QM. I also wonder how to think about generalizing this scheme to understand how interactions work beyond the laboratory setting.
I’ll try to follow up with more after re-reading and digesting this material further.
----------------------------------------------------------------------------------
* I’ve often thought about the issues involved when philosophers try to make sure their metaphysical ideas comport with physical theories. On the one hand, philosophers very much want to avoid proposals which seem to conflict with science. On the other hand, since our physical theories are provisional (and likely to be replaced in time by improved theories), perhaps philosophers shouldn’t worry if well-motivated ideas imply revision to current scientific understanding. I’ve seen relativity theory invoked to criticize philosophical positions (e.g. presentism in the discussion of time – see an abstract of what looks like an interesting paper here), but many recent research programs in quantum gravity explore the idea that relativity is an effective (low-energy regime) theory rather than something fundamental. The search for a theory of quantum gravity implies that relativity, quantum mechanics, or both will need to be revised.
So, while I personally want my metaphysical theory to accommodate quantum mechanics (and worry less about conflicts with relativity), I realize that this is tricky territory. It seems best to just be explicit about one’s presumptions.
Monday, April 20, 2009
GPPC 2009 Public Issues Forum
This annual Greater Philadelphia Philosophy Consortium event is coming this Saturday afternoon April 25th, hosted at Drexel University. The topic this time is Just War Theory.
The three speakers:
Larry May (Washington University) on "Collective Responsibility in Warfare"
Robert D. Sloane (Boston University School of Law) on "The Cost of Conflation: The Dualism of Jus ad Bellum and Jus in Bello in the Contemporary Law of War"
Peter Tramel (U.S. Military Academy at West Point) on "Conscientious Objection and Volunteer Military Service"
Chair: Anil Kalhan (Earle Mack School of Law at Drexel)
April 25, 2009 1:00-5:00 pm Room 140 Earle Mack Law School, 3320 Market St., Drexel University, University City Campus, Philadelphia (Directions).
(Reception to follow).