This is a new topic for this blog, prompted by my recent re-reading of the three volumes of John P. Meier’s A Marginal Jew: Rethinking the Historical Jesus.
[UPDATE 16 March 2009: Please note that volume 4 will be published in May 2009 (amazon link). It focuses primarily on the relationship between Jesus and Mosaic Law.]
---------------------------------------------------------------------------------
Sometime in the late 80’s I was browsing in my local library and a book caught my eye: The Quest of the Historical Jesus, by Albert Schweitzer. Both the subject and the author sparked my interest. I knew of Schweitzer through his reputation as a great humanitarian doctor in Africa; it turns out he was also a philosopher, theologian and New Testament scholar. And what more fascinating subject could one imagine than the study of what we might discover about the “historical Jesus” (as opposed to the Jesus of religious faith)?
A quick digression: I remember once as a teenager reading the gospels on my own (possibly for the first time) and being absolutely blown away by noticing the passages in Matthew and Mark referring to Jesus’ brothers. Four are named in Mark 6:3: James, Joses, Judas and Simon; I happen to have four brothers, and so, it turns out, did Jesus! How was it that in all those years of (Catholic) church and CCD classes, I never heard anything about this?! The status of James as a leader of the early church in Jerusalem (Galatians 1:19) makes it yet more surprising that it took so long for me to learn about this. (I later learned that there is a long history in the Catholic Church of construing the Greek word translated as brother as meaning cousin, but there is little basis for this. I should also mention that the passages refer to Jesus’ sisters – but they are not enumerated or named.)
Anyway, while the historical Jesus was a topic I had thought about, I had never before read anything about it. In his book, Schweitzer summarized and critiqued what is now called the “First Quest” (or the “Old Quest”) for the historical Jesus, which took place mainly among German scholars in the 18th and 19th centuries. The book, published in 1906, serves as a reference point for the state of scholarship on the subject at the beginning of the 20th century.
Some key themes in modern scholarship already appear in Schweitzer. Foremost is the challenge posed by the paucity of evidence. There are very few useful references outside the canonical gospels. The gospels themselves were written several decades after Jesus’ time. They are formed from composite sources with extensive creative redaction by the final authors – authors who were driven not by a desire to record history but by the priorities of their struggling early church communities. As Schweitzer showed when presenting the work of a great diversity of scholars in his review, the greatest danger is that of merely finding the Jesus of one’s desires or imagination within the skeleton of clues embedded in the gospels. A theme of the book was the conflict between the “rationalists”, who tended to discover a Jesus congenial to the standards of the Enlightenment, and theologians who fought a rearguard action to find a Jesus consistent with their faith. Still, there was progress made in some of this early work. For example, a conclusion still regarded as well-founded by most (never all!) scholars is the two-source theory of the Synoptics. This refers to the priority of Mark as the earliest gospel, which was used in turn as a source for Matthew and Luke, and also to Matthew and Luke’s shared use of a written collection of sayings, called the ‘Q’ document. (A brief history of historical Jesus scholarship is here).
The late 80’s and 90’s turned out to be a very active period in the field (the “Third Quest”). The work has benefited from much greater sophistication in source and form criticism and from concurrent advances in the study of the social, political and religious environment of the time and place. But the spectrum of conclusions drawn by scholars is (unfortunately) still very wide.
A great amount of media attention was generated by the Jesus Seminar in this period. The Seminar, founded by Robert Funk in 1985, was composed of a group of 30 or more scholars who met to consider the historicity of the gospels. This effort stirred up much controversy, for three reasons that I could see. First, the group approached the task with revisionism in mind. One of the seven pillars guiding the effort mentioned in the introduction to their first work, The Five Gospels, was that the texts were deemed ahistorical until enough evidence showed otherwise; another stated that supernatural material was inherently not historical. Second, they adopted a voting method (using colored beads) which, as implemented, tended to give greater weight to skepticism (only 18% of the words of Jesus received the highest “red” treatment for historicity). Finally, Funk made public relations a fundamental part of the seminar’s agenda, and this naturally stirred up critics much more than if the debate had remained contained within academia. I sometimes thought when reading The Five Gospels that the actual description of the proceedings on the various passages suggested a more nuanced and reasonable study than the headlines (and final color coding) implied. The premium placed on generating headlines extended to the title of the book itself: the Seminar elevated the Coptic Gospel of Thomas (found at Nag Hammadi) to a place alongside the canonical gospels. Yet when you looked at the Seminar’s conclusions with regard to Thomas, there was very little in it that they deemed historical which didn’t already have a canonical parallel. Still, the seminar’s overall approach gave critics ammunition for the charge that their methods led to biased outcomes.
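As an aside, the bead arithmetic itself helps explain why so little ended up red. Here is a small illustration of the scoring method as it is commonly reported (the weights, cutoffs and vote counts below are my assumptions for illustration, not taken from the Seminar’s own materials): a passage that splits the fellows evenly between “certainly authentic” and “certainly not” averages out to gray, so mere disagreement registers as skepticism.

# Illustrative sketch of the Jesus Seminar's weighted-bead voting.
# Weights and cutoffs are as commonly reported; treat them as assumptions.
WEIGHTS = {"red": 3, "pink": 2, "gray": 1, "black": 0}

def rate_passage(votes):
    """votes maps bead color -> number of fellows casting that vote;
    returns (score rescaled to 0..1, resulting color band)."""
    total = sum(votes.values())
    score = sum(WEIGHTS[c] * n for c, n in votes.items()) / (3 * total)
    if score > 0.75:
        band = "red"
    elif score > 0.50:
        band = "pink"
    elif score > 0.25:
        band = "gray"
    else:
        band = "black"
    return score, band

# An evenly split vote lands in gray ("probably not historical"):
print(rate_passage({"red": 15, "black": 15}))   # -> (0.5, 'gray')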
This highly publicized conclusion – that only a small part of the gospels was historical, and that so many of Jesus’ memorable words and acts were theological and mythological inventions of the early church – met with vehement opposition. The unfortunate consequence is that an extreme dichotomy seemed to be set before the public: a rational historical look at Jesus leads to radical rejection of most of what was distinctive about the Jesus of the Christian faith; therefore the response of Christians should be to reject the legitimacy of the concept of such a historical project. One prominent critic whose book I read was this one by Luke Timothy Johnson. The book offers some criticism of the research and methodologies of the Seminar and other writers, but it ultimately comes down to an argument that the historical project is misguided and the only “real” Jesus is the Christ of faith. Other conservative Christian scholars have engaged in more rigorous defense of the historicity of the gospels with the goal of providing apologetic material to their Christian readers (one example would be William Lane Craig).
But apologetics are of little use to an open-minded reader who wants the best and most objective study of the subject possible. While the Jesus Seminar’s approach may have been biased, apologetics by definition prejudge the matter.
Of the books I’ve read so far, I believe John Meier’s work comes closest to this admittedly impossible goal of objectivity. I will follow up with another post in which I’ll discuss some of the reasons for my opinion and give brief examples of his method and results. For now, here is a link to an article he wrote in 1999 about the third quest which gives a flavor of his approach.
Thursday, December 22, 2005
Wednesday, December 21, 2005
The ID Decision in Dover
I'm pleased with the outcome of the Dover case on Intelligent Design (the decision is here, and a blog post with links about the case is here). I want to emphasize one part of this. What I strongly object to is the effort to place ID in public school science classrooms ("full stop"). I have no objection to full and open debate about the merits of ID in terms of philosophy of science, metaphysics, or, of course, theology. While the judge was extremely harsh regarding the religious motives of the board and the claims that ID was a scientific alternative to evolutionary theory, I think it is important to emphasize this part of the decision's conclusion:
"With that said, we do not question that many of the leading advocates of ID have bona fide and deeply held beliefs which drive their scholarly endeavors. Nor do we controvert that ID should continue to be studied, debated, and discussed. As stated, our conclusion today is that it is unconstitutional to teach ID as an alternative to evolution in a public school science classroom."
"With that said, we do not question that many of the leading advocates of ID have bona fide and deeply held beliefs which drive their scholarly endeavors. Nor do we controvert that ID should continue to be studied, debated, and discussed. As stated, our conclusion today is that it is unconstitutional to teach ID as an alternative to evolution in a public school science classroom."
Thursday, December 08, 2005
Stapp and Quantum Agents
For several decades now, physicist Henry Stapp has been publishing books and papers which develop an explanation of human consciousness grounded in quantum mechanics (an online list of Stapp’s papers from recent years is here). I just read this new article, published in the Journal of Consciousness Studies. This particular article includes a review of his general approach in layman-accessible terms, and then highlights his proposed linkage of the efficacy of conscious will to the quantum Zeno effect (a bare-bones sketch of the effect itself appears at the end of this post). This is interesting stuff, and I would highly recommend that students of consciousness read Stapp. For now, I have a brief comment (which is off the main topic of the paper) on a set of long-standing questions I have about Stapp’s work: if quantum effects are needed to explain the workings of the human brain, aren’t they also needed to explain other macroscopic phenomena? Is it only in the human brain where a classical approximation fails to provide a full explanation? In terms of causation, are human beings the only quantum agents?
In other words, I feel like we’re missing some steps in our description of reality. Humans, after all, evolved from lower forms of life. Life itself was bootstrapped out of the inorganic world. While human consciousness is unique in so many ways, it seems most plausible that humans leverage capabilities inherent in other natural systems, rather than utilize utterly unique mechanisms.
Stapp seems surely right when he says that certain brain processes are grounded in the quantum realm (he cites the size of ionic channels between synapses as being on a scale where quantum effects must exist). But couldn’t this be true of cellular processes outside the brain, too? How about in single-celled animals? If quantum interactions (=measurements) are the raw material of the macroscopic world (as I speculated in my recent post), shouldn’t this be in evidence in realms other than the human brain?
Here’s a comment from Stapp on this (made somewhat as an aside):
"But if one considers the Von Neumann theory to be an ontological description of what is really going on, then one must of course relax the anthropocentric bias, and allow agents of many ilks. Yet the theory entails that it would be virtually impossible to determine, empirically, whether a large system that is strongly interacting with its environment is acting as an agent or not. This means that the theory, regarded as an ontological theory, has huge uncertainties.
However, our interest here is the nature of human agents. Hence the near impossibility determining the possible existence of other kinds of agents, will mean that our lack of information about the existence of those other possible kinds of agents will have little or no impact on our understanding of ourselves."
This seems too quick a dismissal of the ubiquity of quantum agents. For what it’s worth, here’s my alternative view of how things could work. Macroscopic systems in nature can be described in terms of systems which coordinate quantum micro-agents. The raw material of first person experience and intentionality comes from small quantum interactions, which are then leveraged through special functional networks in human brains to give rise to the familiar large scale features of consciousness. This account would be consistent with the fact that consciousness is tied intimately to the specific structures of the brain, while also addressing why the deepest mysteries of consciousness (why there is first person experience and intentionality at all) are impervious to description in classical terms.
The research agenda to get at these issues would include trying to figure out whether single-celled animals might utilize quantum effects and whether quantum physics plays any role in an account of the origin of life.
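Appendix: the quantum Zeno effect in a nutshell. This is the standard textbook sketch of the effect Stapp invokes, not his specific application to attention and brain dynamics. For a system prepared in state $|\psi\rangle$ and evolving under a Hamiltonian $H$, the probability of still finding it in $|\psi\rangle$ after a short time $t$ is approximately
$$ p(t) = \big|\langle \psi | e^{-iHt/\hbar} | \psi \rangle\big|^{2} \approx 1 - \left(\tfrac{t}{\tau_Z}\right)^{2}, $$
where $\tau_Z$ is a characteristic “Zeno” time set by the energy spread of the state. If the system is instead checked $n$ times during an interval $T$, the probability that it survives every check is roughly
$$ p_n(T) \approx \left[\, 1 - \left(\tfrac{T}{n\,\tau_Z}\right)^{2} \right]^{n} \;\longrightarrow\; 1 \quad \text{as } n \to \infty . $$
Rapidly repeated measurement thus tends to hold the state in place; Stapp’s proposal, as I read it, is that something like this sustained “holding” is how conscious effort could make a difference to what the brain does.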
Friday, December 02, 2005
Local vs. Global Possibility and the Link to Causality
Below is a post inspired by reading this paper by Georgetown’s Alexander Pruss (thanks to Tychic for the pointer to this). It has a number of interesting ideas relating to modality and causality.
Early on in this paper, Pruss makes the point that our everyday modal claims are mainly about “local” or adjacent possibilities. For example, “were I to drop this glass, it would fall”. Importantly, our intuition is to view these claims as being about possibilities local to us in this concrete world, not located in another possible world.
He then discusses the philosophical tradition of analyzing modal statements in terms of possible worlds -- we might call this a “global” treatment of modality. Pruss concurs with the consensus on this subject that there are compelling analytic reasons to approach modality in this way. He also discusses the idea of truthmakers for modal claims, which leads to a discussion of realist accounts of possible worlds.
He spends part of the paper critiquing two main versions of “possible worlds” modal realism: the concrete version of Lewis, and the abstract versions of Plantinga and Robert M. Adams (with emphasis on Adams). One of his two main critiques of Lewis is the charge that inductive inferences about the future are invalid in this scheme (the second is an argument about ethical paradoxes). We could be in a world where gravity fails tomorrow! This point is familiar from discussions of Hume’s treatment of causation. Lewis himself described his overall scheme of counterfactual analysis of causation plus modal realism as Humean in spirit, so presumably he wouldn’t have been bothered by this criticism.
Pruss mentions a prominent criticism of abstract possible worlds, which is that what distinguishes one possible maximal state of affairs as our concrete world is unexplained – it is a primitive of the theory.
His next criticism is the important one for where he’s going in this paper: in the abstract systems, all of these possible worlds already exist, prior to any apparent need for them to explain a local possibility in our concrete world. The simultaneous existence of all these abstract possibilities is a problem for Pruss, because it conflicts with the Aristotelian maxim that “actuality is prior to possibility”. And if the maxim is valid and possibility is grounded in actuality, then it means that the actuality “has some powers, capacities or dispositions capable of producing that possibility, which of course once produced would no longer be a mere possibility.”
This again ties the discussion of modality back to a discussion of causality. Abstract entities are usually viewed as categorically unable to enter into causal relations. Aristotle’s system of causation is a real causal production system which follows from the causal capabilities and dispositions of actual entities in the (concrete) world.
If what Pruss says is right and this Aristotelian view can produce possibilities, could this be the basis for a full alternative treatment of modality which obviates the need for a scheme like Lewis’ or Adams’? The idea is that “a non-actual state of affairs is possible if there actually was a substance capable of initiating a causal chain…that could lead to the state of affairs we claim is possible”.
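A rough formal gloss of that idea (my own schematic rendering, not Pruss’s notation), with $S$ a non-actual state of affairs:
$$ \Diamond S \;\leftrightarrow\; \exists x \,\big[\, \mathrm{Actual}(x) \;\wedge\; \mathrm{CanInitiateChainTo}(x, S) \,\big], $$
where “CanInitiateChainTo(x, S)” abbreviates “x has (or had) the causal powers to initiate a chain of events that could terminate in S obtaining”. The modal work is done not by other worlds but by the powers and dispositions of actual substances.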
Pruss himself sees two obstacles for this approach. First, while it works for local possibilities, it’s not clear you can get global possibilities (whole possible worlds) out of it. The second problem is the initial grounding of the contingent concrete things to begin with: how did this party get started?
Finishing the paper with a theistic turn, Pruss proposes that a possible solution to these two problems would involve a necessary first cause. A possible world which is not actual would be linked to the actual by following the chain of causation backward until we found a starting point which could serve as a branching point for both worlds. This would be the first cause.
So, starting with an analysis of what makes modal claims true, we’ve taken a roundabout route to the cosmological argument for the existence of God. Whew! (In a second paper here, Pruss offers further arguments about the Principle of Sufficient Reason, the Cosmological Argument and related topics).
There is lots of thought-provoking stuff here. The point I want to emphasize again is the linkage between solving the problems of modality and causality. David Lewis’ system for handling both topics seems rigorous and consistent, but violates our intuition that real causation and possibility are active in our concrete world. It’s unclear that abstract possible worlds can be linked to causality, given the normal view of abstract objects as causally inert, so this is a weakness of that approach. Using one view of real causation, the Aristotelian one, Pruss can get to a treatment of modality, but it requires a big commitment to a necessary first cause to make it work. I'm not sure about that move, but it seems right to me that the path to a solution does depend on working out a system of real causality.
Monday, November 21, 2005
The Duality at the Heart of Physics
I’m speaking of the measurement problem or paradox in quantum physics. On the one hand we have the continuous deterministic dynamical evolution of the wave function (in Schrödinger’s formulation of quantum mechanics), and on the other we have the discontinuous process of measurement which “collapses” the wave function into a definite state. What, if anything, does quantum physics mean for the nature of reality?
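To make the duality concrete, here is the standard textbook way of writing the two processes (von Neumann’s Process 2 and Process 1, respectively); this is generic quantum mechanics, not a gloss on any particular interpretation. Continuous, deterministic evolution:
$$ i\hbar \, \frac{\partial}{\partial t} |\psi(t)\rangle = \hat{H}\, |\psi(t)\rangle , $$
versus the discontinuous “collapse” on measurement of an observable with projectors $\{P_k\}$:
$$ |\psi\rangle \;\longrightarrow\; \frac{P_k |\psi\rangle}{\lVert P_k |\psi\rangle \rVert} \quad \text{with probability } p_k = \langle \psi | P_k | \psi \rangle . $$
The first process is linear and reversible; the second is neither, and nothing in the formalism itself says when or why it occurs.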
Interpretations of this problem historically have tended to devalue the ontological status of one or the other side of this duality. Some versions of the Copenhagen interpretation treated the wave function as a mere calculation framework which shouldn’t be accorded the status of something real. In recent times, variants of the many-worlds hypothesis have become more popular: these emphasize the reality of the well-behaved wave function, and dismiss our perception of the everyday classical world as either a limited or illusory view of true reality. As I wrote in (the last part of) this post, more careful analysis of Niels Bohr’s own views tends to show he understood that the two quantum processes were irreducibly interdependent.
My own view is that quantum measurements are the events which make up the concrete fabric of our world. The wave function also exists, however, and can be considered the space of abstract possibilities which provide the raw material for the actualization of each event. While we can only observe "inter-measurement" phenomena associated with the wave function (e.g. entanglement) in carefully constructed laboratory situations, it comprehensively enters into our everyday world as well.
In fact, each larger system in nature is an extended event complex which continually self-implements measurements as interactions with the environment. Each set of measurements gives rise to a new possibility space which is raw material for further events.
Friday, November 18, 2005
Notes on Plantinga's Modal Realism
Here are some brief notes on Plantinga’s system of modal realism. I have not found this to be easy territory. Sources: Essays in the Metaphysics of Modality, and Christopher Menzel’s SEP article on Actualism.
In place of possible worlds we have a full set of abstract maximal states of affairs. These “worlds”, along with the concrete world we know, are all part of existence. The difference between an abstract maximal state of affairs and our everyday world is that ours happens to be the maximal state of affairs which “obtains” -- this obtaining is analogous to a proposition being true. Note: in some of Plantinga’s writings, he says our world is the actual world, and the abstract ones are not actual, yet they exist. It’s easy for “those of us at home” to be confused when the words ‘actual’ and ‘exists’ are divorced! But leaving this aside, it is a primitive fact that one world – ours – is picked out as the concrete world. Lewis was extremely critical of this aspect – in his system actual is a merely indexical term: the actual world is the world I happen to be in, but otherwise the worlds exist on even-handed terms relative to each other.
Another key aspect for Plantinga is that for any object/individual there exists an individual essence. This is unlike Lewis’ counterpart theory, where I have counterparts in other worlds, and there is no dogma saying precisely when the counterpart’s properties differ enough from mine that it is no longer worthy of being called my counterpart. The postulate of individual essences also seems a brute primitive. It serves to replace possible individuals in modal statements with something ‘actual’. But it’s not an actual individual – it is a (Platonic) essence. The notion of individual essence seems to me a way to avoid the question of what it is that makes a bundle of properties an individual. (Some technical objections to this part of the system are also summarized in Menzel’s article.)
Plantinga takes seriously that we need to have truthmakers for modal truths (which I like). He also thinks there should be no non-existent objects or mere possibilia as truthmakers, so everything needs to be explained via things which exist. To make it work, he needs a panoply of abstract entities to cover all the angles. I don’t have a problem with this in and of itself, since I am open-minded regarding the existence of abstract objects. The two issues I have are those mentioned above: first, the way our world gets picked out of the set of abstract worlds is a primitive; second, the postulate of individual essences (as distinct from individuals themselves) is a feature I find somewhat gratuitous.
Friday, November 11, 2005
Platonism on Tap at Maverick Philosopher
Monday, November 07, 2005
Driven to Abstraction
Plato just won’t go away.
I’ve been thinking about whether the truthmakers for modal truths could involve abstract possible worlds, but this requires backing up a bit in order to consider the status of abstract objects in general.
Defining what it is to be "abstract" is not trivial. Lewis made this point in On the Plurality of Worlds. His breakdown of several ways to approach the question is followed by Gideon Rosen in this brief SEP article on Abstract Objects. The most common approach is to define abstract objects in terms of what they are not, and then work out the idea using examples. Abstract objects are neither physical nor mental – usually they are thought of as unchanging and causally inert (the potential role of abstract objects in a theory of causation is something to come back to, however). Numbers and universals (“redness”, “roundness”) are paradigm examples.
This SEP article on Platonism in Metaphysics, by Mark Balaguer, summarizes the state of abstract objects in modern metaphysics. He surveys the landscape by seeing how the main candidates for abstract object status (mathematical objects, properties, propositions, possible worlds) fare under Platonism and its main rivals (nominalism, immanent realism, conceptualism). It’s an exceptionally reader-friendly article, and was a helpful review for me.
One item I thought was interesting is that, according to Balaguer, the strongest argument for Platonism is a truthmaker argument. The things denoted in literally true statements (“3 is prime”) must exist as abstract objects in order to make the statement true. Alternative accounts of how these statements work or attempts to deny their truth all have problems and objections.
Of course, Platonism faces strong objections. Balaguer singles out the epistemological issue as the biggest problem: if abstract objects are non-physical (and exist outside of space-time), how can we have knowledge about them? Different accounts have been proposed regarding how this can work; objections have been lodged, and the debate continues.
There is a running “meta-theme” here which I’ve been thinking about as I’ve tried to survey different topics in metaphysics (ontology, causality, modality). These metaphysical questions are difficult, and simple solutions obviously don’t work or the debates would have ended long ago. What this means to me is that the common presumption that something like physicalist monism should be the “default” metaphysical position is unfounded. More “extravagant” metaphysical systems need to be weighed in the quest to find a better mousetrap for explaining how the world works.
Wednesday, October 26, 2005
The World is not Enough
Picking up the thread on modal metaphysics (most recent post here): If I follow Armstrong’s truth/truthmaker approach to metaphysics and I insist it apply adequately to modal truths regarding necessity and possibility, it seems to lead (contra Armstrong’s own conclusion) to a modal realism involving possible worlds. David Lewis is the leading proponent of the theory that all possible worlds exist concretely. So what are the specific strategies for those who want to avoid joining Lewis? I “hit” the Stanford Encyclopedia of Philosophy to learn more. A brief summary of two relevant articles is below (keep in mind that this stuff is new to me and I’m even more prone than usual to misreading something – please check out the source articles if you’re interested).
There are two broad options here. First, try to deny any need to appeal beyond our single actual world. Second, substitute an account of abstract entities for Lewis’ concrete ones. This latter approach I will come back to in a future post.
Christopher Menzel wrote the SEP article called “Actualism”. Actualism is the thesis that there are no non-actual individuals or worlds. The article discusses various strategies to defend this thesis. Menzel discusses modal logic and the fact that the simplest modal logics generate theorems which are troubling to the actualist. One of them is something like this: the statement {it’s possible that a flying pig exists} implies the statement {there exists something which is possibly a flying pig} (written out symbolically below). He then summarizes how Kripke and others introduced modifications to modal logic meant to prevent the derivation of these sorts of statements while preserving the ability of the logic to do its modal work. The article leads one to conclude these efforts have not met with success: there isn’t a way to have a fully robust modal logic which has an actualist metaphysics as its analogue.
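If I am reading this right, the troubling theorem is an instance of what logicians call the Barcan formula; with $F$ for “is a flying pig”, it can be written:
$$ \Diamond \exists x\, F(x) \;\rightarrow\; \exists x\, \Diamond F(x) . $$
The antecedent looks harmless, but the consequent seems to commit us to something in our domain which is possibly a flying pig – exactly the sort of mere possibile the actualist wants to do without. Kripke-style variable-domain semantics is one standard way of blocking the inference, at the cost of complicating the logic.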
The article also discusses other programs to defend actualism (“world stories"/"world propositions”) which deflate the troubling implied statements into claims that are innocuously about propositions only, not individuals. According to Menzel’s account these attempts also face serious objections – mainly that they don’t do justice to the role modal statements really play in our language and thought. (He also devotes a section to Plantinga’s system of abstract possible worlds; this is something I will spend more time on before commenting.)
Another strategy, which Armstrong himself once advocated, is called “Modal Fictionalism” – the SEP article on this is authored by Daniel Nolan. This is the idea that talk of possible worlds is a useful fictional construct not meant to be taken at face value. While this sounds attractive, the strategy suffers from objections, too. These include technical objections (logical formulations of modal fictionalism can lead to contradictions or circularity). Also, modal statements have more “objectivity” than authored fictions - if they’re not actual, they would be more like “hard” abstractions (mathematics being the paradigm example). A related point is that modal fictions are incomplete, since they suffer from the author’s ignorance about many modal statements. Several other objections are outlined in the article as well.
Now, as I said above, I’m new to all of this, and it’s possible that some strategy exists or will be developed in the future which does justice to analyzing modal language and its role in rational thought while keeping to a deflationary actualist metaphysics. All I can safely say here is that all the attempts in the literature mentioned in these articles are described as having serious outstanding objections.
Monday, October 17, 2005
Materialists Missing the Point
I want to discuss two strategies I see used by advocates of materialism when trying to counter the devastating objection that materialism cannot account for the existence of first-person phenomenal experience. Both strategies make an insightful point, but then overstate the traction of this point on the essential problem, thereby showing that the materialists don’t really "get" the problem.
The first strategy is to improve upon the traditional materialist argument that experiential states are necessarily identical to brain states (“internalist materialism”). Rather, we should identify the contents of experience with external happenings. Here, brain states represent contents which themselves are external to the body (e.g. the impact of light waves on the surface of the apple). I just saw a reference to this strategy in this review of Gregg Rosenberg’s book by Paul Skokowski. Skokowski misses the point. Arguing that (at least some of) the contents of consciousness (“qualia”) are outside the cranium is a good idea, but experience itself is not thereby reduced to a material explanation. First, I still believe the contents have a qualitative character which resists reduction; but perhaps more importantly, qualia are only one aspect of the two-sided nature of experience itself: phenomenal contents exist only in order to be subjectively experienced. As Rosenberg explains better than I can here, experience is a process which by its nature has inseparable dual aspects of an experiencer as well as the experienced. Even if you could reduce one aspect of the process (qualia) to the physical, you cannot reduce both aspects to the physical without effectively eliminating experience from your world-model altogether. Now, to repeat my point from above, I do think the externalist concept of qualia is an important insight to consider. The best description of this I have read is from Max Velmans, whose book Understanding Consciousness I recommend. Because Velmans truly understands the issues here, however, he is not a materialist.
The second strategy is most famously included in Daniel Dennett’s Consciousness Explained. It is a straw man argument which tries to show that the experiencer doesn’t exist. It proceeds by first arguing that the common sense notion of our first-person experience requires the existence of a homunculus which is the observer of qualia (thus breaking into distinct parts what really is an intrinsically dual-aspect process). Then it proceeds to show that our everyday idea of the self (which is like a homunculus) is an illusory concept. Here, the valid insight is that our reflective, introspective sense of self is a fragile and flawed construct which is at least one step removed from experience itself. Many studies show the “self” to be less unified and robust than we tend to think, and Libet’s experiments even put into question whether this self-construct has the ability to impact our actions! But while careful phenomenology will concur that the reflective (deliberative, introspective) self is a psychological construct of sorts (see for instance this post), it has as its precedent and its foundation real and unmediated first-person experience. So the argument that our naïve idea of self is a flawed concept has no impact on the reality of experience itself.
Note on Terminology: I’ve been noticing that both advocates and critics are returning to the use of the term “materialism” rather than “physicalism” to label the position. The idea is that even though we know that the word “material” is unable to do justice to the nature of the phenomena described by present day physics, materialism seems an easier and less obscure term compared to physicalism. BTW, while the incidence of this may be declining, I continue to emphasize that it is a mistake to conflate materialism with naturalism. Naturalism is a broader term which includes materialism as a sub-theory but can also encompass theories which expand upon the entities and/or processes of physics without invoking traditionally “supernatural” entities or processes such as mind/spirit substances or ad hoc divine interventions.
Wednesday, October 12, 2005
Chalmers Still on the Case
David Chalmers (home page, blog) has a new paper (The Two-Dimensional Argument Against Materialism) which provides a definitive and comprehensive update of his work on the “conceivability argument” and related issues. Defense of this argument over the last decade has led him to broaden the turf: the bulk of the paper goes beyond the mind/body issue to a general defense of the premise that conceivability implies metaphysical possibility; in other words, an epistemic premise (about what one can conceive) can lead to a modal conclusion (about what is possible).
I won’t try to summarize the paper in this post (other than a bare-bones sketch of the original conceivability and 2-dimensional arguments below). It is very thorough, and I found the cumulative impact of Chalmers’s responses to various critics to be powerful.
I also think a section tucked in late in the paper is thought-provoking (Section 10 – Modal Rationalism). Here he steps back and wonders about the implications of his defense of the premise that conceivability implies possibility. He talks about the crucial role modality plays in our capacity for rational thought. We need modal concepts to analyze various phenomena rationally -- he says you might call this logical modality and make use of logically possible worlds. And given Chalmers’s analysis in the body of the paper, there is no reason to think logically possible worlds and metaphysically possible worlds are different animals. Interesting stuff. I wonder, as always, about the ontological implications of this: in virtue of what, outside of ourselves, is our capacity for rational modal thought grounded?
Sketch of conceivability and 2-dimensional arguments:
The conceivability argument has three premises (my paraphrase of the basic argument – the argument is presented in iterations of increasing precision over the course of the new paper): 1. A world physically just like ours with no first-person conscious experience is conceivable. 2. If such a world is conceivable, it is metaphysically possible. 3. If it is metaphysically possible, then materialism is false, since materialism is a metaphysical thesis which says physical facts necessarily entail all other facts, including experiential ones. The paper responds to a wide variety of objections posed, but most of the attention is on the second premise.
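In the schematic form Chalmers favors (with P standing for the conjunction of all physical truths and Q for some truth about phenomenal consciousness), the argument runs roughly:
$$ \begin{aligned} &1.\;\; P \wedge \neg Q \text{ is conceivable.} \\ &2.\;\; \text{If } P \wedge \neg Q \text{ is conceivable, then } \Diamond (P \wedge \neg Q). \\ &3.\;\; \text{If } \Diamond (P \wedge \neg Q), \text{ then materialism is false.} \\ &\therefore\;\; \text{Materialism is false.} \end{aligned} $$
(This is my condensed rendering; the paper’s own versions distinguish kinds of conceivability and possibility, which is where the two-dimensional machinery comes in.)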
Let me just mention that the two-dimensional argument of the title (a version of which was already included in Chalmers’ 1996 book) was constructed to respond to what was the most common sort of objection to the second premise. The objection is exemplified by an example of Kripke’s: we can conceive of a world where water is not H2O, yet that isn’t metaphysically possible -- we know that water is H2O. Chalmers breaks down the idea that “water is not H2O” is conceivable into two senses. The first (primary) sense is that there is a possible world where there is watery stuff (doing the job of water) which is not H2O; this is clearly conceivable. In the secondary sense, we stipulate that the stuff in the possible world isn’t merely watery stuff – it really is water – and then it is not conceivable that this water is not H2O. It is the primary sense of conceivability which is utilized in Chalmers’ second premise.
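Schematically (again my paraphrase, using the standard two-dimensional jargon rather than Chalmers’ exact formulation):
Primary (epistemic) evaluation: consider a world $w$ as actual; “water” picks out whatever plays the watery role in $w$. At a world where the watery stuff is XYZ, “water is not H2O” comes out true, so the sentence is primarily possible.
Secondary (subjunctive) evaluation: the referent of “water” is fixed by our actual world as H2O; evaluated this way, “water is not H2O” is false at every world, so it is secondarily impossible.
The second premise of the conceivability argument invokes only the primary sense, so the water case is not a counterexample to it.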
Friday, October 07, 2005
Armstrong on Modality
I read D. M. Armstrong’s new book Truth and Truthmakers. Armstrong is a great philosopher and metaphysician. He is also someone who insists on a materialist-naturalist metaphysics: there is one actual spatio-temporal world, and nothing more. I wanted to see how he handled the topics of modal truths and causality (my recent posts relating to this topic are here and here). I came away very disappointed. So, below you will find the spectacle of an untrained layperson criticizing a brilliant and prominent scholar on a blog. I ask anyone interested to please be willing to point out where I’m mistaken here.
First, let me offer some praise and briefly lay some groundwork. Armstrong is engaged in realist metaphysics in a serious way. The idea of truthmakers is a great one: every proposition, if true, has a truthmaker which makes it true. There must be “some way the world is in virtue of which these truths are true” (p.1). By considering different kinds of propositions, we build up and fill out a metaphysical structure which provides the truthmakers needed. The book introduces and describes the truth-truthmaking relation and his methodology for using it for metaphysics.
Next, Armstrong reviews his building blocks for truthmakers, in particular, his “states of affairs”. His version of this idea, introduced in previous papers and books, is central to the system. I’ll sketch it here and mention a concern with this approach, before I get to the specific issues of modality and causality.
Armstrong asks: What is the truthmaker for a proposition that a particular has some property? It’s probably not enough to say that the truthmaker just is that particular. What if it has other properties besides the one in question – do they come into play? What if it has relational properties which involve other particulars – do they come into play? How do we combine and delineate the relevant parts needed for a truthmaker?
Armstrong digresses to briefly outline the combinations of approaches to particulars and their properties which have been proposed (are properties universals or tropes? do they adhere to a particular’s substance, or is a particular just a bundle? etc.). Armstrong notes some of the advantages and disadvantages but also says not all of these matter for truthmaker theory (he happens to favor immanent universals as properties, by the way). The most crucial issue appears to be how to think of the relation between a particular and its properties. Armstrong says there “must exist states of affairs to get us beyond the ‘loose and separate’ entities” and thereby provide the truthmaker. He says: “States of affairs must be introduced as additions to the ontology.” (Emphasis original, p.49). So, states of affairs are introduced to hold the particular and its properties together by fiat.
Moving from old-fashioned individuals to states of affairs brings in extra ontology, and Armstrong treats the addition as essentially free. It seems to me this is going beyond traditional materialism. I was impressed by Bill Vallicella’s in-depth critique of this notion in his recent book, but I won’t try to do justice to his argument now. I’ll just express my feeling that Armstrong’s binding of properties into individuals via states of affairs seems too easy.
You can also see this at work in how Armstrong builds up to larger states of affairs. Larger states of affairs can encompass and bind smaller ones to provide appropriate truthmakers when needed. To provide a truthmaker for negative truths about the world (e.g. there are no unicorns), Armstrong utilizes the notion of a general or maximal state of affairs for the whole world. This mega-fact sets limits for what is actually in the world. The fact that there is no unicorn supervenes on the highest-order state of affairs. And just as we bound a property to a particular at no ontological cost, Armstrong asserts that the mega-state of affairs encompasses the conjunction of all its smaller constituent states of affairs with no additional cost, despite being “something more than the mereological sum of its constituents” (p.72). (There is an obvious regress issue here, but Armstrong argues it is a benign one.)
The issue of contingency vs. necessity comes up in a few contexts in the book, for instance in whether the connection between a particular and its property is contingent or necessary. But chapter 7 is explicitly on the question of what the truthmakers are for modal truths.
He starts it off thus: “It seems to me very surprising that so many good philosophers consider that huge metaphysical commitments must be made in order to give an account of these truths.” (p.83). He mentions David Lewis and Alvin Plantinga specifically. In Lewis’ system (simplified), something is necessarily true if it is true in all possible worlds; it is possibly true if it is true in at least one world. Something in our world is contingent if it is true but not true in some other possible world. Famously, Lewis says these possible worlds all exist concretely. Armstrong says “...we ought to be looking for quite modest truthmakers, fairly deflationary truthmakers, for these fairly unimportant truths of mere possibility.” (pp.83-4)
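In the usual possible-worlds notation, the definitions just paraphrased come out roughly as:
$\Box p$ (p is necessary) iff $p$ is true at every possible world
$\Diamond p$ (p is possible) iff $p$ is true at at least one possible world
$p$ is contingent iff $p \wedge \Diamond\neg p$ ($p$ is true, but there is a world where it is false)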
Here’s his truthmaker argument for a contingent truth (my paraphrase-see p.84 for this in his notation):
1. Assume that (state of affairs) ‘T’ is the truthmaker for truth ‘p’.
2. Assume p is contingent (my emphasis).
3. p entails 'it is possible that not-p' (from #2 and the definition of contingency).
4. Therefore, T is also a truthmaker for 'it is possible that not-p', and so T is a truthmaker for a contingent truth p. (This is via #1, #3, and from something Armstrong calls the 'entailment principle' that he argues for earlier in the book).
That's it: there's no more. But shouldn't truthmakers for modal truths distinguish a contingent truth from a necessary one? We could use this argument's structure equally well for the case where p is assumed to be necessary. Therefore there is nothing in the truthmaker which determines the truth to be contingent vs. necessary. Therefore it is NOT an adequate argument for a modal truthmaker at all!
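To make the worry explicit, here is the same structure run with “necessary” substituted for “contingent” (my construction, mirroring the paraphrase above):
1. Assume that (state of affairs) ‘T’ is the truthmaker for truth ‘p’.
2. Assume p is necessary.
3. p entails 'it is necessary that p' (from #2 and the definition of necessity, by the same move as step 3 above).
4. Therefore, T is also a truthmaker for 'it is necessary that p', and so T is a truthmaker for a necessary truth p (via #1, #3, and the entailment principle).
The very same T does all the work in both derivations; the modal status of p comes entirely from the assumption made in step 2, not from anything about the truthmaker itself.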
I was stunned by what seemed to me to be an obvious inadequacy in the argument. Could I have missed something? I have read the chapter more than once, but I'll keep at it. As I said in my introductory paragraph, I invite anyone reading this who is familiar with the book to “set me straight”.
Here’s something Armstrong says to close this section of the chapter (7.2, p.86): “…I still favor the hypothesis that particulars, properties, states of affairs and laws of nature are all of them contingent beings.” There is no argument for this; it is just a preference. While it is a hypothesis that certainly seems consonant with our human intuitions, it in no way follows from his system. With no way to distinguish the contingent from the necessary, I would think it more consistent and simpler for him to give up the intuition and argue that all of these constituents and relations are necessary.
I will skip over more material and just briefly touch on Chapter 10 which deals with causation. Armstrong wants to affirm real causal connections, not just Humean regularities. He terms this “singular” causation. How does one do this? He acknowledges the difficulty of the problem and mentions some possible approaches. You can’t just add causal connections between states of affairs without seeming to add still more gratuitous ontology. He favors an idea that the universals which are parts of states of affairs might be vehicles of causal or “cause-like” connections at a higher level than the state of affairs involved. Using universals to do the work would help explain the existence of lawful regularities across the states of affairs of the world.
This work on causal structure within his framework of states of affairs seems fine on its own. But I hold that without an explanation of modality, the discussion is relatively empty. I have thought the heart of real causality is the idea that things could have been some other way. So the earlier lack of adequate treatment of modality makes Armstrong’s discussion of causality of minor interest to me.
Tuesday, September 27, 2005
Notes on Determinism, Modality and Causality
Below are some thoughts I was trying to develop. Everyone is welcome to point out where I'm getting off-base.
1. If determinism is true, there is no modality: everything is necessary. So, if you take modalities like possibility and contingency to be real, determinism is false.
2. If determinism is true, there is no “real” (non-Humean) causality: everything is connected necessarily and symmetrically; the world is fixed forever. Real causality implies modality: things could have happened differently if causation is assumed to involve real “work”. Like modality, real causality is opposed to determinism.
David Lewis would be able to embrace both modality and determinism by postulating concrete possible worlds (which are causally unconnected to ours) as the vehicle for modality. But he is a Humean about causality. There is no real causation in his system.
The only way to embrace modality and real causation is to reject determinism. This implies that the world includes an intrinsic selection or choosing among possibilities within causal events.
An assertion that the choosing which occurs at this point is only objectively random is ad hoc: it has no particular advantage over asserting that the choosing is non-random.
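Put schematically (my own gloss on the notes above; note that the step from determinism to necessity also assumes the initial conditions and the laws are themselves not contingent):
Determinism, on this reading: $\Box\big((I \wedge L) \rightarrow p\big)$ for every truth $p$ about the world’s history, where $I$ is the total initial state and $L$ the laws.
If $I$ and $L$ could not have been otherwise, then $\Box p$ for every such $p$: nothing is contingent.
Contrapositive: if some $p$ about the world is genuinely contingent ($p \wedge \Diamond\neg p$), then determinism so understood is false.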
Tuesday, September 20, 2005
Dreyfus-in-the-World
A philosopher I like is Hubert Dreyfus, the existentialism and phenomenology scholar. Years ago I got more out of Heidegger by virtue of reading his commentary volume Being-in-the-World. Dreyfus is famous for his criticism of artificial intelligence, especially in the early days when many thought HAL from 2001: A Space Odyssey might soon become a reality! His What Computers Can’t Do from 1972 turned out to be prescient when the early versions of AI (now called “good old-fashioned AI” or GOFAI) started hitting tough obstacles.
I just finished reading Dreyfus’ APA Pacific Division Presidential Address, which was called “Overcoming the Myth of the Mental: How Philosophers Can Profit from the Phenomenology of Everyday Expertise.”
When working on the foundations of knowledge and many other problems, many assume (explicitly or implicitly) that our conceptual thinking facility is the primary dimension of mind. Dreyfus argues that our capacity for detached conceptual thinking rests on top of (is derived from) our non-conceptual embodied engagement with our environment.
The fundamental core of mind is not detached, deliberate, rational, or conceptual; it is our facility for skillful coping with the world. This is a key insight derived from Heidegger and Merleau-Ponty. It is also consistent with an evolutionary perspective on the human mind: after all, this core of skillful coping is what we share with animals and infants. What differentiates the mature human mind is that “we can transform our unthinking non-conceptual engagement, and thereby encounter new, thinkable, structures.” Accounting for this transformation is the research project recommended by Dreyfus.
I have liked this line of argument, and think Dreyfus’ perspective can serve to make the problems of perception and knowledge more tractable, reducing a bit the seemingly huge divide between the mental realm and the rest of the world.
Friday, September 16, 2005
Philadelphia-Area Philosophy
Check out the updated website for the Greater Philadelphia Philosophy Consortium (GPPC). The GPPC is a cooperative effort of the philosophy departments of 13 area universities which sponsors conferences and other events. The 2005-2006 program is up; also see the link to the regional calendar, which includes talks open to the public at the various schools (although you should confirm these events with the schools as the dates approach).
Friday, September 09, 2005
Fetal Pain
Recently, JAMA published a survey article on Fetal Pain (the abstract is here; the full text requires a fee). The authors reviewed a large number of research papers relevant to the stage of development at which a human fetus feels pain and, secondarily, to techniques available for direct fetal anesthesia or analgesia in the case of abortion (or therapeutic interventions). Despite the evidence of reflex responses and hormonal responses by the first and second trimester respectively, the authors conclude that analysis of nervous system development indicates fetal perception of pain is unlikely before the third trimester (29 or 30 weeks gestational age). They note that withdrawal reflexes only require peripheral sensory nerves which connect to motor neurons through the spinal column. They assert that psychological awareness of pain requires relatively full cortical functioning, which they conclude arrives at about 29-30 weeks.
The article includes a paragraph acknowledging the context: proposed federal legislation which would require physicians to inform women seeking abortions at 20 or more weeks that the fetus feels pain and to offer anesthesia for the fetus (evidently statutes like this have been enacted in Georgia and Arkansas).
Given the highly charged politics of abortion, the article caught a good deal of attention in the press, and a number of critics have disputed the conclusions and/or complained about pro-choice bias given some of the authors’ past affiliations (links here and here; William Saletan’s take on Slate is here). I argue below that a different conclusion can be reached from the same set of facts.
My interest in this was similar to my angle in my Terri Schiavo and animal consciousness posts: we have a tendency to think of conscious experience as an all or nothing thing, and I’m interested in exploring the evidence that there is more of a continuum of first person experience from minimal to full tilt.
Echoing what I’ve read elsewhere, the authors say the seat of awareness is in the thalamocortical circuitry. While they found no studies of such circuits in fetuses as they relate to pain specifically, they (reasonably) infer conclusions from looking at work on other pathways (like visual and auditory). Now, curiously, the specifics of these studies don’t seem to precisely support the survey article’s conclusion, since they show thalamic projections reaching the cortex at 23-27 weeks. A later part of the paper on electrical activity is invoked to buttress the 29-30 week figure in the conclusion, but this seemed to my amateur eyes less compelling (more indirect) evidence on the question.
Importantly, the authors include a paragraph noting that others have proposed that connection to the cortex could be established indirectly if afferents from the thalamus reach a transient cortical subplate which appears earlier while the layers of the mature cortex are still forming. This happens by 20 to 22 weeks. Given the lack of firm evidence that this connection conveys pain information (as we understand it in fully developed context) the authors don’t give it weight in their conclusion.
The authors do not address the broader question of whether something similar to pain (if more primitive in some sense) exists during development absent any pathway to even a primitive cortical subplate. Is there something in-between the reflex arc and the cortical arc which gives rise to pain-like sensation? At this point we probably have no evidence and can only speculate. But I think it is a reasonable intuition that in different stages of development, intermediate stages of experience may exist.
My problem with this article is not with the factual content, but with the way this is parsed to reach a conclusion. In the absence of compelling evidence that fully developed pain awareness exists prior to 29-30 weeks, the authors conclude it doesn’t. I might place the emphasis differently: while we lack definite confirmation, something similar to our pain awareness might exist at 20-22 weeks, and we just don’t know if a neural response worthy of being compared to pain might exist earlier than that.
I continue to support the legality of abortion; however, I also continue to believe relatively more emphasis should be placed on the procedure being done as early as possible.
Friday, September 02, 2005
Desert Phenomenology
{UPDATE 24 March 2008: Please note the 'Desert Landscape' blog is defunct and the links were broken. A shame}
Uriah Kriegel at Desert Landscapes has had a number of interesting recent posts on phenomenological topics. The most recent discusses Alva Noe and Sean Kelly's take on perceptual constancies, which I discussed in previous posts (here and here). He leans toward Kelly's view as I did.
Thursday, September 01, 2005
Whole Lotta Worlds
I enjoyed reading David Lewis’ On the Plurality of Worlds. Of course, for me it was like entering a theater in the third act: my eclectic self-education has big holes in it which inhibit my comprehension of such a work.
The usefulness of expressing modal concepts in terms of possible worlds seems clear enough (here’s a blog post which outlines this well – hat tip to the latest philosophy carnival). David Lewis argues that the value and utility of possible worlds in modal thinking points to a metaphysical or ontological argument that the full set of possible worlds exists concretely. In the book he defends this thesis.
Most other philosophers would say possible worlds are abstract, like mathematical objects. Even setting aside that the ontological status of mathematical objects is itself a highly controversial topic, Lewis moves from his usual modest tone toward vehemence in criticizing these views as insufficient to supply the full structure needed for possible worlds; further, he finds their treatment of how the actual world is selected from the set of abstract possibilities to be either inadequate or “magical”.
In other words a sufficient proposal for a modal metaphysics either needs to have Lewis’ ontological panoply of concrete worlds, or else we need a fuller account of the ontological status of the abstract world along with a detailed mechanism for selecting the actual world (in Lewis’ scheme, “actual” just means the world we happen to be in, but there is nothing else special about it relative to the other worlds). Note that this means a system with only one actual concrete world and no ‘abstract realm+selection’ endowment is simply an inadequate metaphysics. While Lewis takes this for granted, I think it's interesting that seemingly only a small subset of philosophers would endorse either the Lewisian system or an abstract alternative with full mechanisms for addressing the Lewisian critique. They are presumably content to deal with modality in epistemological and linguistic arenas unburdened by the metaphysical worries.
While Lewis doesn’t explicitly address it much in this book, the other way to parse the metaphysics is in terms of causality. Lewis’ approach enables him to be a Humean about causality: everything in the world just happens. If we want to talk about counterfactual situations that could have happened to us, say, then we are talking about things that happen to our counterparts in other worlds. If instead we want to say there is only one concrete world where things happen, and that these things are caused in a way which could have gone differently (they are contingent), then we need a model of “real” causation that can handle this selection process. These two alternatives exclude the common assumption that there is only one world and that a simple billiard-ball notion of causation is all we need. In such a world there is no possibility or contingency at all. Again, this would be an inadequate metaphysics.
So, what do I think of Lewis’ proposal? I don’t like the Humean/fatalistic aspect of it. I would like to think a selection process happens rather than think that every possible world exists in an even-handed way, and we just happen to be in one. But at this point this is a preference on my part, not an argument.
What I get most out of this is what I’ve been emphasizing in this post: Lewis is an obstacle to those who think they can “get away with” a minimalist metaphysics like single-world physicalism. It’s not enough.
Saturday, August 13, 2005
Summer Break
I'm taking a break from my rigorous once a week (at most) blogging schedule. My beach reading assignment is David Lewis. Here's a naive question: should one's opinion of modal realism be influenced by the fact that physicists for their own reasons are increasingly postulating the existence of multiple worlds?
Thursday, August 04, 2005
Philosophical Physicists
As I’ve written about before (here), theoretical physics has seemed stuck in recent years. Astronomical observations continue to uncover new features of the universe (like dark energy), but these mostly add new mysteries to the list which cannot be explained by current theories. The jury is also definitely still out on older ideas about the early universe (such as the inflationary scenario). Physicists await with great expectations the 2007 debut of the Large Hadron Collider, which offers the possibility of revealing new phenomena at higher energies which may assist theorists. In the meantime, there is plenty of time for physicists and philosophers of science to step back and consider conceptual issues which underlie physical theories. Here are a couple of examples of this I came across recently.
Lee Smolin has a new essay (hat tip to Peter Woit’s blog): “The case for background independence.” The context of the essay is the ongoing difficulty in constructing a theory of quantum gravity. Smolin thinks what has ultimately stymied the most popular quantum gravity research program--string/M theory-- is its use of a space-time backdrop as a “given”, rather than having space-time emerge from relations of variables within the theory. The essay starts with a historical sketch of the absolute and relational views of space-time, then summarizes how this is playing out in the different quantum gravity research programs (Smolin is one of the founders of the approach called loop quantum gravity, which attempts to preserve the background independent flavor of general relativity). Lots of interesting ideas in the essay. Also, for a debate on string theory see this post and comment thread on the excellent new team physics blog, Cosmic Variance.
As an aside, the debate between absolute and relational theories of space-time was most clearly personified in the personal rivalry between Newton and Leibniz. A fun coincidence is that I’m now working my way through the second installment (The Confusion) of Neal Stephenson’s Baroque Cycle, which features this rivalry as a minor subplot. The Confusion has a nice scene where Leibniz explains the relational view to a colleague.
On a related topic, the interpretation of quantum mechanics, I enjoyed this paper revisiting the philosophical conflict between Bohr and Einstein by N.P. Landsman (from PhilSci archive, via Brian Weatherson’s invaluable papers blog).
Einstein was famously frustrated by quantum mechanics, despite being one of the founders of the theory. Landsman summarizes his view as one where physical thought and physical laws require (classical) separability and locality (which QM doesn’t provide). Bohr held that while classical (objective) thinking was indeed necessary for science, the classical world in which physicists work sits incontrovertibly alongside a quantum realm whose nature will not be reduced to the classical. Landsman importantly also brings out the interesting “flip side” to Bohr’s thought, which is that it is equally true that a full quantum description of the world which leaves out the classical is impossible (many physicists and philosophers seem to miss this point). The implication is that there are necessarily two modes of reality: the quantum realm of spread-out potentials and the classical world of interactions/measurements.
Wednesday, July 27, 2005
A Look at Pantheism
Why not split the difference between the traditional forms of theism and atheism? By traditional theism I mean the worldview featuring the personal, transcendent, benevolent “omni-everything” deity. By traditional atheism I mean the materialist/physicalist worldview (essentially “billiard balls all the way down”).
I make the following two observations:
1. The most compelling arguments atheists make against traditional theism have little or no force against a deity which lacks some of the usual attributes of being, say, personal, transcendent, and omnipotent. I’m thinking of the argument from evil as well as the general argument from the absence of evidence for ad hoc supernatural interventions.
2. The most compelling arguments made against materialist atheism point to a need for “something more” but they don’t require a deity with all those classic attributes. Here I have in mind the arguments from the irreducibility of first-person experience and intentionality and a modified version of the Thomist cosmological argument (the existence of things requires a unifying ground or force).
I’d also note that, in my opinion, religious experiences provide authentic evidence which points toward theism, but they don’t provide much support for specific religions (I’m skeptical that a person who never heard of Jesus would ever have an explicitly Christian religious vision).
So, what about pantheism? Pantheism is the idea that God is identical with nature or the universe, or alternatively God is a ubiquitous force or presence uniting entities in the universe. What God is not is a person or being distinct from the world. I thought this SEP article on pantheism by Michael Levine provided a fine, sympathetic summary.
His article shows that a difficulty with pantheism is that (not surprisingly) there are many versions, and it appears that few have been fleshed out in detail. There doesn’t seem to have been a leading figure in Western thought who was a pantheist since Spinoza (Levine lists some figures who are “possible” pantheists). So there is no metaphysical program to sign on to. On the other hand it seems pantheism is an idea which is all too easy to invoke in a fuzzy new-age way.
Actually, I would highlight two issues which need to be addressed. First, and to me most important, what problems does pantheism help us solve? Specifically, what are the metaphysical features which improve on the perceived limitations of atheism without broaching the problems of classical theism? The second issue is a question about why the worldview entails invoking something divine. What makes it a religious as well as a philosophical worldview?
To bring out the issues, let’s look at a particular simplified version of pantheism which identifies God with the natural world (with no further detail). On the first issue, one would have to say this worldview adds no “metaphysical value” to a naturalist philosophy which lacks God. On the second issue, it raises the question of why it would make sense to treat the natural world as something divine. What type of religious practice (worship or prayer) makes sense if there really is nothing supernatural? Traditional theists have understandably criticized this sort of pantheism as atheism with window dressing.
It seems to me it may be possible to form a more successful version of pantheism to address these issues. In the article, Levine highlights the concept of unity as something important to pantheism. Let me sketch something building on this concept. Perhaps God/World has two modes of presentation. The entities and properties studied through third-person investigations are one mode. The second mode is a unifying force which binds and connects individuals in the world-network and also endows them with the gift of first-person experience (minimal for simple entities, robust for humans). So, we’ve given the God/World some features beyond the usual worldview of naturalism, and also have a mode of existence (a uniting “world-mind”) which may be worthy of religious feeling. Obviously this sketch leaves many unanswered questions. What accounts for the dual modes? Is it a monistic system or really a dualism? Is the ontology one of substance, like Spinoza’s, or not? What are the implications for ethics, etc.?
Given the deficiencies in both poles of the traditional theism-atheism debate, these seem to be ideas and questions worth exploring.
One last note on a variation of pantheism: that is, panentheism. This is the idea that while our world is part of God, it does not exhaust God. God extends beyond our world. Given that even physicists speak more and more about multiple universes and/or dimensions, it seems reasonable to think that God could be a repository of many real or possible worlds, of which ours is a particular manifestation.
Monday, July 25, 2005
The Realm of the Possible
I imagine many who hold to physicalism or materialism do so at least partly out of respect for Occam’s razor: a reluctance to add substantial new metaphysical entities or processes. And indeed, two of my favorite proposals for improving on physicalism posit not only new processes which enable consciousness, causation and the existence of individuals, but also an additional realm of being beyond our concrete world: a realm of possibility.
In Whitehead’s process theory, each individual (an “actual occasion”) has two poles. It has a physical pole which contains the objective input from antecedent occasions but it also has a mental pole which is able to draw upon a realm of possibility to finalize its becoming. It subsequently becomes part of the objective input for new occasions. The realm of possibility (which Whitehead considered to be God) is the source of true creativity in the system.
In Gregg Rosenberg’s theory, receptive properties unify and connect effective properties (the ones physics explicates are the effective ones) to create real individuals linked in a causal mesh. Receptive properties have an intrinsically experiential aspect which accounts for the origin of consciousness. Rosenberg also says that the fact that effective properties have many potential values implies a metaphysical realm of possibility existing along with the concrete world. The structure of receptive properties in essence chooses among the possibilities.
The existence of a realm of possibility is what makes something like free will possible in these systems, which is another benefit. But the cost is high. In addition to our concrete world, whose individuals require at a minimum a dual-aspect nature to fully explain, we also need an additional world which underlies ours: the realm of the possible.
One thing I always fall back on when I worry that this is too extravagant is the fact that physics itself appears to include a realm of possibility. Today's concrete (classical-looking) state of affairs can be seen as arising from a combination of the concrete past plus "choosing" from the realm of (quantum) possibility.
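To make that analogy concrete, here is a minimal sketch in Python (my own toy illustration; the state and the numbers are made up and stand in for nothing in Whitehead or Rosenberg) of how the Born rule lets one definite outcome be "chosen" from a space of quantum possibilities fixed by the prior state:

```python
import numpy as np

# A toy "realm of possibility": a qubit in a superposition of two basis states.
# The amplitudes encode what is possible given the (concrete) prior preparation.
state = np.array([3/5, 4j/5])           # normalized: |3/5|^2 + |4/5|^2 = 1

# Born rule: each possible outcome gets probability |amplitude|^2.
probabilities = np.abs(state) ** 2      # -> [0.36, 0.64]

# "Choosing" from the realm of possibility: a measurement realizes one concrete
# outcome; repeating the experiment reproduces the Born-rule statistics.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10_000, p=probabilities)
print(probabilities, np.bincount(outcomes) / len(outcomes))
```

Whether this amounts to a genuine metaphysical realm of possibility or merely a calculational device is, of course, exactly the interpretive question.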
Monday, July 18, 2005
Individuals and their Constituents
Here's the last part of Alan Cook's comment from the previous post's discussion about Bill Vallicella's book:
---
You quote, or paraphrase, Bill to the effect that "existence is the unity of an individual's ontological constituents."
Questions:
-- What is an "ontological constituent"?
-- What is an "individual?" (Does the term refer primarily or solely to subjects of experience, such as persons, or is the glass of beer on my desk an "individual"?)
-- What's the relationship between the answers to the above two questions; that is, why should we think that individuals have ontological constituents?
I'm sure Bill has very good answers to these questions; maybe formulating your own answers might help increase your understanding (and mine) of his viewpoint.
---
Here's my shot at a brief response:
As a starting point, I think we can take individuals and their constituents in the classic sense of talking about objects and the properties they contain or exemplify (apples which are red and round). Often Bill V. uses simple examples like this. And I take the category of individuals to be expansive and include me as well as my beer.
But despite my example using “red” and “round”, which refer to perceptual appearances, for the purposes of this exercise we are being realists. Individuated things exist out there in the world independent of us; we can philosophize about them without having to assume we are really only talking about aspects of concepts and language. Now I would also add that this doesn’t mean we are naive realists, who precisely identify our perceptions with reality. I think it just means that despite the idiosyncrasies of the causal chains linking us to other things, we can at least assume they exist and have attributes analogous to what we apprehend.
Now I must also mention that often, depending on the context, Bill V. also talks about individuals as truth-making facts and/or states of affairs rather than objects; I take it by the way he moves back and forth that he sees these modes of “individual” as consistent with each other.
Now, when you ask whether an individual is a “subject of experience”, I think you are asking a very interesting question. (BTW, this is not an angle Bill V. pursues in the book.) I’m intrigued by the idea that the very same entities or processes responsible for the existence of individuals are also what make the individual a subject of experience. Further, following through, this would be true of any individual, not just human beings: a panexperientialist system. Keep in mind this is just me talking here.
As to your last question, about why we should assume individuals have constituents: there are theories where the lowest-level constituent would have the same kind of status as bundles of these constituents – I take it that this is true of trope theory (one of the very large number of alternative theories criticized in the book). But usually I would think an individual needs to contain a constituent or exemplify some property for it to be able to exist or enter into causal relations. (In the book, Bill V. also criticizes perspectives which hold that individuals are irreducible entities.)
Thursday, July 14, 2005
Blurbs on Books by Bloggers
I bought and read books by two philosophers whose blogs I’ve enjoyed browsing: Victor Reppert’s C.S. Lewis’s Dangerous Idea: In Defense of the Argument from Reason, and William F. Vallicella’s A Paradigm Theory of Existence: Onto-Theology Vindicated. Both books contain philosophical arguments which point toward theism. Otherwise they are quite different in intent and scope: Reppert’s is a short book meant to be engaging and accessible to laypeople, while Vallicella’s is a thorough and closely argued work of metaphysics/ontology.
I enjoyed Reppert’s book. C.S. Lewis is an intriguing figure and by taking some of his writings as a launching point, Reppert adds interest to the main task of the book, which is a discussion of the Argument from Reason (AfR). A brief sketch of the AfR: We form beliefs through rational inference. If materialism is true, all beliefs have non-rational root causes. Therefore no belief could be rationally inferred, and materialism is false. There is a fair amount to unpack here, and Reppert analyzes a number of strands which underlie the argument, and responds to some objections. He concludes there is ongoing merit to considering the argument. The book is rounded out by a discussion of the larger context of the debate between theism and naturalism.
In my opinion, the one underlying strand of the AfR which has “bite” is the argument from intentionality (especially conscious intentionality). The specific focus on reason and rational inference doesn’t add much in my view. Investigations in cognitive science and neuroscience on humans and animals seem to be slowly but steadily gaining traction on the problem of how reasoning and language can be built up from more primitive intentional interaction with the environment. What is not well explained is how conscious intentionality gets bootstrapped from components which themselves lack it.
Vallicella’s book was a challenging one for me to read, but I found it to be time very well spent. I plan to re-read parts or all of it again, and will probably post more down the road.
The goal of the book is to answer these questions: From the point of view of being a realist about the existence of concrete individuals in the world (a perspective I endorse), how can we account for this existence? What is existence, anyway?
In the primarily critical part of the book (chapters 2 through 5), Vallicella tenaciously takes apart what seems to be every alternative put forth by other philosophers past and present until one choice is left standing: existence is the unity of an individual’s ontological constituents; further, this unity requires an external unifier. This theory is described and defended in chapters 6 and 7. Vallicella then offers as his proposal that the unifier is existence itself – the paradigm existent (introduced in the introductory chapter 1 and then discussed in the concluding chapter 8).
Because of the gaping holes in my background knowledge of the philosophical theories Vallicella takes on, I can’t offer any authoritative pronouncements, but I must say the clarity of the arguments and the cumulative impact of the criticism of competing ideas made for me a persuasive case that concrete individuals/facts do indeed require an external unifier of their parts.
On the other hand, I found the conclusion that it is the paradigm existent that does the job comparatively less persuasive. One problem is that for some reason I am wary of the classical distinction between contingent and necessary (as I mentioned in this recent post). And when, throughout the book, Vallicella uses the word contingent to describe both the concrete individual and its ontological constituents, it leads him directly to a unifier which is necessary (furthermore a necessary being – and who else can that be but God?). But while conceding that the solution requires going beyond a traditionally monistic version of naturalism, couldn’t there be other ways to skin this cat? Specifically, could the parts be in a relation of interdependence with a co-evolving binding force? While I may have missed something, the only competing theory Vallicella discusses which appears to be something of this sort is Hector-Neri Castaneda’s theory of ontological operators (ch. 8, section 3), which I had not heard of and will need to investigate. It seems Vallicella rejects this idea because the unifier here would also be contingent, not necessary. And I guess the thought is that you’d still have more explaining to do, because only the necessary (by definition) requires no further explanation.
My ideas on this are half-baked at this point, but I’m influenced by the fact that both Whitehead’s process philosophy and the recent proposal by Gregg Rosenberg go beyond physicalist-naturalism to solve the problem of actualizing/unifying individuals without invoking the paradigm existent. I have more thinking to do on this. But Bill Vallicella’s book has certainly offered plenty of food for that thought.
Friday, July 01, 2005
Return of the Tropes
Talk about an unlikely "blog meme"! Uriah Kriegel has a post on Desert Landscapes introducing and offering some admiration for metaphysical trope theory (my recent trope post linking to Bill Vallicella's take on this is here).
I find it useful to think through this theory, but believe it finally falls short. Individuals are something more than a bundle of tropes. There needs to be a unification of the individual's constituents. This unifier cannot itself be another trope, but needs to be something ontologically distinct. Uriah Kriegel says he is attracted to this because it is a monistic theory, and monism is more aesthetically pleasing than pluralism. I share this attraction, but I've been increasingly of the opinion that you can't easily pack enough features into monism to both make it work and still be worthy of the name.
Monday, June 27, 2005
Comparing Notes with Aquinas
Midway through this post on Dr. Victor Reppert’s blog is the outline of a version of the Thomist cosmological argument with which I’m sympathetic, up to a point.
I have argued that the existence of real concrete individuals in the world is under-explained by the kind of fundamental entities and causal laws traditionally associated with physics. The Thomist argument is similar in that it asserts that the existence of contingent beings (which might not have existed) cannot be explained solely by reference to other contingent things. It concludes there must therefore be a necessary being which supports the existence of the contingent.
The problem here is that the separation of individuals into “contingent” and “necessary” is a step which ends up adding more than we need. It is, of course, a move motivated by the presupposition that classical theism is true. While we need “something more” which binds concrete individuals, I think we can incorporate this in a more “democratic” way. The constituents of concrete individuals may exist interdependently with something that unifies them and binds them into a causal network. This unifying element has a different mode of existence than the constituents of the individual, but I’m not sure there is any reason to call one side dependent and the other independent.
Tuesday, June 21, 2005
Against "Intelligent Design"
Here’s the place where my advocacy of a distinction between physicalism and naturalism finds a real-life application.
Like many others, I criticize physicalism by pointing to what I perceive as its failure to include an explanation for first-person purposeful experience, and more tentatively for a more general failure to account for the nature of emergent higher level systems in the world.
I think a more successful metaphysical theory may include properties or causal structures not historically contemplated by physics. I fully expect that some of the ideas required for a successful theory might lend themselves to theistic interpretations.
But nothing in this would lead one to postulate intermittent or ad hoc supernatural interventions in our world, for which there is absolutely no evidence. This is the sense of philosophical naturalism worth preserving. And for Pete’s sake, no philosophical debate among us adults offers any basis for politico-religious efforts to inject unfounded ideas about “Intelligent Design” into our high school science classes. I find this effort deeply offensive.
I’m gratified that many resources are available for defending our public schools against "ID". For instance see the links (on the lower right-hand side) on the Panda’s Thumb group blog.
Thursday, June 02, 2005
Emergence Revisited
I want to update an opinion I’ve had about whether or not first-person subjective experience (FPE) -- the “hard” to explain aspect of consciousness -- is essentially an emergent phenomenon. In the past I believed that FPE is too radically unique a phenomenon to have just suddenly popped into the natural world because, say, the human nervous system reached some threshold level of complexity or functional organization. Even if human-style consciousness is uniquely robust in the natural world, I didn’t think FPE itself could arise in a world lacking some fundamental feature which was a precursor or building block.
It is this belief that helped set me off on the “road” to panexperientialism: the theory that an experiential aspect must be present in all of nature (the term is due to process philosopher David Ray Griffin).
Here is a counter-argument: emergent phenomena of all sorts are ubiquitous in nature, and this should make an account of FPE as emergent more plausible.
On this topic, I recently read A Different Universe, by physicist Robert B. Laughlin. The book is an extended essay stressing the role of emergent organizing laws in nature generally and in physics in particular (where we assume reductionist approaches have had their greatest success). Laughlin argues (compellingly I thought) that the world is constituted by a hierarchy of physical principles of organization, which often cannot be derived from principles governing the parts, and which furthermore are sometimes insensitive to compositional detail. Laughlin presents examples starting with phase organization of materials, from the more mundane (water) to the exotic (involving superfluidity and superconductivity). He then goes on to argue, more controversially, that systems of physical description up to and including relativity are descriptions of collective or emergent organizational principles. Emergence is not just about biology and social sciences, it’s about physics.
So, if “collective principles of organization are not just a quaint side show but everything – the true source of physical law”, what should we take from that? Well, I think it means we lack a well developed modern philosophical system (metaphysics) which is consistent with what we’ve learned about the natural world. The dominant metaphysical interpretation of naturalism is physicalism (the older version was materialism). While there is a vast literature on physicalism which makes generalizing dangerous, I find physicalist accounts to be typically inspired by reductionism and the analogy of mechanism. Emergent phenomena are not to be separately explained, but rather are asserted to supervene on lower level phenomena, and it is further assumed that the lowest level of nature is fully deterministic and causally closed. The fact that quantum physics is not conventionally deterministic doesn’t change this for most physicalists (although it should); they assume it adds a bit of randomizing, but doesn’t otherwise impact the story. I conclude that physicalism doesn’t explain well a world in which emergent phenomena exist at multiple levels of nature.
So, looking back, when I rejected the idea of first-person experience as a true emergent phenomenon, I was assuming that if it were (ontologically) emergent, it would be uniquely so in a world otherwise well explained by reductionist accounts. Therefore I sought an explanation of FPE in terms of simpler experiential ingredients – basically a reductionist account of experience to be (somehow) integrated with the paradigm account of the physical world. But if reductionist accounts are actually inadequate for physical phenomena, then the project is a different one. The goal is to develop a new metaphysics which is consistent with ubiquitous emergence, and then explain FPE in this new context.
In fact, I think it would be most productive to assume all natural systems above the quantum level are emergent – to be a natural system is to have been actualized from an underlying potential. As I recently discussed on this blog (here), the hierarchy of natural systems in the world may be made more explicable by utilizing an event ontology which has two poles: the set of potential realizable properties, and also an actualizing property which binds these into systems. And again I must plug Gregg Rosenberg’s excellent A Place for Consciousness, which presents a detailed proposal in this spirit (my original posts on the book are here and here). By the way, it turns out that his approach, while taking a different road than my old one, still entails a form of panexperientialism!
[As an aside, I recommend the Laughlin book for those interested in this subject matter, but I'll warn that I found it a bit annoying to read. I was put off by the many digressions and modestly amusing anecdotes and also some poor analogies (all meant to increase accessibility to the lay audience, I guess). Part of the problem is that I simultaneously started the latest Richard Dawkins book, which was spoiling me in terms of truly great science writing.]
Monday, May 23, 2005
The Trouble with Tropes
In this interesting post at Maverick Philosopher, Bill Vallicella explores critically the metaphysical theory that all properties in the world are tropes, which are ontologically simple building blocks (technically speaking, abstract particulars). In other words, trope theory is a one-category ontology. The key question is, then, how do these tropes come together to make things? The advocate of the view would need to propose a compresence relationship which binds tropes, but still is itself a trope of the same ontological variety. Vallicella then argues that this leads to problems, particularly a vicious regress. If the compresence trope ‘C’ links tropes ‘R’ and ‘S’ together, then what links C to both R and S? If it is another trope C*, you can see the regress. It made sense to me, although I imagine philosophers who have developed different varieties of trope theories would have a response. In any case, the arguments are carefully constructed, and I refer you to the post itself and the SEP entry on tropes for further reading.
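Since the regress is purely structural, it can be mimicked with a toy data structure. Here is a small, purely illustrative Python sketch (the Trope class and compresence function are my own stand-ins, not terms from Vallicella's post): if the only resource for binding tropes is another trope of the same kind, every act of binding produces a new item that itself needs binding.

```python
from dataclasses import dataclass

@dataclass
class Trope:
    """A toy 'abstract particular': the one and only ontological category."""
    name: str
    relates: tuple = ()   # a compresence trope records which tropes it binds

def compresence(*tropes):
    """Bind some tropes with a further trope C, the only move a one-category
    ontology allows. But C itself now needs to be linked to the very tropes
    it binds, which (by the same rule) takes yet another trope, and so on."""
    return Trope("C", relates=tropes)

R, S = Trope("redness"), Trope("roundness")
bundle = [R, S]
for level in range(5):                  # the regress, cut off arbitrarily at 5
    c = compresence(*bundle)
    print(f"level {level}: added {c.name} to bind {len(bundle)} constituents")
    bundle.append(c)                    # ...and the new binder needs binding too
```

Nothing inside the one-category rules ever stops the loop; that is the viciousness of the regress.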
For me, this discussion lends support to my previously held belief that a single simple ontological building block cannot account for the diversity of natural entities. A second relation is needed – a property which binds and actualizes, but is itself not in need of binding in the same way. At first, this may seem to make for a less appealingly simple or monistic ontology; however, when the property complexes that result are cast as events rather than things, the dual aspect seems to me elegant and not ad hoc, as I argued in this recent post.
Friday, May 20, 2005
More Quantum Causality
I have suspected that modern quantum theory contains the seeds of a new theory of causality. I did a search on Google scholar looking for more papers. I thought this one, by mathematical physicist V.P. Belavkin, was interesting, and I offer the briefest of summaries below. As usual, I was limited in my ability to follow the formalism, and therefore will “bleg” anyone with expertise in this area to comment or offer references which help explain this in layman’s terms.
Belavkin says that “the latest developments in quantum probability, stochastics, and in quantum information theory” make it possible to bypass the paradoxes of the measurement problem in the traditional quantum theory. The original theory divides the world into an external observer and a closed quantum system to be observed, which results in the problem.
It goes something like this: Belavkin analyzes open quantum systems using a dynamical approach in which the output statistics of continuous quantum measurements result from the solution of a stochastic differential equation. He then applies a special filtering method or superselection rule – which he calls a causality principle – which imposes a past-future boundary. The past consists of classical particle trajectories; the future consists of the quantum probabilities compatible with these trajectories. The statistical results obtained are consistent with experiment, just as in the traditional formulation.
Now it seems we haven’t gotten “something for nothing” here. In exchange for getting rid of the seemingly subjective observer problem in the original theory and making things more objective, he had to insert “by hand” a boundary defining the arrow of time. Still, it is appealing to think that time and causality are in an objective way intimately bound up with the transformation of quantum potentials into classical realities, as is the case with this proposal.
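I can't reproduce Belavkin's filtering formalism here, but a textbook-style "quantum trajectory" toy model gives some of the flavor of getting measurement statistics out of a stochastic evolution. The Python sketch below is my own illustration (not from the paper): an excited two-level atom is watched by a photodetector; between clicks the unnormalized state follows a deterministic non-Hermitian drift, a click adds an irreversible event to the classical record, and the ensemble of click times reproduces the expected exponential-decay statistics.

```python
import numpy as np

# Toy quantum-trajectory model (an illustration in the spirit of, not a copy
# of, Belavkin's approach): an excited two-level atom decays at rate gamma
# while a photodetector watches continuously.
gamma, dt, t_max = 1.0, 1e-3, 20.0
rng = np.random.default_rng(1)

def one_trajectory():
    c_e = 1.0               # amplitude on the excited state |e>
    r = rng.uniform()       # random threshold deciding when the jump occurs
    t = 0.0
    while t < t_max:
        c_e *= 1.0 - 0.5 * gamma * dt   # drift: d(c_e)/dt = -(gamma/2) c_e
        t += dt
        if c_e ** 2 < r:                # squared norm fell below threshold:
            return t                    # the detector clicks at time t
    return None                         # no click within this run

clicks = [t for t in (one_trajectory() for _ in range(2000)) if t is not None]
# The classical record (click times) should follow the familiar exponential
# decay law with mean lifetime 1/gamma.
print(np.mean(clicks))   # close to 1.0 for gamma = 1
```

The "by hand" element mentioned above shows up here as well: the split between the recorded clicks (the fixed past) and the still-unresolved amplitudes (the open future) is built into the simulation rather than derived from it.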