Tuesday, March 24, 2009

Points of View are Irreducible

I’m an admirer of the relational interpretation of quantum mechanics (RQM), due originally to Carlo Rovelli. The revised SEP entry on RQM by Federico Laudisa and Rovelli made reference to two new papers about RQM by philosophers of science Bas C. van Fraassen and Michel Bitbol. Unfortunately, van Fraassen’s paper is not yet published, and Bitbol’s paper (entitled “Physical Relations or Functional Relations?”) is in French. However, in the course of looking for this, I found another interesting paper by Bitbol in English, which was published in a journal called NeuroQuantology: “Consciousness, Situations, and the Measurement Problem of Quantum Mechanics.”

In this paper, Bitbol looks at the history of discussions about the putative role of consciousness in the measurement process. He concludes that it is a mistake to think that human minds need have anything special to do with the measurement process; but a careful analysis reveals that QM does, at a minimum, necessitate taking particular viewpoints into account. His analysis places Bitbol in the same general camp as RQM and also the “Perspectivist” interpretation I described in these posts, which referenced Paul Merriam’s papers.

The bulk of Bitbol’s paper is his careful presentation of a thought experiment which shows the difference between classical and quantum physics as it relates to the role of the observer in a measurement. (This exposition is similar to the thought experiment known as “Wigner’s friend” -- see Henry Stapp's discussion in this doc file). Wave functions characterize the observables of a system relative to its interaction with another particular system -- in fact we can characterize a whole chain of interactions via a wave function -- until the chain comes to an end with our observation. But, Bitbol says the following:

"I am not saying that WE are unique or privileged beings in nature (this would be collective solipsism of an absurd sort), but only that we are privileged beings for US! As soon as we establish a relation with an element of the measurement chain, this element acquires a determination relative to US. Nothing has thus to be changed in the physical description, since determinations of the measurement chain are still relative to something. But everything is different for US, since the determinations of the measurement chain are now relative to US. And a relation of which WE are one term is something quite peculiar, even if it is only peculiar…from OUR point of view."

As he says, in classical physics, the fact that we are situated subjects with a point of view can be “bracketed”, and we can view ourselves as just another object. In quantum mechanics, this “situatedness” is irreducible. This doesn’t mean we can’t naturalize ourselves in picturing the world -- we don’t need to think human consciousness is something distinct from the rest of nature. But QM teaches us that this process of naturalization has a boundary -- points of view cannot be reduced or eliminated from the picture.
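To make the “chain of interactions” a bit more concrete, here is a standard von Neumann-style schematic in Dirac notation (the labels S, A and F are just mine for illustration, not Bitbol’s or Stapp’s): a two-state system S, measured by an apparatus A, which is then read by an observer F, evolves into an entangled state of the whole chain:

\[
\bigl(\alpha\,|{\uparrow}\rangle_S + \beta\,|{\downarrow}\rangle_S\bigr)\,|\text{ready}\rangle_A\,|\text{ready}\rangle_F
\;\longrightarrow\;
\alpha\,|{\uparrow}\rangle_S\,|\text{reads up}\rangle_A\,|\text{sees up}\rangle_F
\;+\;
\beta\,|{\downarrow}\rangle_S\,|\text{reads down}\rangle_A\,|\text{sees down}\rangle_F
\]

Relative to a further observer who has not yet interacted with the chain, the whole chain stays in this superposition; relative to F, the system already has a determinate value. The formalism itself never says where the chain “really” terminates -- that only gets settled once we pick whose point of view we are describing, which is exactly Bitbol’s point above.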

15 comments:

Anonymous said...

My understanding of the SEP article and the Bitbol paper is that conscious observation is not a special case in quantum mechanics. There's no difference between you interacting with a quantum system or my chair interacting with that same quantum system.

A quantum "measurement" event is sort of a misleading term. At the particle level, it's just two systems interacting. You may interpret the results of that interaction as a measurement, but the word "measurement" has no physical meaning with regard to the systems involved.

It seems to me that quantum mechanics is neutral on the subject of consciousness in exactly the same way that it is neutral on the subject of chairs. QM only concerns itself with the microscopic details of the universe. The fact that events happening at the microscopic level ultimately give rise to chairs and consciousness does not factor into the equations of QM.


>> points of view cannot be reduced or eliminated from the picture

This is not how I would characterize the SEP article or the Bitbol paper. Unless you mean that my chair has a "point of view". What I got out of both of those was that consciousness doesn't have any special place in quantum mechanics. A system of particles is a system of particles, regardless of whether the system is conscious or not, and regardless of whether the system has a point of view or not.

So it still seems to me that your theory of consciousness boils down to just "intrinsic properties". Quantum mechanics doesn't contradict your theory, but neither does it seem to lend any special support to it, that I can see.

It seems to me that the universe is indifferent to "points of view", in the human sense of the phrase anyway.

Anonymous said...

By the way, I've learned a significant amount from your posts and from our conversations on these topics. I appreciate the time you put into both!

Steve said...

>>A quantum "measurement" event is sort of a misleading term. At the particle level, it's just two systems interacting. You may interpret the results of that interaction as a measurement, but the word "measurement" has no physical meaning with regard to the systems involved.

Measurements are indeed just interactions (when interactions are considered from the POV of one of the interacting systems). But the word does have special physical meaning (i.e. distinct from the classical meaning of “interaction”) because a system’s properties are only actualized into determinate values in a measurement.

>>It seems to me that quantum mechanics is neutral on the subject of consciousness in exactly the same way that it is neutral on the subject of chairs. QM only concerns itself with the microscopic details of the universe.

QM events are the constituents of large composite systems like chairs and conscious brains. I take QM as relevant to all of nature, since it is the fundamental theory of physics.

>>Unless you mean that my chair has a "point of view".

Every system does have a point of view. Relational QM is democratic.

>>What I got out of both of those was that consciousness doesn't have any special place in quantum mechanics.

Human consciousness has no special place. I think (now going beyond the articles) that each measurement event has a grain of experience for the participating system.

>>A system of particles is a system of particles, regardless of whether the system is conscious or not, and regardless of whether the system has a point of view or not.
>>So it still seems to me that your theory of consciousness boils down to just "intrinsic properties". Quantum mechanics doesn't contradict your theory, but neither does it seem to lend any special support to it, that I can see.

I think the process of actualization in the measurement event is the natural home for the intrinsic properties. With regard to these articles, the fact that measurements only occur relative to a first-person interaction is a telling clue to their link to consciousness (which has intrinsic first person character).

P.S. Thanks to you for the continued dialogue.

Anonymous said...

>> But the word does have special physical meaning (i.e. distinct from the classical meaning of “interaction”)

So "measurement" has a meaning that is distinct from the classical meaning of "interaction", BUT, it is not distinct from the quantum mechanical meaning of the term "interaction", correct?


>> because a system’s properties are only actualized into determinate values in a measurement.

I think you should say: "a system’s properties are only actualized into determinate values with respect to the interacting systems at the time that the interaction between the systems occurs".

Because the values are only determinate with respect to you and the other system, correct? A THIRD system/observer will see both you AND the first system you interacted with as still being in indeterminate states until he interacts with it and you. Though, when he does observe you and the other system, he will then see a correlation between your state and the first system's state. But that state isn't "fixed" with respect to him until the interaction he participates in occurs.


>> I think (now going beyond the articles) that each measurement event has a grain of experience for the participating system

So again, here "measurement event" is a loaded term that implies more than QM delivers. It seems to me that to state this in more neutral, but equally accurate, terms would be to say that "every interaction involving physical systems has an experiential aspect".


Stepping back here:

So every "third person" aspect of human behavior seems explainable within the framework of regular physics with no intrinsic properties or anything like that, right? By examining a human's physical state, we should theoretically be able to determine whether they are in pain, for instance. Or angry. If a human exhibits some behavior, we should (again, theoretically) be able to explain that behavior by tracing back along the chain of cause and effect, using physics as our guide to what cause preceded each effect.

However, examining all of this "third person" information won't tell us anything about the first person experience of that human. A color-blind scientist COULD look at all of this information and determine that the subject was seeing "red", but would not be able to derive from that information what it is like to see red.

So, you then conclude that this extra "first person" information, experience, must be something that isn't contained in the "third person" data...it must be something intrinsic to the material that constitutes the person.

Which may be. However, this runs into problems if you start thinking about simulations, which duplicate the functional/computation structure of the brain, and would presumably therefore produce the same outputs for a given set of inputs.

So according to your theory, you could have a simulation that responded to all inputs in the same manner as you, but would have a different conscious experience, or maybe no conscious experience at all.

And, if you could run a simulation on a digital computer, then you should be able to run the exact same simulation on a mechanical computer (e.g., Babbage's Analytical Engine). Here we have yet another set of materials, with presumably different "intrinsic" properties. But, if the mechanical simulation preserved the functional/computational structure of the digital simulation, which in turn preserved those properties from the original human brain being simulated, THEN the mechanical simulation would also answer all questions the same. Even questions about its first person experience.

So, you can then say that computer simulations are impossible, BUT...why would that be? We know that simulations of physical systems are generally possible, and produce useful information. And there is no obvious theoretical reason that scientists have discovered that would make them impossible. So declaring them impossible doesn't actually seem to be a legitimate move for you.

So rather than attribute first person consciousness to the intrinsic properties of matter, I think it makes more sense to attribute it to the intrinsic properties of specific functional/computational ARRANGEMENTS of matter.

This doesn't run afoul of the simulation scenario. And I think is generally just a more robust theory. It doesn't depend on contingent facts about the nature of matter, or on contingent facts about our universe's particular laws of physics. And it avoids Zombie problems in this world, and also in other "conceivable" or "possible" worlds, which your theory doesn't do.

And it really is just a small shift in emphasis. From intrinsic properties of matter, to intrinsic properties of specific functional/computational ARRANGEMENTS of matter.

Right?

Steve said...

>>Right?

I’m actually OK with the arrangement idea. But I’m not sure it makes the simulation easier if I’m right that getting the base-level “micro-arrangement” correct is crucial to the result!

First, let me say that I’m trying to picture the fundamental constituents as events, not particles of matter. So instead of “arrangements of matter”, I might say an “event configuration” or event complex. The intrinsic properties are properties of the events or event-complexes.

But, anyway, the simulation question is tricky. If I try to recreate the arrangement/configuration of the brain/body system in a way which is nonetheless distinct from a perfect micro-level clone, will it work?

I think that if we pursue a coarse-grained approximated simulation, it will fail. This is because in my opinion the quantum level matters for function as well as experience.

But what if we laboriously model each quantum detail on a classical computer? Well, we saw that classical replication of quantum detail (Hilbert spaces, etc.) takes exponentially increasing resources, so this may be a purely hypothetical question.
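As a rough back-of-the-envelope illustration of that blow-up (hypothetical numbers of my own; 16 bytes per amplitude is just a double-precision complex number): a system of n two-level components needs 2^n complex amplitudes just to write its state down classically.

    # Memory needed merely to STORE a full n-component quantum state vector
    # on a classical machine, at 16 bytes per complex amplitude.
    def state_vector_bytes(n_components: int) -> int:
        return (2 ** n_components) * 16

    for n in (10, 30, 50, 80):
        print(f"{n:>2} two-level components -> {state_vector_bytes(n):.2e} bytes")

Fifty components already comes to roughly 10^16 bytes before a single time step is computed, which is why the fully detailed classical simulation looks purely hypothetical.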

At first, I’m tempted to go ahead and concede this hypothetical possibility of a successful digital simulation, although still troubled by the idea that it would supposedly be functionally accurate despite a different micro-structure.

But, then, the computational resource issue might be a deal breaker in another way. It would be impossible for the digital computer to reside in a body that looked like mine – it would be larger and look different than me. But, then, would the functional responses fail because it wouldn’t interact with the environment in the same way? So, we’d have to model the world-environment correctly as well. Pretty soon – we’ve got the whole matrix! {throws up hands in surrender}.

(P.S. With regard to your reference to the zombie problem: Chalmers acknowledges that the Russellian “intrinsic property” idea does dissolve the zombie problem; when we think we’re conceiving of physical/material zombies we fail because we’re leaving out the intrinsic properties. I think this would be true whether the intrinsic properties were of “matter” or “arrangements of matter”)

Allen said...

So, structure is obviously very important for the brain to function correctly. If the structure is disrupted, the function of the brain is disrupted, sometimes in very strange (or fatal) ways. So we can't get rid of structure, and structure depends on the extrinsic properties of matter.

If some sort of extra property is required to explain consciousness, it seems to me that we should try to attach the property to the structure of the system, not to the individual pieces that make up the structure. And I think that there's a good reason for this...consciousness seems to be a somewhat emergent property of the whole system, not obviously attributable to any of its constituent parts.


>> I’m actually OK with the arrangement idea. But I’m not sure it makes the simulation easier if I’m right that getting the base-level “micro-arrangement” correct is crucial to the result!

I think it's helpful to make a comparison to digital computers. Quantum effects play a big role in how modern computers work at the physical level. The design and manufacture of integrated circuits is a pretty precise business, and things have to be "just so" for the resulting chip to function correctly. The micro-arrangement is crucial.

And yet, most software is written to be platform neutral. Software doesn't care about the details of the hardware that it runs on, doesn't care about quantum mechanical properties, doesn't care about circuit design and layout, or any other specific micro-physical implementation details.

Software is an abstract functional description that can be successfully mapped onto many, many different physical systems, some of which are vastly different from each other (e.g. mechanical computers vs. digital computers).

If we have the same software running on a digital computer and a mechanical computer, the physical states of the two systems are very very different. And yet, there exists a mapping from one physical system to the other, because both can be interpreted as implementing the same functional structure, as defined by the software. And both systems will yield the same output values for a given set of inputs.

I like to look at examples of mechanical computation, because it shows clearly that there's nothing "magical" going on with computation. For example, Lego logic gates: http://goldfish.ikaruga.co.uk/logic.html and also a K'nex adding machine: http://www.youtube.com/watch?v=3vXlQZvS-nM
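To put the "nothing magical" point in code -- a toy sketch of my own, not anything from those pages -- here is a half-adder defined purely out of NAND gates. The functional description is the whole story; whether each nand is realized in silicon, Lego, or K'nex changes nothing above this level.

    # A half-adder built only from NAND gates. Nothing about the physical
    # substrate -- transistors, Lego bricks, K'nex rods -- appears in it.
    def nand(a: bool, b: bool) -> bool:
        return not (a and b)

    def half_adder(a: bool, b: bool):
        n1 = nand(a, b)
        total = nand(nand(a, n1), nand(b, n1))  # XOR of a and b
        carry = nand(n1, n1)                    # AND of a and b
        return total, carry

    for a in (False, True):
        for b in (False, True):
            print(int(a), int(b), half_adder(a, b))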

So, it seems to me that we can think about the mind's relationship with the brain in a similar way to thinking about software vs. hardware. It's not exactly the same, but analogous.

I disagree with Dennett on a lot of things, but I think he puts this pretty well:

"Computers are mindlike in ways that no earlier artifacts were: they can control processes that perform tasks that call for discrimination, inference, memory, judgment, anticipation; they are generators of new knowledge, finders of patterns–in poetry, astronomy, and mathematics, for instance–that heretofore only human beings could even hope to find. We now have real world artifacts that dwarf Leibniz’s giant mill both in speed and intricacy. And we have come to appreciate that what is well nigh invisible at the level of the meshing of billions of gears may nevertheless be readily comprehensible at higher levels of analysis–at any of many nested “software” levels, where the patterns of patterns of patterns of organization (of organization of organization) can render salient and explain the marvelous competences of the mill. The sheer existence of computers has provided an existence proof of undeniable influence: there are mechanisms–brute, unmysterious mechanisms operating according to routinely well-understood physical principles–that have many of the competences heretofore assigned only to minds.

[...]

In addition to the computers themselves, wonderful exemplars and research tools that they are, we have the wealth of new concepts computer science has defined and made familiar. We have learned how to think fluently and reliably about the cumulative effects of intricate cascades of micro-mechanisms, trillions upon trillions of events of billions of types, interacting on dozens of levels."


>> First, let me say that I’m trying to picture the fundamental constituents as events, not particles of matter.

So, I say particles, but my understanding of quantum mechanics is that really everything ultimately boils down to waves (as in quantum field theory). But I don't think that is an important detail here.

>>So instead of “arrangements of matter”, I might say an “event configuration” or event complex.

If you think that an event based architecture is necessary to explain the observed features of the physical universe, then I'm okay with that. I've heard of such things, but at this point I wouldn't bet the farm one way or the other.

Ultimately I don't think that this matters. Particles, waves, strings, events, something even stranger -- I think I'm neutral on choosing between these. In my view, consciousness should be possible in any universe that has any of these as a basis. Because they ultimately just serve as building blocks for the functional/computational arrangements/configurations.

>> If I try to recreate the arrangement/configuration of the brain/body system in a way which is nonetheless distinct from a perfect micro-level clone, will it work?

I think there are good reasons to think that it will work (or is at least theoretically possible), and no obvious reasons to think that it won't (or that it is in principle impossible). For example: http://news.bbc.co.uk/2/hi/technology/6600965.stm or http://machineslikeus.com/news/ibms-global-brain

Also, there's a reason why the government spends billions of dollars to simulate nuclear warheads. Because the simulations tell us something useful about the real world. There is a meaningful relationship between the simulation and what is being simulated.


>> This is because in my opinion the quantum level matters for function as well as experience.

What is your reasoning here? You have that opinion, but what facts lead you to it?

It seems to me that "function" is, basically by definition, independent of implementation. Anything can serve the function of being a chair if it meets the functional requirements. Anything can be a computer if it meets the functional requirements. So, what are the functional requirements for a brain (aside from consciousness) and which of these requirements do you believe can only be met by quantum level processes?


>> we saw that classical replication of quantum detail (Hilbert spaces, etc.) takes exponentially increasing resources, so this may be a purely hypothetical question.

First, the simulation doesn't have to run in real time. So even if it takes 10,000 years of work on a classical supercomputer in real time to produce 1 second of "simulated time", that's fine. If it's theoretically possible for a computer simulation to duplicate the abilities and consciousness of a human brain, then that tells us something important about the nature of consciousness, right? And that's enough, I think.


>> troubled by the idea that it would supposedly be functionally accurate despite a different micro-structure.

Micro-structure isn't relevant to functional accuracy, right? It's functionally accurate if it correctly duplicates the required functionality (ha!).

Let's look at it this way. If we can look at a person's neurons and figure out from the states of these neurons what a person is thinking (as in http://www.sciencedaily.com/releases/2009/03/090312114754.htm and http://www.bloomberg.com/apps/news?pid=20601124&sid=a.TfJzNTqD_I&refer=home), then I think we're pretty safe in saying that somewhere around the level of neurons is an acceptable "substitution level", and we can draw the line there.

Though, really I think you can pick any arbitrary volume and say, "if we produce the correct outputs for EVERY input that can come across the boundary of this volume, then we have captured everything important about this volume's functionality". The specific implementation details of your simulation should not be relevant. By the fact that the "functional units" (e.g., simulated neurons) produce the correct outputs when provided with any possible input, we know that they have at least as much internal complexity as the system being simulated, and we also know that their internal causal structure must bear some relation to that of the system being simulated. Otherwise, how do we explain the fact that they produce correct outputs?
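As a minimal sketch of what a "functional unit" at roughly the neuron substitution level might look like (a deliberately crude leaky integrate-and-fire toy of my own, not something taken from the linked articles):

    # A crude leaky integrate-and-fire "functional unit". The claim under
    # discussion: if a replacement unit gives the same output spikes for
    # every possible input stream, the substitution preserves function.
    class LIFNeuron:
        def __init__(self, threshold: float = 1.0, leak: float = 0.9):
            self.v = 0.0              # membrane potential, arbitrary units
            self.threshold = threshold
            self.leak = leak          # fraction of potential kept each step

        def step(self, input_current: float) -> bool:
            self.v = self.v * self.leak + input_current
            if self.v >= self.threshold:
                self.v = 0.0          # reset after firing
                return True           # spike
            return False

    unit = LIFNeuron()
    print([unit.step(i) for i in (0.3, 0.3, 0.3, 0.0, 0.6, 0.6)])

Whether anything this coarse-grained could ever reproduce a real neuron's full input-output behavior is, of course, exactly what you question.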

>> It would be impossible for the digital computer to reside in a body that looked like mine

I don't think this is a problem, at least not for the purpose of establishing whether the idea of a simulated mind is feasible and whether it would be conscious. Have you read Daniel Dennett's "Where Am I?" (http://www.newbanner.com/SecHumSCM/WhereAmI.html)? It's somewhat entertaining, and touches on this. Though, again, let me stress that I'm not a huge Dennett fan.

>> P.S. With regard to your reference to the zombie problem: Chalmers acknowledges that the Russellian

It explains why you couldn't have a zombie world that was physically identical to ours in every way, except that there is no consciousness. Because if it's physically identical to ours, then its matter has intrinsic properties, and thus any human made from that matter will be conscious.

BUT, it doesn't rule out the possibility of universes where matter has extrinsic but no intrinsic properties, and so people there exhibit conscious behavior but are not in fact conscious, right? AND it doesn't rule out the possibility in our own universe of a "robot" version of me, that looks and acts like me, but is in fact secretly made of metal and microchips. This robot would appear to be conscious in the same way as I am, but in fact might have no internal consciousness, OR might have a very different internal consciousness than mine.

>> I think this would be true whether the intrinsic properties were of “matter” or “arrangements of matter”

Basically what I'm getting at (obliquely) is that "information" is this intrinsic property of "arrangements of matter".

Is it possible to imagine ANY possible universe where there are flat white sheets on which there are black marks arranged in such a way as to form the words from "War and Peace", but which do not contain the information found in "War and Peace"? I don't think so. If you find some physical representation that can be interpreted as a faithful copy of War and Peace...then it is "War and Peace", regardless of the details of physics or any intrinsic/extrinsic properties of matter. So information holds across ALL possible universes. And if you attach consciousness to information, then consciousness will also hold across all possible universes. Anywhere you SEEM to find consciousness, there will actually be consciousness, because it is attached to information, not to materials.

Steve said...

Thanks for this, including the interesting links. (Coincidentally, I had just seen this video for a Lego Turing machine!)

>>then I think we're pretty safe in saying that somewhere around the level of neurons is an acceptable "substitution level", and we can draw the line there.

>>So, what are the functional requirements for a brain (aside from consciousness) and which of these requirements do you believe can only be met by quantum level processes?

The crux of this part of our debate:

Functionalism assumes the micro-physical details don’t matter (hence “multiple realizability”). I think they do. Why do I think this?

I’ve watched AI efforts (casually) for a long time, and each decade has seen a lot of optimism (Dennett’s heyday of the 80’s may have been a peak – although skeptics like Dreyfus and Searle were always around as well). I’m confident interesting simulations of cognitive processes will be forthcoming from the efforts underway. The brain has a number of modules which offer potential for modeling on a computer. I agree we'll learn a lot from this work.

But I think neuron-level modelling will remain far away from the goal of an accurate simulation of a conscious human. The key to why I think this lies in thinking about the neuron itself: this is a remarkable entity in its own right – a living cell.

1. It strikes me that single cells do “proto-conscious” things that are well beyond computers: they exhibit and sustain creativity and intentionality. (Here’s a recent paper I liked on this topic). They do this completely independently of any outside “programming” support, and have been doing so through a lineage of billions of years.

2. I think generally there is a lot of evidence, mostly circumstantial, but some direct, that biological systems up and down the evolutionary spectrum exploit the quantum level of nature to achieve their remarkable functionality. And if you need quantum physics to properly explain photosynthesis, I think you’ll need it for consciousness.

3. And, to repeat what you’ve heard me say before, another (philosophy-derived) reason I think the micro-physics matters for consciousness is that I see the quantum level as a natural home for the intrinsic property “path” for solving the hard problem of first person experience.

Allen said...

>>1. It strikes me that single cells do “proto-conscious” things that are well beyond computers

Here's one thing that occurs to me: any process that we understand, we can simulate via computers. "Understanding" something consists of breaking it down (reducing it) to simpler concepts which we fit together in a step-by-step "procedural" fashion. We have to understand the pieces, and how they fit together. And if we understand that, we can program a computer to simulate the process.

If you say we can't simulate the brain, then you are saying we can't understand the brain, and thereby taking the "Mysterian" position I think.

I think this is an important point: Anything we truly understand, we can simulate.

On the paper you reference, I think a key phrase comes on page 3:

>>"There is no non-physical élan vital that sets off living things from other complex physical objects, and all of a cell's functions can, in principle, be derived from the chemistry and physics of its components, and thus "reduced" to physics in the same way as the functions of carburetors and laptops. Although our understanding of this reduction remains incomplete, the immense and rapid progress of cellular and molecular biology in the last half-century leave little grounds for doubt of this principle."

Then the author makes this point:

>>"Eukaryotic cells respond adaptively and independently to their environment,"

A simulated eukaryotic cell would ALSO respond adaptively and independently to its (simulated) environment. If the behavior of a real eukaryotic cell is reducible to the behavior of its constituent particles (with our previous caveats about the nature of particles in effect here), which act according to the laws of physics, AND if we actually understand these laws of physics, then we can simulate the eukaryotic cell accurately. I come back to this point below.

Continuing the same sentence:

>>"rearranging their molecules to suit their local conditions, based on past (individual and species) history. A transistor or a thermostat does not - nor do the most complex machines currently available. This is a practical difference between cells and machines"

This is wrong. Computers DO rearrange themselves to suit their local conditions based on past history. The atoms (made up of protons, neutrons, and electrons) that make up the transistor change state as the inputs to the transistor change. The electrons at the very least rearrange. And the overall physical state of the computer changes in a way that reflects its past history. How else could a computer work? Computers are made up of atoms and molecules, and these atoms and molecules change in reaction to changes in inputs from the rest of the world. How is this different in kind from the eukaryotic cells he describes? It isn't different at all.


>>"A crucial difference between a cell (including but not limited to a neuron) and a transistor on a silicon chip is that the former arrangement of matter can autonomously and adaptively modify itself in response to its circumstances, whereas the latter cannot."

Wrong wrong wrong wrong. A computer DOES "modify itself" in response to its circumstances, otherwise how could it respond to inputs? It must change state, therefore it must be physically modified. If it changes state, it is "responding". And what does he mean by "autonomously"??? I think he's being misleading here. Does he mean "uncaused"? If so, that contradicts what he said in the first quote I gave from his paper.


>>"In contrast to a silicon chip, your brain is constantly changing at a microscopic, molecular level,"

AND

>> "It would be necessary to go back to square one, technologically speaking, and create computing components and machines that can autonomously and adaptively reconfigure themselves physically, at a distributed molecular level, and then figure out a way to put a whole lot of them together in just the right way."

AND

>> "Cells (and thus brains) do more than "process information" and "represent" - they actually do work, and in particular they actively reconfigure their material molecular structure as part of their representing."

ARRRRGGGGHHHHH!!! A silicon chip DOES CONSTANTLY CHANGE at a microscopic level! How else could it work???

If you're saying, "oh, a transistor changes, but not in the right way", you're making an arbitrary distinction with no objective basis. All that "in the right way" means, is "in a way that fits your conclusions".

>> "any more than a molecule-accurate model of the Gulf Stream can generate hurricanes or warm Scotland."

So we define the gulf stream in terms of weather, climate, rain, clouds, etc. But simulated weather, climate, rain, clouds, etc. has no real impact on the real world, as he says.

But when we think about a brain, we are primarily concerned with behavior, with answers and information and responses and abilities. And in these cases the simulated version is JUST AS GOOD as the real version. A simulated answer to a hard question is a REAL answer, and is just as good as the answer produced by a real brain. The simulated ability to manage a body and make it walk across a room can be hooked up to a robot, and then it's just as good as what's produced by a real brain.

Unlike rain, information is real regardless of whether it's produced by a simulation or by a biological brain. And when we talk about brains, it's not at all like talking about stomachs (a common Searle example, I think). A simulated stomach can only digest simulated food. Which is interesting, and can be useful in the real world in an indirect sense...by telling us something about how "real" stomachs work.

But simulated brains can take REAL information as input, work with REAL information via simulated thought processes, and produce REAL information, in the form of ideas, responses, behaviors, and answers. And that's the crucial difference that Fitch (and Searle) misses.

Am I making any sense here, or does this all sound like crazy gibberish?

I'll respond to your other two points in a separate post!

Allen said...

Okay, second post, as promised:

>> 2. [...] And if you need quantum physics to properly explain photosynthesis, I think you’ll need it for consciousness.

Why would you think that? Photosynthesis is a chemical process by which plants use the energy from photons to turn CO2 and water into carbohydrates. How is this in any way like consciousness? What is the significant link you see between photosynthesis and consciousness?

Do you think that consciousness is a direct byproduct of the brain turning glucose BACK into CO2 and water?

In my opinion, consciousness is a byproduct of a process that is implemented (in our case) by brain cells which rely on glucose for energy, but the actual process of metabolizing glucose is not in any direct way essential to consciousness.

BTW, I'm being facetious about the glucose. This is important later. Ha!

But, as I've said before, transistors also rely on quantum effects. Everything relies on quantum effects. The idea that photosynthesis and other biological processes do also, even in very specific ways, is not particularly surprising to me, nor do I think it's relevant to consciousness.

We're interested in the brain as the seat of consciousness because changes to the brain's structure affects behavior and response. Taking in information and producing clever responses to that information is the defining characteristic of a brain.

If we were going to build an artificial brain, and we wanted to test it to see if we were on the right track, we would look at how the artificial brain processed information and the types of responses the artificial brain had to various inputs. We would NOT look at how the artificial brain processed glucose to decide if it were working correctly. We wouldn't look at any chemical process. Because consciousness isn't a chemical process like photosynthesis.

I'm trying to understand your reluctance to separate what the brain does (process information) from what the brain is (a piece of meat). But I'm not having much luck.

Clearly processing information is the most important aspect of the brain's function. Your theory of "intrinsic properties" seems to me unsupported by any evidence, and seems unnecessary. Occam's razor says get rid of intrinsic properties and go with what seems to be working in explaining human behavior: information processing and neural computation.

Which brings us to your 3rd point:

>> 3. [...] another (philosophy-derived) reason I think the micro-physics matters for consciousness is that I see the quantum level as a natural home for the intrinsic property “path”

It looks to me like you are doing a lot of bending and twisting to fit the available facts into your pre-conceived philosophy-derived reasoning, rather than following the observed evidence where it leads.

If information processing is the defining characteristic of a "brain" (a brain that doesn't process information is either comatose or dead), and it's what we use to judge the correct functioning of a brain (not digesting food like a stomach, or removing waste like kidneys), then it seems to me that consciousness must be attached to information processing in some way. And information processing is not chemistry dependent.

QM-related processes may provide a much more efficient and powerful engine for doing some types of calculations that are important to consciousness and human behavior, but that doesn't mean that consciousness IS that efficient and powerful QM-based engine.

Steve said...

Thanks. A couple of brief points, and I'll follow up more a bit later.

I do think there's a point about the cell's autonomy which is more important than you credit.

They respond creatively to the environment without any outside help (as I said without a programmer's intervention for at least the last few billion years). A computer's states change also, obviously, but everything it does had to be anticipated and programmed. And we have no computer or robot which responds effectively (yet) to a real world environment without ongoing intervention from a programmer. They are not autonomous.

My reference to photosynthesis had a link to the research showing quantum coherence plays a role. I think coherence-related effects are the non-trivial quantum effects at issue here.

Again, I don't think simulation is logically impossible (with unlimited resources); what we're debating is whether it can be successful while ignoring micro-physical structure and the non-trivial quantum effects that enter at that level.

The example of the cell and the Engel et al. photosynthesis research is to point to some special things going on at the micro-level in biology that likely play a role in producing consciousness as we know it.

Allen said...

>>They respond creatively to the environment without any outside help (as I said without a programmer's intervention for at least the last few billion years).

Here we get into the issues of evolution and natural selection. There is an approach to algorithms and even hardware design that uses similar methods of mutation and selection. Admittedly, none of them have produced anything as complicated as a cell, but we've been at this computing stuff for less than 100 years, give it time. There's no evidence that I've seen that would indicate that natural selection (or even non-natural selection), as opposed to human design, has some magical power that it imbues its creations with.
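For concreteness, here's the kind of mutation-and-selection loop I mean (a toy of my own, evolving a bit string toward an arbitrary target; it isn't meant to model anything biological):

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

    def fitness(candidate):
        # Number of positions that match the target.
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(candidate, rate=0.1):
        return [1 - bit if random.random() < rate else bit for bit in candidate]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        survivors = population[:10]                      # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(10)]    # variation

    print(generation, population[0])

The same blind loop, scaled up and run against a physical environment instead of a fixed target, is all that "natural selection" amounts to here -- no extra ingredient is being invoked.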

A physical system is a physical system. There's no difference in principle between one that was evolved and one that was built. The appearance of autonomy (or lack thereof) is just a reflection of the complexity and/or subtlety (to human eyes) of the underlying processes that drive the system. Probably also it is a reflection of how well we understand those underlying processes...the less understanding we have, the more autonomous the system seems. (see my previous point on understanding and simulation)

>> The example of the cell and the Engel et al. photosynthesis research is to point to some special things going on at the micro-level in biology that likely play a role in producing consciousness as we know it.

Again, I ask: What does photosynthesis have to do with consciousness? You make this vague hand-wavey gesture in that direction, but I don't see it. I think you need to make this explicit. What is the connection? Both may depend on quantum level effects, but so do transistors. So, that can't be the connection.

If there are "special things" in plant cells and special things in human cells, then there are special things in transistors too! What is the criteria that you use to discount the specialness of transistors?

You seem to make arbitrary distinctions all over the place, solely in order to make your preferred theory work.

Searching your previous posts for the terms "functionalism" or "computationalism" or even "connectionism" doesn't turn up very many hits. I'd be interested in hearing why you've rejected these, as they seem to offer a good framework to explain all observed human behavior, and fit very well with the rest of our observations of the universe.

They do fall short of explaining "subjective experience", but then so does every other theory. But it would seem like we'd want to use these as our starting point, as they explain 99% of what we observe.

Steve said...

The quantum effects in a transistor are used in the service of creating an explicitly classical system. The quantum effects I'm interested in are ones which actually exploit coherence/superpositions.
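To spell out that distinction in standard textbook terms: a coherent superposition and a classical statistical mixture of the same two alternatives differ only in the off-diagonal terms of the density matrix,

\[
\rho_{\text{superposition}} =
\begin{pmatrix} |\alpha|^2 & \alpha\beta^* \\ \alpha^*\beta & |\beta|^2 \end{pmatrix}
\qquad
\rho_{\text{mixture}} =
\begin{pmatrix} |\alpha|^2 & 0 \\ 0 & |\beta|^2 \end{pmatrix}.
\]

A transistor is engineered so that, at the level of its logical states, those off-diagonal terms decohere away essentially instantly; the biological claims at issue concern systems where they are said to survive long enough to do functional work.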

For me, debates about functionalism and A.I. were mostly in the B.B.E. (before blog era). The fact that functionalism, as well as all traditional forms of physicalism, fails to address the hard problem of subjective experience (I endorse Chalmers's arguments here) is the reason I haven't taken it up as much here. Purely philosophical reasons led me toward panexperientialism by the time of the blog - and I've been trying to figure out whether and how this solution is implemented in nature. The world of QM seems to offer a picture potentially consistent with the philosophical analysis.

Classical functional analysis will explain a lot of how brains and bodies work. I don't mean to be a defeatist about this. I do think, though, that the practical difficulties encountered so far (after great optimism in the period of the 60's-80's) might also be pointing toward the quantum level.

Allen said...

>> The quantum effects in a transistor are used in the service of creating an explicitly classical system.

There is no such thing as a "classical system". All physical systems are constructed from base materials that are susceptible to quantum effects, therefore all physical systems are quantum systems. We can use approximate "classical intuitions" to work with these systems, but that's just a useful fiction.

So at the smallest level, things don't work in a way that is intuitive to us, because the things that are intuitive to us are the situations that we've evolved to deal with.

We are aggregates, and thus reducible to smaller components. The nature of these components is irrelevant. They can interact via quantum mechanics or Newtonian mechanics or relativistic mechanics, or some strange hybrid of all three. It doesn't matter. In all cases, the question remains, why does this add up to consciousness?

Even if you ascribe "proto-consciousness" to the subcomponents, that still doesn't answer the question, or even move you closer to an answer, because my conscious experience is unified. So the question just becomes "how do these proto-conscious components combine to form a single unified consciousness who thinks of himself as Allen".

Which is really the same question as "how does my consciousness emerge from my non-conscious components". Who cares whether my subcomponents are conscious? Subcomponents are subcomponents. The question is, why am "I" conscious!

The problem has nothing to do with quantum mechanics, the problem has to do with how subcomponents (conscious or not) combine to form aggregate conscious entities.

So really, I don't see how your theory moves us any closer to resolving anything.

Steve said...

>>So the question just becomes "how do these proto-conscious components combine to form a single unified consciousness who thinks of himself as Allen".

>>Which is really the same question as "how does my consciousness emerge from my non-conscious components".

I disagree with this. The first problem is essentially a scientific question; the second is an age-old metaphysical dilemma.

>>So really, I don't see how your theory moves us any closer to resolving anything.

Well, between the two of us I'm sure we'll get it resolved eventually!

Thoughts said...

Nice blog, Steve, lots of interesting snippets. Have you read Zeh's paper about observers as space-time points? What is interesting in this paper is that IF conscious observation is like Bishop Berkeley's "passive ideas", then the conscious observer is best modelled as a space-time point. Note that a space-time point is not the same as a 3D point but corresponds to the apex of a light-cone.