Tuesday, March 25, 2008

The Periphery is the Periphery

A philosopher named Jason Ford offers a dose of sensible phenomenology in a well-written article called “Attention and the New Sceptics,” which appears in the latest issue of the Journal of Consciousness Studies (unfortunately not freely available online). The subject is the proper interpretation of the various interesting studies on inattentional blindness and change blindness.

In various ways these studies show that our ability to make judgments about phenomena on the periphery, when attention is elsewhere, is poor relative to our ex ante expectations. (Think of the research using the now-famous video of people passing a ball among themselves, and the inability of most viewers to detect the entrance of a gorilla when asked to focus on counting passes.) Ford’s article discusses the work of several philosophers who take these results as evidence for a broader-reaching skepticism about the reliability of consciousness: they think they can argue that we don’t have the conscious experience we think we do. Ford argues that such skepticism is not warranted by the research. Careful analysis of focal and peripheral perception shows that they are fundamentally different in character when it comes to their availability for detailed judgments and comparisons. It is a mistake to hold the periphery to the same standard as the focus of attention: “Being vague and nebulous just is the way the periphery registers in consciousness”.

Once this is acknowledged, one cannot argue from these results that we don’t have the conscious experience we think we have. It remains a viable assertion that when we claim we know what it is like to experience what we’re focused on, we are not in error.

By the way, in the course of the article, Ford does a good job reminding us what such a claim of infallibility is and is not. The claim says nothing about the following ways we can be mistaken (p.62):

"We can be mistaken about the proximate causes of our experience… We can be mistaken about how our biology produces experience…We can make mistaken categorical judgments when we try to classify our conscious experiences…Our memories are fallible."

To this list, he says, we can add that we need to be very careful when making claims about the content of the periphery of attention. But skepticism about our knowing what our own experience is like is unfounded.

Endnote: it’s been a while since I’ve written on phenomenology – here’s the list of posts with that label. These are mostly focused on the key insight of continental thinkers: analysis of everyday lived experience (now often called primary or core consciousness) offers the best hope for insights in the philosophy of mind. The other side of this coin is that higher-order reflective self-consciousness (a derivative latecomer in evolution), while central in our lives, can give a misleading picture of how experience fits into the natural world.

18 comments:

Eric Thomson said...

Mandik and I discussed similar issues about the fallibility of qualia judgments here.

I am often wrong about faint experiences to which I am intently attending (i.e., my attention is focused right on the relevant bit of the world). E.g., is that a tickle or an itch or a mild pain on my leg? Would he be OK with that?

I tend to agree that many people make too much of the change-blindness and inattentional-blindness results, as if such results imply consciousness is an illusion or something. The results are very cool without having to add such junk.

Mike Wiest said...

I'm not able to read the target article...but here's my two cents about the question of whether (or how) one can be wrong about one's own conscious state.

I don't see that you can be WRONG about whether that feeling is a tickle, itch, or mild pain--you just might be unsure what to call it. We say a perception is wrong when it fails to accord with the empirical reality that others can verify. So when you fail to notice the gorilla, you aren't wrong about your conscious state, you're wrong about what's on the screen. This takes nothing away from your perfect knowledge of your own conscious state. I can't conceive of any way one COULD show consciousness to be wrong about ITSELF--it can only be wrong about some external reality we interpret it as representing.

So I tend to side with those that say we can't be wrong about our conscious state--at the same time I fully endorse the list of ways our judgments and memories about those states can err.

The skeptical objections that I've come across seem to center on issues of vagueness or uncertainty in our conscious states. But I don't see any need to invoke distinctions between the periphery and the focus of attention. I would submit that perfectly definite conscious states can nevertheless be vague with respect to certain questions. For example, I think there is a Borges story that discusses a daydream of a flock of birds, in which the number of birds in the flock is indeterminate. My point is that we are able to experience conscious impressions that are indeterminate about details. But the skeptics I'm (vaguely) thinking of seemed to require that someone who claims to experience a full visual field should be able to answer detailed questions about every pixel of that visual field. Even with a perfect memory I don't think that is a valid criterion to impose on consciousness.

I'll just add a little concrete example to support my suggestion that a conscious state can be definite in some sense but also indeterminate in some other sense. It would be a--you guessed it--quantum coherent state, such as a laser, which has well-defined macroscopic electric and magnetic fields, but in which the number of photons is indeterminate. In other words, the laser is in a superposition of different number (of photons) states. Like our memory of a flock of birds is in a superposition of different number states...
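For concreteness, here is the standard quantum-optics way to write that down (textbook material, nothing specific to the consciousness debate): a coherent state expands as a superposition of photon-number states,

\[
|\alpha\rangle \;=\; e^{-|\alpha|^2/2} \sum_{n=0}^{\infty} \frac{\alpha^n}{\sqrt{n!}}\, |n\rangle ,
\]

so the field amplitude is sharp (set by \(\alpha\)) while a photon-number measurement is spread over a Poisson distribution with mean \(\langle \hat{n} \rangle = |\alpha|^2\): definite in one respect, indeterminate in another.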

Eric Thomson said...

Just to be clear, and I think nobody would disagree, this isn't about whether an experience is right or wrong: that would be confused (like asking if a tree is right or wrong). Rather, it's whether our judgments about our experiences can be wrong, and I don't see why people think it is such a big deal if they can be wrong.

So, Michael says we can't be wrong about our conscious states, but our judgments about them can be wrong. To me, being wrong about our conscious states is just to make incorrect judgments about them. Perhaps he means that we can't be wrong in the judgment that there is a conscious state there (though I would be careful of being too confident about such things, given people with blindness denial, which hasn't been much explored yet by the philosophers).

There are many ways such judgments can be wrong: you might be confused about the meaning of some of the terms, you might make a mistake because the experience is dim (e.g., I thought I felt an itch but upon more reflection over time I realized it was a pain; I have also mistaken a stomach ache for hunger), the experience may be extremely complex (e.g., complicated feelings and emotions about loved ones).

I really don't know what rides on this, though. Even though we can be mistaken about our experiences, that doesn't mean experiences aren't real or anything silly like that (any more than errors in my judgments about trees imply that trees don't exist).

I think there may be a lurking kind of sense-datum foundationalism in many people's minds. They think that experiences (not judgments about experiences, but somehow raw experiences themselves) are a foundation for knowledge, and then they conflate such experiences with judgments about such experiences, and think that by avoiding error they have a solid foundation for knowledge, a way to avoid skepticism. I think that's baloney (for one, such ridiculously high standards leave us stuck in a solipsism that no Cartesian has been able to argue their way out of), but many people I know (present company excluded?) do have that foundationalist epistemic picture in their minds.

At any rate, I think experiences are real, and are different from judgments about experiences (e.g., I doubt if my dog has judgments about experiences--he most likely just wants the cookie I'm holding), and see no reason to make the latter immune to falsity.

Anyway, I think this is orthogonal to the original post. None of what I've said suggests experiences aren't real, and perhaps the original paper was getting at people who try to draw out 'consciousness is an illusion' type conclusions from inattentional and change blindness studies.

Steve said...

Thank you guys for stopping by here.

I think there is a knowledge claim here, but it is very limited. It is that I know what my experience is like in the moment I'm having it. Once I start classifying the experience, and also once some time passes, then errors can come in. But the claim is that this minimal knowledge is enough to forestall a thorough-going skepticism about whether any part of experience is reliable; from such skepticism it is a short step to saying experience is completely illusory. In other words, how do I maintain both that experience is real and that nothing about it is reliable?

After reading Eric's last comment, it just occurs to me that what makes this tricky is that in any research setting, the very process of dredging up a report may lead the subject into potential error. If so, we will have trouble verifying in such a setting that the limited claim is true. On the other hand, we won't be able to say it is not true either.

I'm comfortable saying that the burden of proof is on the skeptic here; what do you think?

(P.S. I'll have to read that Pete Mandik paper).

Mike Wiest said...

Hi Guys

Well, in answer to Eric's question about what I thought consciousness is infallible on, I think I'm happy with Steve's formulation, in which we're only infallible about the moment we're having the experience. I think that would include being sure that when you think you're conscious, you are.

I'm afraid I have to agree also with the line of thought suggesting that this claim might be unverifiable and unfalsifiable. So Eric asks what's riding on this assertion about consciousness' infallibility, since neither he nor any of us think that being error-prone makes consciousness UNREAL. Well we all know at least one respected philosopher who says experience is completely illusory, so the "skeptic" is not entirely a straw man. Steve's response suggests a connection between the RELIABILITY of an experience and its REALITY.

Now, while I want to defend our knowledge about our conscious state right now, I'm not sure how to interpret a claim that it is reliable. That seems to imply that one can check to see whether the conscious state was correct or not. And that seems to imply that we are talking again about agreement with some external objective reality. But it can't fail to be in agreement with ITSELF--I'm just nervous calling that reliable because it seems tautological. As if A could fail to be A.

Maybe one could evade the unverifiability problem somewhat with sophisticated psychophysical probing methods, and demonstrate some kind of "asymptotic self-consistency" at short time intervals to support the knowledge claim. Is this the kind of evidence that Jason Ford uses? That would be a kind of reliability that wouldn’t seem to be a logical necessity.

Regarding the burden of proof, it seems we're all three of us comfortable putting the burden on a skeptic about the existence of consciousness, but Eric seems to retain a skepticism about our other knowledge about our own conscious state at this very moment. I don't think Eric's examples of errors about our conscious state fall into the same class as "errors of judgment about past states." Again they seem to be about whether the state is confused or dim or complex--to me those are not errors about our conscious state, those ARE conscious states. A confused state is precisely a confused state--it is unreliable about what's happening in the objective world, but it is totally certain that it is confused...! But Eric might have in mind some kind of experimental procedure in which one’s reports could disagree with what one actually experienced a moment ago—in which case he might also be willing to put the burden of proof on a skeptic about our knowledge of right now.

I don’t think any of this makes me a sense-datum foundationalist, but I don’t really know. If it does I hope you can still respect me. I don’t think I am because I think that our experiences can be judgments, and often are, whereas Eric describes (pejoratively) an idea of infallible sense-data that we make fallible judgments about. In my view all our experiences might be judgments, and there might never be any pure “sense-data,” and the judgment-experiences might all be wrong about whatever they are about—but they cannot be wrong about what they feel like.

Eric Thomson said...

I guess I should track down the paper at some point...

The assumption I am making that judgments and experiences are mutually exclusive is not based on strong data. I base it on weak evidence, such as our inability to change experiences just by changing our judgments (e.g., illusions that persist despite our knowing, and correctly judging, them to be illusions). But it could simply be that there are two judgment systems: one modular and computationally encapsulated that generates some experiences, and another more "central" one under more voluntary control, the one that judges that the experience is a cool illusion.

I think that there are different types of vagueness. Yes, I can have vague, weird experiences, or vague ill-formed judgments, but they don't have to go together. E.g., I believe I have had clear experiences and weird judgments (e.g., clearly my stomach was aching mildly, but for some reason I didn't classify the experience correctly).

This does depend on my trusting my ability to look back in time and say, "Wow, that experience stayed the same; how did I ever think it was hunger when I can now so clearly see it was an ache?"

As far as the "knowing what it is like" formulation, I am not sure how saying "X knows what it is like to experience E" is different from saying "X is having experience E."

Do you have to presently be seeing red to know what it is like to see red? Or do I know what it is like to see red even though I am not presently seeing red?

I think all this jargon sometimes obscures what most people are really interested in: why the hell do we have experiences? Whether we call experience knowledge (know-how, know-that, knowing-what-it-is-like); whether, if we do call it knowledge, we want to say it is infallible; whether our knowledge of experience is theory-laden or unmediated by theory, etc. All these questions seem tangential.

The problem is when the experience-phobes try to use these questions and words to somehow argue that experience isn't real (e.g., we can sometimes be mistaken about our phenomenology, therefore we can be mistaken about all phenomenology).

You might be interested in this paper called "The unreliability of naive introspection" by Schwitzgebel.

Here is the abstract:
We are prone to gross error, even in favorable circumstances of extended reflection, about our own ongoing conscious experience, our current phenomenology. Even in this apparently privileged domain, our self-knowledge is faulty and untrustworthy. We are not simply fallible at the margins but broadly inept. Examples highlighted in this essay include: emotional experience (for example, is it entirely bodily; does joy have a common, distinctive phenomenological core?), peripheral vision (how broad and stable is the region of visual clarity?), and the phenomenology of thought (does it have a distinctive phenomenology, beyond just imagery and feelings?). Cartesian skeptical scenarios undermine knowledge of ongoing conscious experience as well as knowledge of the outside world. Infallible judgments about ongoing mental states are simply banal cases of self-fulfillment. Philosophical foundationalism supposing that we infer an external world from secure knowledge of our own consciousness is almost exactly backward.

Highlighting mine, as I like that last sentence. Frankly, the paper in general didn't impress me.

Steve said...

I think I agree a bit more with Mike's take, but maybe the fine points here aren't too important. Eric makes a very good point in his last comment. For some of these philosophers, the arguments over these issues seem to be a kind of "proxy war" for the underlying metaphysical debate on the status of conscious experience: a debate which is probably untouched by these attempts to harness experimental results in phenomenology.

Mike Wiest said...

I'm not sure about the judgment vs experience issue. I'm open to being convinced either way. I thought at first that you (Eric) seemed to be embracing (tentatively) the position that you labeled "foundationalist." But I think I understand now--you may accept the segregation of judgment and experience, you just don't accept that experience is a solid foundation for knowledge. And that fallibility applies (in your view as I understand it) both to knowledge about the external world and to knowledge about the experience itself.

I'm still not entirely comfortable with the examples of "error" like deciding that this feeling should be called a pain rather than a hunger. But I think I can relate to this: "As far as the "knowing what it is like" formulation, I am not sure how saying "X knows what it is like to experience E" is different from saying "X is having experience E."" I think I am agreeing that "experiencing" is the same as "knowing what it feels like." (I just find "what it feels like" to be more meaningful than "what it's like.") I can also agree with Eric's sentiment that a lot of this debate can come off as tangential or purely semantic.

In terms of consequences of this particular debate, Steve has suggested a possible slippery slope into the nonexistence of consciousness. I guess we have to answer the skeptics (however kooky we judge them) with arguments rather than dismissal. I mean, if we have the energy. Could there also be a relation to the question of epiphenomenality of consciousness? I can't quite make the connection...

Steve, sorry to do this, but since we can't read the article can you tell us what kind of evidence Jason Ford marshals against the skeptical arguments? The only way I can see to make the debate be about empirical evidence is through some kind of "consistency at small time intervals"--does that make any sense to y'all?

By the way Steve, thanks for this and your many other interesting posts. (I checked out the fetal pain and superfluid universe posts recently...)

Steve said...

Thanks Mike. I wish the paper were online, because I'm too lazy to summarize it in a lot more detail. But just so you know, Ford's paper isn't a comprehensive review of all the evidence. It is a critique of the work of several other philosophers who have made (according to him) too-broad skeptical arguments based on the errors involved in our peripheral perceptions. He is saying that they have not shown what they claim to show. That's the bulk of the paper. He himself believes that we can claim to know what our conscious experience is like in the focus of attention at the time it is occurring. He acknowledges that there are or will be more arguments made against this position -- but he feels he has shown that the particular challenges he discusses fail.

The authors he discusses most are Dennett, Susan Blackmore, and Eric Schwitzgebel, although in the latter's case he is referencing an older paper than the one Eric Thomson linked to above.

Steve said...

I wanted to throw another comment in after thinking about Eric's idea about two "systems" being involved. I think that is right. There is the engaged and the detached, or the pre-introspective and the introspective. I think the special epistemological claims apply to the former. I also think the engaged system has priority, even though the two often seem to run in parallel. I mean priority in the sense of evolutionary priority: the engaged (in-the-flow) feeling of what-it's-like was there first, and the introspective (conceptual, language-oriented) system came later.
I had an old post or two about this on this blog - like this one: the limits of introspection.

Mike Wiest said...

Cool. The engaged/detached or pre-reflective/reflective distinction sounds plausible to me. I guess the reflective could be conceptualized as willfully deciding to make a memory part of our current consciousness.

I think I'll try to read the Schwitzgebel paper Eric linked to get a better sense of what the skeptical arguments are.

By the way, I noticed the mention of Libet in your old post. Did you ever study/post on the great back-referral debate?

Steve said...

Yeah. I had become pretty convinced of the veracity of that distinction when I used to think about this more. I had read Heidegger and Merleau-Ponty years ago and took this as one of their central lessons.

I read the Schwitzgebel paper - and it is an interesting catalogue of observations about the introspection of our own phenomenology. But if you buy into the distinction we're discussing, the observations don't have implications for the "hard" part of the mind/body debate.

I don't think I blogged on Libet beyond that little bit in that post.

Blue Devil Knight said...

Clearly there has been a distinction between judgment/perception or experience/concepts going back probably to Kant (at least in root form, with his forms of intuition versus categories of the understanding). Most of them claim that there are not just two systems, but two systems with qualitatively different types of representational formats. Merleau-Ponty is great, tries to give experience priority, saying that conceptual activity grows out of it. Others try to give conceptual judgments priority, or even hegemony, saying that is all we have (I'm not sure of any examples, but I think Merleau-Ponty picks on some people on that front--a lot of modern-day naturalists are like this, some versions of Dennett).

I look at them as two systems that emerge in parallel, much like Kant actually. So it is natural for me to think in terms of judgments about the world (judgments about experience come much later, are a relatively sophisticated development on the scene, almost as liable to be error prone as judgments about trees). Ultimately this is an empirical psychological question, of course. Dretske has developed it more than most I have seen.

This is my chess blog moniker I'm signed in as...

Steve said...

Thanks. One practical problem I worry about when it comes to empirical research is that once you ask a subject for a report on their own experience, you've moved things irrevocably onto the reflective/conceptual track. Maybe imaging can help.

Mike Wiest said...

I read Schwitzgebel’s paper that Eric linked to, so maybe I have a bit better sense of the skeptical viewpoint, although he did not mention change blindness.

Here’s how I saw his points:

He starts with an onslaught of difficult questions—are emotions ‘bodily,’ how do we define joy, when does a pain become an itch, does thinking have non-sensory phenomenology, etc. We are supposed to be so impressed by the difficulty of answering these questions, and by the fact that people disagree and change their minds, that we stipulate that we don’t really know what our experiences are like. He claims “it’s not like perfectly well knowing what particular shade of tangerine your Volvo is, stumped only about how to describe it. No, in the case of emotion the very phenomenology itself…is not entirely evident.”
To me these questions mostly are like the Volvo color—we just don’t know how our feelings relate to these highly theoretical concepts like ‘emotion,’ ‘sensory image,’ etc. We don’t know whether to call this feeling a pain or an itch—but that is arbitrary, a matter of definition. As he says, “It doesn’t tell against the reliability of a stock quote program if it doesn’t describe the weather.”
Also, many of the questions are about comparing the varieties of experience, which means we are no longer talking about CURRENT consciousness, so those examples are not relevant.
The point about how we can change our mind about how ‘detailed’ our visual field is, is interesting. Still, I interpret such a situation to mean, I previously felt that I could answer a question about any part of my visual field. After testing that, I find that I can’t say what card is in my periphery. So I made an error about how well I could resolve external objects and answer future questions. But there is no way for me to be wrong about feeling like I could answer the questions in the earlier case. That is how I felt. Now I don’t feel that way anymore, but I still don’t have an empty spot in my visual field, I just have a blurry picture of a card like I did before. If you say, why didn’t I know it was blurry before—maybe I did, if that had been the question I was asked. But if the question was about how ‘complete’ I feel the visual field is, then I will answer that it covers the whole visual field! And it does!

Another point he made was about saying “I’m not angry” when he actually was. I think he answers that one himself when he says the anger could be unconscious or he could be mislabeling it because of ulterior motives.

He also made a case that our perception of external objects is more reliable and stable than our perception of our own perceptions. I would submit that the ‘stability’ he experiences about the tomato and the stack of papers is a construction of his own mind. Cf. color constancy, etc. The real world is highly variable--our mind identifies certain constellations of sense-data (forgive the term) as stable objects despite changes in the sensory inputs. So the stability of the external world depends to some extent on the reliability of our brain processes that generate consciousness. So I don’t see that it supports turning Descartes upside down.

Finally, he describes an objection that says one can’t be wrong about how things appear, and says this objection hinges on an equivocation between ‘epistemic’ seeming and ‘phenomenological’ seeming. To me this is a distinction without a difference. Phenomenology IS our conscious epistemic seeming.
The way he argues this point makes me think that perhaps the difference between skeptics and infallibilists stems from a different way of conceiving experience. It seems the skeptics tend to speak about ‘judgments about our experiences,’ which seems to imply a view of phenomenology as an object that a separate homunculus (or unconscious reporting module) looks at and therefore can be mistaken about. (I think Chalmers talks this way too…)
In my view on the other hand, experiences ARE phenomenology. They may or may not be judgments, and may or may not be about external objects. But experiences are not about some separate thing called ‘the phenomenon.’ The phenomenon is the experience. If one talks about making a judgment about some experience, then likely we are talking about an after-the-fact judgment about some PAST experience. But saying this perceptual judgment-experience I am having right now can be wrong about what it feels like, is like saying it can be different from itself.

So, since the infallibilists agree that our reports can be mistaken in many ways, and Schwitzgebel (tentatively, sec. X) agrees that epistemic seeming can’t be wrong about itself, could the debate here arise from a difference of terminology? I.e. if you separate ‘epistemic’ seeming from ‘phenomenology’ you can talk about disagreements between the two, but if you see phenomenology as our conscious epistemic seeming, then all the putative ‘errors’ would involve problems in translating the phenomenology into a report…?

The part where he describes our overconfidence in our introspection, because “we never see decisive evidence of error,” is pretty funny. I guess I still don’t see decisive evidence of error (about CURRENT consciousness) after reading his paper.

Blue Devil Knight said...

Thanks for summarizing it--it reminds me why I didn't like the paper all that much.

If one talks about making a judgment about some experience, then likely we are talking about an after-the-fact judgment about some PAST experience. But saying this perceptual judgment-experience I am having right now can be wrong about what it feels like, is like saying it can be different from itself.

Don't you make judgments about experiences you are having right now? Again, if you are saying that you can't have experience X and not have experience X at the same time, I would probably agree.

Mike Wiest said...

Yah, sorry it got so long.

In answer to your question, I'm not sure. If I'm making a judgment about my current experience, isn't that judgment part of the experience? Having posed that question, I guess it depends on whether we think judgments have a conscious phenomenal aspect. And I'm not sure about that. With all my uncertainty it sounds like I'm arguing for the skeptical position, doesn't it?

But I guess I lean towards thinking that we can be aware of when we are making judgments, and to that extent they can be conscious, so that when I'm consciously judging something about my current conscious state, that judgment is part of the conscious state. So some of those conscious judgments could be about my current conscious state. Then I guess the question is whether any of those judgments are infallible.

I think I would need to think about them case by case. But my bias is to think that they can be wrong about lots of things, especially WORDS, but they can't be wrong about what the subject is experiencing. They could be lies or confabulations as in the "I'm not angry" example. We don't care about that. The subject could be using words in a way that listeners don't understand or agree upon--but that's not relevant here either.

So what would a relevant kind of error be like? Maybe judging that you're having a pain when you're not. Or vice versa. We can even imagine that this could be testable, under a future theory of consciousness. One might be able to observe the (lack of) correlates of pain experience (obviously these would be distinguished in the theory from mere nociception) and tell the subject that her judgment ("No pain") is incorrect. It seems to me that even if we had every reason to believe that the theory was perfect, we would have to attribute the error to the reporting process, not the experience itself.

I'm not sure though, and this is getting long again...