On a Phenomenal Confusion about Access and Consciousness

Presenter: Daniel C. Dennett, Tufts University


51 Comments

  1. Terrific talk, Dan! I also think your paper with Cohen brings out the core issues really vividly. I’m inclined to agree that separating phenomenology from access in a perfectly sharp way does take phenomenology outside the scope of science. But: Does the separation have to be perfectly sharp? What if phenomenology and access are just somewhat misaligned? How would that affect your argument?

If we do end up taking phenomenology outside the scope of science, we seem mainly to have dubious armchair intuitions and difficult-to-specify theoretical virtues to rely on in formulating theories of phenomenal consciousness. The proper result of heading down that path is probably some pretty substantial skepticism. And indeed Block does end up pretty skeptical in some of his work, e.g., “The Harder Problem of Consciousness”. Would you say I’ve accurately captured what you see as the choice? Either allow that patterns of access are all there is to consciousness, or fall into skepticism? And if that is the choice, what’s the argument against choosing skepticism?

  2. I like your notion of competence without comprehension, Dan. But I have some reservations about your claim that there can be no phenomenal consciousness without phenomenal access.

    1. It seems to me that if you hold this view then phenomenal access and phenomenal consciousness are equivalent concepts. Would you agree?

    2. If you don’t agree, how do they differ?

    3. If you do agree, what is it that is really accessed by phenomenal access?

  3. If to have access to something means to be aware of it, as it most certainly seems to, then THAT would seem to be the very core of what is meant by “consciousness”, i.e., that it is the state of being aware of something (as in being conscious vs. being unconscious).

    If, then, “phenomenal” (“phenomenality”) means the state of having experiences of something, whether of sensory information or memories or thoughts or emotions, or concepts, etc., then it would seem to follow that there is no phenomenal consciousness without access to it, even if we can make distinctions between the terms. Is there something it is like to be conscious? Well the real question must be whether there is something it is like to be conscious of this or that. A bat on the wing is presumably aware of what it feels like to be soaring above the ground, to be getting sound echoes via its sonar system, of the objects the sonar is picking out, etc., even if it may have no sense of itself as a self, of its existence as distinct from its world, and so forth. Awareness can come in many degrees as we may discover in our own lives (without ever being batty).

The question raised by Dennett here is to what extent phenomenality and access are dependent on one another and, more importantly, which comes first in the dependence hierarchy. Dennett’s answer, that phenomenal consciousness depends on access consciousness, offers a reasonable account of how experiences are made, i.e., they can be explained as the outcome of a complex set of processes in brains which link a range of inputs to a range of retained prior inputs, which form layered, networked and overlapping representations imaging the complex world in which the brained entity operates.

    Although experiencing seems somehow unique in a world like this, a world that consists of what is experienced, and so seems to cry out for an account that differs radically from the accounts offered for everything else, there’s no strong reason to reject a view like Dennett’s, i.e., that the phenomenon of phenomenality is just a function of the way in which a series of perfectly physical processes interact under certain conditions to build a representational world out of external inputs which closely track the way things in the world, the things responsible for those inputs, are arranged. Indeed, why should we expect anything more mysterious than that, even if the seemingly radical difference between experiencing and what is experienced gives an appearance of mystery?

    Understanding this seemingly strange phenomenon of phenomenality (experiencing experiences as part of a world consisting of the things that are experienced) presents us with a picture consisting of two very different types of things, hence the idea of dualism.

    How much of this, though, is just due to the limitations of language itself? Just trying to make explicit distinctions between 1) experiencing as a state of being in the world, 2) experiencers as the entities that have those states, 3) experiences as the states they have, and 4) experiences as the things that they have them of, produces all sorts of linguistic conundrums. And yet the issue of what consciousness is and how it fits into a world of non-conscious objects persists. Mere reference to the problems we have in talking about it never quite seems to be enough. Good talk and I thoroughly enjoyed it though it’s not the first time I’ve heard it.

  4. Interesting questions, Arnold.

    Do you have a view on the answer to your own question?

    To me, what is important is simply “phenomenal qualities”. Access is just what the corpus callosum must be doing to unify a neural correlate that can be experienced as redness in the left hemisphere with a different neural correlate that can be experienced as greenness in the right hemisphere. All this is phenomenally unified, surely by the corpus callosum, so that we can experience them both at the same time and thereby say whether or not they are qualitatively the same. So, if my interpretation of “access consciousness” is correct, this would be the difference between them; I would agree, and this would be my answer to your number 2.

    Does anyone agree or disagree?

  5. Hi Dan,

    Around 12 minutes into the video, you talk about red stripes not existing in the brain.

    Would you agree that one theory might predict there is a neural correlate that is responsible for exactly a redness after image? And it might predict that knowing the necessary and sufficient causal conditions of such correlates would allow one to perfectly predict when someone is and isn’t experiencing such an after image. If that were the case, wouldn’t expecting these neural correlates to reflect something like 650 nm light seem kind of silly? So, it seems to me such a theory would be perfectly adequate and would explain everything, as if there were a redness quality in the brain; it’s just that we are blind to it because of the quale interpretation problem. (see: http://canonizer.com/topic.asp/88/28 ) Surely you must at least agree that there is at least some particular neural correlate of this after image in the brain?

    Upwards,

    Brent Allsop

  6. One thing puzzles me with reference to the “perfect experiment” in the Cohen and Dennett paper. They do not require reportability to attribute consciousness – for example in complete locked-in syndrome, or perhaps in a supposedly persistently vegetative patient with fMRI activation patterns suggestive of consciousness. However, in the perfect experiment the hypothesis that phenomenal consciousness is present without access is considered unfalsifiable and thus outside the realm of science (which it may be). But couldn’t the same be said for attributing consciousness to completely locked-in and (supposedly) persistently vegetative patients with fMRI activation?

  7. swmirsky wrote: “Is there something it is like to be conscious? Well the real question must be whether there is something it is like to be conscious of this or that.”

    It seems to me that we go round and round through the same revolving door unless we state our working definition of consciousness. I claim that we are conscious if and only if we have a brain representation of *something somewhere in relation to our self*. This is phenomenal consciousness — what it is like to be conscious. To be “conscious of this or that” is merely to isolate and perceive some *particular thing* somewhere within our global phenomenal surround (our phenomenal world). It follows that we must first be conscious before we can perceive any particular content of our conscious experience. So how does this relate to the brain? I have proposed the following:

    a. We are not conscious unless we have an experience of something somewhere in relation to our self.
    b. Experiencing something somewhere requires an internal representation of our surrounding 3D volumetric world from an egocentric perspective.
    c. We have no sensory transducers that can detect the 3D volumetric space we live in.
    d. Therefore, the human brain must have an innate biological structure that can provide us with a volumetric analog of our personal world from an egocentric perspective.

    In thinking about consciousness as a biological phenomenon, I arrive at the following working definition:

    *Consciousness is a transparent brain representation of the world from a privileged egocentric perspective.*

    Getting back to the original question concerning a confusion about access and consciousness, it seems to me that the concept of “phenomenal access” is too vague to arrive at any sensible conclusion. When it comes to conscious cognition, I prefer to use the standard psychological term “perception” (as distinct from sensation) instead of “phenomenal access”. If we think this way, there should be no confusion, because to perceive something demands that we direct attention to something at a particular location within our occurrent brain representation of the world and then parse that representation of something out of its global context to detect it and form a judgement about it. It follows that consciousness, as such, is a prerequisite for perception (phenomenal access) and is inclusive of perception. Bottom line, I agree with Block that phenomenal consciousness “overflows” access consciousness if access consciousness is taken to mean perception.

  8. Trehub, whether top-down attention is necessary for consciousness is of course in contention. One could, of course, just define consciousness as needing top-down attention, but this just avoids the issue. Now, if you agree with Block about overflow, doesn’t that contradict your initial definition of consciousness as needing top-down attention?

  9. Gene L wrote: “Now if you agree with Block about overflow, doesn’t that contradict your initial definition of consciousness as needing top-down attention?”

    Not at all. I didn’t define consciousness as needing top-down attention. I made a distinction between consciousness and perception (which does require attention). Here is my definition of consciousness:

    *Consciousness is a transparent brain representation of the world from a privileged egocentric perspective.*

    Our fundamental conscious representation is of a volumetric space (something) all around our self (something somewhere). In this state we are conscious without the deployment of attention to any part of our egocentric phenomenal space/world. Moreover, if we are not in a conscious state we are unable to direct attention to anything. For more details about this, and for an account of selective attention in terms of the neuronal structure and dynamics of a particular kind of brain mechanism see http://people.umass.edu/trehub/YCCOG828%20copy.pdf .

  10. Hi Dan, I had a couple of questions.

    The first was about the perfect experiment. Ned is very clear in his work that he distinguishes between something’s being inaccessible and something’s merely not being accessed at a particular moment. His argument is designed only to show that there is consciousness that is not accessed at any particular moment, not that there is consciousness that cannot ever be accessed. It seems to me that the perfect experiment argument only cuts against the claim that there is consciousness that cannot ever be accessed, and not against his more modest claim.

    Also, it seems to me that you do not address Ned’s ‘mesh argument’ here and I am wondering what you would say about it. His claim is not that there is some experiment (or set of experiments) that can show us that there is consciousness in the absence of access but rather that we want to endorse the theory that allows us to explain the broadest swath of evidence both from the neurosciences and the psychological sciences.

    You say, “It is clear, then, that proper scientific theories of consciousness are those that specify which functions are necessary for consciousness to arise” but Ned does try to say something about this. He suggests (and Dave Chalmers seems to agree) that it is the function of accessibility (not being accessed, but the making available to the accessing systems whether they actually access it or not) that is responsible for consciousness, so don’t they have a proper scientific theory of consciousness after all? You may not think that accessibility (as opposed to actual access) is good enough to do the job (I don’t either) but it is still a scientific theory of consciousness according to your standards and it is one that people as diverse as Jesse Prinz (consciousness is attention), Ned Block (consciousness is something distinctly biological) and Dave Chalmers (consciousness is a fundamental element of reality) all endorse.

  11. Whoops, I forgot my other questions! :)

    Secondly, I was wondering about your argument that Cog, if completed, would have phenomenal consciousness. Dualists like Dave could accept that a suitably sophisticated Cog could have phenomenal consciousness (since he accepts what he calls Organizational Invariance, which is the subject of another session here). If this is right (I am not a fan of OI but it is interesting) then building a conscious Cog isn’t a refutation of dualism. What do you think of this?

    Thirdly, Ned does say that there is a kind of access that is involved in phenomenal consciousness (though not a kind of cognitive access). On his view this kind of access is either something deflationary (like the way in which I smile my own smiles) or some kind of self-representation (which is something different from cognitive self-representation). Given all of this it looks to me like you and Ned basically agree on everything except what kind of access is involved. Again, you may not think that this kind of access can do the job (I don’t) but the point is that he doesn’t say that there is consciousness in the absence of all access.

    Finally, I liked the bit at the end about Ned’s strange lack of inverted reasoning and I think that you are right that this is the way he views things but I didn’t follow what the argument against thinking this was supposed to be. Is it that there is some experiment which could differentiate or decide between the two? If so, what is it? If not then how do we decide which to go for?

  12. ase.tufts.edu/cogstud/papers/replytofahrenfort.pdf

    It looks like Cohen and Dennett addressed Block’s accessible-but-not-accessed issue in their response to Lamme:

    “Again, why consider accessible, yet not accessed, states as being conscious rather than simply the product of unconscious processing?”

    At this point, it seems to be a bit of a terminological quibble: a weak form of “phenomenality” vs. “unconscious processing”.

  13. Thanks for reminding me of that reference Gene Lin! I had forgotten about it.

    Just for the record, I tend to agree with the claim that this reflects unconscious processing, but in fairness it is not as though those on the other side haven’t attempted to answer Dennett’s rhetorical question. For instance, Dave Chalmers in his ‘On the Search for the Neural Correlates of Consciousness’ (pp. 91-95 in The Character of Consciousness) suggests that this is a kind of ‘pre-experimental bridging principle’. We can arrive at it by thinking common-sensically about consciousness. For instance, no one thinks that I need to actually verbally report that I see something in order for it to be conscious; rather, it looks like it only needs to be reportable. But of course no one really thinks that language is required for consciousness, so availability for verbal report doesn’t seem right. Rather, it seems like availability for global control of behavior is required. From there Chalmers argues that this principle is implicitly used in actual consciousness science. Indeed, it looks like Dennett himself endorses this principle (though he doesn’t seem to realize it) at around 36:55 in the video.

    From another approach, Ned Block argues that identifying consciousness with accessibility allows us to see how results from neuroscience explain results from psychological science (this is the mesh argument).

    Again, you may not agree (and again, I don’t (think) I agree either!) but the point is that you need to say more than Dennett has said in order to address the issue.

  14. Hi Daniel and all
    I have a couple of questions although I suspect I may just have missed something that was already argued for…
    i) In saying the red stripes in the after image of the flag don’t exist, it seems that we could be making two claims: one about how we represent the world, the other about what is represented. Taking the latter first: to say the red stripes don’t exist is true; I see an after image, not a real flag [put another way, there is no represented object]. On the other reading, however, “red stripes” refers to a property of the representing vehicles we use to represent the world. Now part of the explanation of the after image would seem to involve a hypothesis regarding the formation of a misrepresentation, a misrepresentation which has red stripes as part of its content. Now, these “red stripes” do seem to exist, and exist in the brain at that.

    Have I missed something here? Do we wish to avoid representational analyses of experience? Or is there something wrong with saying that the content of a representation is an existent property of its vehicle (albeit a property intimately tied to interpretation/use/consumption, i.e., function)?

    ii) In some areas of cognitive science we sometimes explain phenomena with reference to specific experiences. For example, a patient forms a delusional belief that their left arm is not their own, but their niece’s, because they lack an experience of ownership over (or embodiment in) their left arm, and so have an experience of, say, disownership when attention is drawn to the arm. It is experiencing disownership which causes the delusion. Now it seems to me that if access is all there is to such an experience then we can’t say this. The delusional belief is part of the access of the (what? representation of?) disownership, and so partially constitutive of disownership being experienced. As things can’t be causes of their constituents, the explanation of the delusion in terms of the experience is no longer open to us.

    Now this is counterintuitive, but that’s OK; all theories of consciousness are counterintuitive in some way. So my question is this: what explanation of delusions like this is available if we cannot cite certain experiences as part of their causal history?

  15. COUNTING THE WRONG CONSCIOUSNESS OUT

    Yes, there was a phenomenal confusion in doubling our mind-body problems by doubling our consciousnesses.

    No, organisms don’t have both an “access consciousness” and a “phenomenal consciousness.”

    Organisms’ brains (like robots’ brains) have access to information (data).

    Access to data can be unconscious (in organisms and robots) or conscious (in organisms, sometimes, but probably not at all in robots, so far).

    And organisms feel. Feeling can only be conscious, because feeling is consciousness.

    So the confusion is in overlooking the fact that there can be either felt access (conscious) or unfelt access (unconscious).

    The mind-body problem is of course the problem of explaining how and why all access is not just unfelt access. After all, the Darwinian job is just to do what needs to be done, not to bask in phenomenology.

    Hence it is not a solution to say that all access is unfelt access and that feeling — or the idea that organisms feel — is just some sort of a confusion, illusion, or action!

    If, instead, feeling has or is some sort of function, let’s hear what it is!

    (Back to the [one, single, familiar] mind/body problem — lately, fashionably, called the “hard” one.)

    More prior commentaries here:

    http://turingc.blogspot.ca/2012/06/dan-dennett-phenomenal-confusion-about.html

  16. Perhaps the most efficient way of responding to all these comments is to pursue my exercise in Braitenbergian downhill synthesis (of phenomenality) a bit further. So there is Cog, equipped with color cameras and a “vision” system (I’m going to drop the scare quotes henceforth, but re-insert them if you’re squeamish) that parallels the opponent processes in our human color vision. So, when Cog is presented with the green flag stimulus, it not only says the things we say but says them for the same reason we do, caused by (informed by, modulated by) the same sort of internal representational state: what is going on in it is like what is going on in us. How like? In ways neither Cog nor we can directly introspect beyond just this: when challenged, we “pay close attention to our experience” and it’s pretty much like our experience would be were there an American flag in front of us. (We are, in a sense, embarrassed by such “how do you know?” questions; we just do know. If you ask me how I know there is a flag flying outside the window, I can say I see it, and moreover, if challenged, can point to it, lower it and feel and smell it, etc., etc. With a subjective flag illusion, all I can do is report what I “see”, knowing that this is not regular seeing, but is awfully like regular seeing.)

    Now I take it that as described, Cog has some sort of access consciousness—the ‘easy’ kind—since Cog’s avowals are belief-caused, not canned. Does Cog also have “phenomenality” in some sense? Does Cog have feeling, as Stevan Harnad would say? Maybe, and maybe not. (I’m exploring people’s somewhat subterranean intuitions now; please bear with me.) I think many people think of phenomenality as tightly (if not constitutively, “intrinsically”) linked to emotional valence somehow. As Wilfrid Sellars once said to me, “Dan, qualia are what make life worth living.” So I’m going to try to add this explicitly to Cog as follows.

    Cohen and I, in our discussion of the Perfect Experiment, cut all ties to autonomic systems:

    Moreover, imagine that, before the surgery, that particular shade of red would reliably agitate or excite the patient. Would the patient have such feelings now and say something like, ‘I don’t see red but I notice that I’ve gotten a little tense’? As described here, the patient would not because such affective, emotional or ‘limbic’ reactions are themselves the types of functions that we are isolating from the color area. To be excited or calmed or distracted by a perceptual state of red discrimination is already to have functional access to that state, however coarse-grained or incomplete, because such a reaction can obviously affect decision-making or motivation. (Cohen and Dennett, pp. 361-2)

    Suppose we revise the Perfect Experiment to permit the causal links to early, autonomic responses. Now our conclusion that there is no reason at all to call activity in the otherwise isolated region phenomenally conscious loses its obviousness. (I suspect that Lamme’s conviction about the Perfect Experiment is caused by neglecting to consider these variations carefully.) We can consider this further by supposing that people exhibit faint autonomic reactions to a backward-masked red patch that they cannot (or do not) access consciously (cf. Öhman and Soares, 1994). This autonomic response could be taken as evidence that although the red was not accessed, it was “phenomenal”—because we’re in effect defining phenomenality in terms of such visceral or affective effects. Would Harnad say that there was feeling in such a case? I don’t know, but I expect not, since it would be perilously close to acknowledging unconscious feeling (unfelt feeling?). But if he, or anybody else, welcomed this addition to our view, we could then go on to add just this delicate bit of functionality (early autonomic modulation via upward-bound but not-yet-accessed contents) to Cog as well. Ta-DAA! Feeling in a robot. (If not, why not?) Or, we could continue to insist that contents that are not cognitively accessed but only, shall we say, emotionally accessed, count as preconscious, not conscious. That would presumably be the Dehaene group verdict, and Cohen and I would concur.

    Note that I’m patiently trying to find the “proper” place for “the juice” that so many people insist I’ve left out. Nicholas Humphrey, for instance, draws a sharp line between imagination and perceptual experience. No matter how vividly I fantasize about or imagine a red dress, there is no “redding” going on in me, like the redding that happens when I see a red dress. Seeing a red dress has the juice; imagining one doesn’t. I have never been drawn to this position. For one thing, it is perfectly obvious that, say, fantasizing sexy properties can have all manner of autonomic, affective effects in a person. Why isn’t that an instance of the juice?

    Cog without a wealth of affective reactions to its color vision is plausibly bereft of “phenomenality”; adding those affective reactions, as just described, may strike people as an important necessary condition for consciousness, but not sufficient; it still leaves folks doubting that there is any phenomenality or feeling in Cog. But consider: Cog thus equipped can exhibit in both linguistic and non-linguistic behavior how different colors matter to it. Insisting that there are still no grounds for granting feeling to Cog would begin to look like protein chauvinism or mysterianism. (At this point I will allude to, but not rehearse, my critique of the whole embarrassing zombiephilia of philosophy.)

    In other words, I insist on putting the burden of proof on those who say the juice is still missing. If they cannot say in non-question-begging terms what is missing, if they insist that it is just obvious that the juice is missing, I reluctantly dismiss them as failing to meet their intellectual obligations. At this point I cannot distinguish them from somebody who, say, insists that left-handers are zombies or women are zombies.

    And finally, responding to Richard Brown’s question about whether a conscious Cog would be a refutation of dualism: of course it would, unless you mean by dualism something vanishingly vapid. One of the great beauties of computers (and hence robots) is that if you succeed in modeling or simulating or duplicating some apparently mysterious phenomenon with a computer, you know to a moral certainty that there is no strange causation, no new physics, no morphic resonances, no ectoplasm, no alternative forms of matter involved.

    Öhman, A. and Soares, J., 1994, ‘“Unconscious Anxiety”: Phobic responses to masked stimuli’, Journal of Abnormal Psychology, 103, pp. 231-40.

  17. Two quick addenda: Nick Humphrey tells me I’ve misrepresented his view here, so please bracket the paragraph about his views on sensation for the time being while I seek clarification. And my dismissal of “protein chauvinism” needs to be clarified: in my recent work (see, e.g., my Edge.org interview) I’ve emphasized that neurons are not to be seen as relatively simple switches that can be unproblematically replaced by a machine, but rather as agents with agendas. But the work they do is still a kind of computing, and it is accomplished by molecular-level machinery: robots made of robots made of robots. Motor proteins, for example, by the trillions, are required to get all the jobs of cognition and consciousness done. This amounts to saying that, in fact, if you want to have “strong AI” you probably need to model all the way down to the protein level, but not because proteins have some extra, non-computational contribution to make. They are as fungible as any other bit of computer hardware (Macs can do what PCs can do and vice versa). In the immortal words of Maria Muldaur, “it ain’t the meat, it’s the motion!”

  18. Thanks to Dan for his typically tantalizing talk. Three things:
    1) On judgments as the basis for feels. Dan seems somewhat torn between quining phenomenal experience out of existence and trying to show how it’s a real effect of informational access. He suggests feels might reduce to judgments, such that there is no phenomenal feel about which we judge; rather – the strange inversion – the judgment creates the phenomenal feel, which doesn’t exist in the way dualists suppose. We project it, misinterpreting it as an internal mental object in figment space about which we form judgments, when in fact it *just is* the judgment. But at this point Stevan would ask: why do some judgments end up as phenomenal, others not? Before Cog is given the right stuff, it makes behavior-guiding judgments but they don’t result in phenomenal experience.

    In “3 Laws of Qualia” Ramachandran and Hirstein suggest that phenomenal feels are something like irrevocable judgments: “I cannot simply decide to start seeing the sunset as green, or feel pain as if it were an itch.” Qualia – non-decomposable phenomenal feels like pain – are perhaps hard-wired, bottom line representational categorizations, judgments the brain makes about states of affairs in some basic respects which can’t be altered by higher order cognition. Because they are cognitively impenetrable, they present themselves as stubborn subjective givens or surds to which we sometimes pay more or less attention. So in making the higher order judgment “I’m in pain” I’m asserting the existence of a lower-order judgment that appears in consciousness as a non-decomposable qualitative feel, and I could be mistaken about that. In contrast, on Dan’s account if I belieeeeve I’m in pain (his rhetorical flourish), then I’m in pain, so I can’t be mistaken about it. I think the consensus is we *can* be mistaken about the contents of consciousness. But in any case, the question still remains: why do these lower order judgments, when embedded in higher level information processing in service to complex behavior control, entail phenomenality?

    2) On afterimages vs experience. Seems to me (a real seeming!) that the afterimage experiment is a bit misleading as a way to get at the larger issue of phenomenal consciousness. Although it’s obvious the experienced red stripe has intentional inexistence (like Sherlock Holmes, there’s no red stripe out there in the world), it isn’t obvious that the phenomenal experience of the red stripe, or any other experience for that matter, has intentional inexistence. Feels and their ethical significance (don’t mistreat feeling robots) *are* unequivocally real, otherwise Dan and the rest of us wouldn’t feel obligated to explain them. So the metaphysical status of afterimages is one thing, the metaphysical status of experience tout court, of phenomenal feels, is another.

    3) On the hard problem. Once Cog is given the right functional stuff (our stuff, for example) then as Dan rightly says above it would be chauvinistic not to grant it feels and it would be an existence disproof of *spooky* dualism (not necessarily naturalistic dualism should such be the case, see below). But this still leaves open the question of why that stuff entails phenomenality. At this point we have to keep the essential characteristics of basic feels firmly in mind as respect-worthy targets of explanation: privacy (unlike my brain, my feels aren’t public objects), cognitive impenetrability, and qualitative irreducibility, undecomposability and smoothness (e.g., basic sensory red).

    What about the representational goings-on in us (and eventually Cog) might get us somewhere *in the vicinity* of feels, thus characterized? Well, according to Metzinger and others, and very roughly, representational systems need reliable bottom line representational elements that the system itself can’t modify (untranscendable objects) and that reliably co-vary with the aspects of the world relevant to the system’s interests, at limits of resolution set by behavioral requirements and the physical world itself. The system itself won’t be able to directly represent these elements (it’s recursively limited) so these necessarily get *presented* to it as givens. And it’s important for behavioral success that the system take these *as* givens, not merely representations. Such givens can serve as data inputs for higher order judgments that take them as internal private objects (yes, I’m really in pain now) but they only become conscious when integrated into a coordinated set of behavior-controlling higher order representations, including the self model (the global access requirement) that corresponds to the experience of self in a world. More on these and other possible entailments at http://www.naturalism.org/appearance.htm#part5

    Notice there’s nothing standardly *causal* about these entailments, which is why experience is private for the system alone, not a public object produced or caused by the brain (as Dan would put it, there’s no second transduction). It’s also why feels won’t ever be shown to have a function in 3rd person accounts of behavior (and yet *aren’t* epiphenomenal), since science only traffics in observables, http://www.naturalism.org/privacy.htm But for all that, feels remain perfectly natural phenomena since after all the informational functions that entail them are observable and specifiable, not spooky. The phenomenal-physical parallelism suggested (not yet proven!) by some representational accounts of consciousness might upset the good old fashioned materialism that Dan wants to keep safe, but not naturalism.

  19. Tom, I agree with much of what you say, but I wonder about your assertion that feels won’t ever be shown to have a function in 3rd-person accounts of behavior because science only traffics in observables. Surely photons or Higgs bosons are not directly observable, yet they are important theoretical entities in science, and they enable us to understand previously unexplained observable events that are predicted to occur as the effects of the putative physical properties of these unobservable theoretical entities. On what principled grounds would you reject the possibility that feels, similarly, will be shown to have a function in virtue of their putative biophysical properties, according to the norms of science?

  20. Hi Dan, thanks for your response! You say,

    And finally, responding to Richard Brown’s question about whether a conscious Cog would be a refutation of dualism of course it would, unless you mean by dualism something vanishingly vapid. One of the great beauties of computers (and hence robots) is that if you succeed in modeling or simulating or duplicating some apparently mysterious phenomenon with a computer, you know to a moral certainty that there is no strange causation, no new physics, no morphic resonances, no ectoplasm, no alternative forms of matter involved.

    It seems like you have an overly restrictive view of dualism. (For the record, I would like to state that in no way whatsoever am I endorsing (or even arguing for) this view; I am merely concerned with getting the positions and entailment relations right.) There are some (property) dualists who claim that there are laws which connect functioning of the right sort to consciousness (one version of this is the ‘naturalistic dualism’ that Tom alludes to above). These kinds of dualists hold that whenever you have functioning of the right sort you get (non-physical) consciousness. This means that these kinds of dualists accept that there can be conscious (functionally isomorphic) robots (which will also have non-physical properties associated with their conscious experience), and they also allow that an appropriate computer simulation which simulated all of my functioning would also result in consciousness. Again, I am not endorsing this view (though I have friends who do :)); I am just wondering if this is a ‘vapid’ version of dualism and, if so, why? Is it that, on your view, since there is no *empirical* way to differentiate them we really don’t have two theses here?

  21. I think “naturalistic dualism” is a possible, but utterly unmotivated (so far), view. It takes a heavy problem and an otherwise unavailable solution to that problem to generate such a doubling of ontology. The undeniable fact that some people think they can conceive of zombies is a featherweight problem, to which dualism provides at best a trivial solution, since it generates zero testable implications.
    Here are three different vapid dualisms:
    1. When I nail a diagonal cross piece to brace my table leg, I create an immaterial Euclidean triangle which interacts instantaneously with the three pieces of wood, causing rigidity.
    2. When Dickens writes David Copperfield, he creates an immaterial world that others can then enter by reading the novel. We can’t explain their emotional and cognitive reactions in terms of ink on paper, so (“obviously”?) we have to resort to a dualism of material objects and immaterial intentional objects, fictions generated by some of them.
    3. My computer is calculating the decimal expansion of pi. Since that abstract object is an irrational number, it causes my computer to keep running forever, unless I stop it or it breaks.

    Is the property dualism of which Richard speaks like any of these? If so, it is indeed vapid, at best an eccentric way of talking about abstraction. Karl Popper’s Three Worlds comes to mind. Are we ready for a resurrection of that view? I doubt it.

  22. FEELING, DOING AND EXPLAINING

    Either Cog feels or he doesn’t. If he does, we’re just down to the usual hard question: how and why does he feel, rather than just do?

    If Cog doesn’t feel, then what he can or can’t do, and how, has nothing to do with the hard problem of explaining consciousness — it’s just about the easy problem of explaining doing-capacity.

    (But if Cog can do anything and everything we can do — can pass the Turing Test, in fact, for a lifetime — without feeling, then he does make the hard problem of explaining how and why organisms feel seem even more intractable. — I personally think anything that could pass the Turing Test would feel, but I have no idea why or how.)

    But feeling is not just about “affect”: It feels like something to see red — not emotionally, but sensorily; and it feels like something different to see yellow.

    And of course there’s no such thing as unfelt feeling.

  23. Stevan, I am so glad that you said that! For years I have suspected that your view boiled down to this, and I would have been tempted to impute this view to you now, but having just been brought up short for misrepresenting my dear friend Nick Humphrey’s position, I wouldn’t want to risk doing it again, to my dear friend Stevan Harnad. Now you have said it, in context, with all signs of deliberation, so I can simply point out what is unacceptable in it.

    “Either Cog feels or he doesn’t.” You lay down the gauntlet, using the folk-psychological term “feels” as if it is just obvious that we all know—or should know—what you mean by it. But it’s too late for that move. We have learned in dozens of cases that there is no guarantee of a clean verdict when a term from the manifest image is held up for anchoring in the scientific image. “Either apes can have beliefs about beliefs or they can’t.” “Either fish love or they don’t.” . . . . What if it turns out that apes only sorta have higher-order beliefs and fish only sorta love? What if Cog only sorta feels? In all such cases you don’t get to hold your ground and say “not good enough! I demand to know if it really loves [believes, feels]!” Cf. “I demand to know what is really red,” and “I demand to know if these events are really simultaneous.” That way you are heading down the path to an artifactual mystery that doesn’t need solving.

    But maybe you don’t mean “feel” in the ordinary language, manifest image sense (the blurry, problematically anchored sense). Maybe you mean “feel” to be a technical term, spruced up for scientific purposes. That won’t work either, as we can see if we replace “feels” with, say, “phelobizes”. “Either Cog phelobizes or he doesn’t.” What are you talking about? The burden would fall on you to define your technical term for us, as can be seen in a glance by considering the unacceptability of: “— I personally think anything that could pass the Turing Test would phelobize, but I have no idea why or how.” If you have no idea why or how, your technical term is in jeopardy of being dismissed as a half-baked proposal.

    Stevan, you are trying to shove feeling down our throats, insisting that it captures the problem of consciousness and refusing to say what you mean by it because, apparently, you think it is just obvious what feeling is. I guess you think we all just know from our own introspection what feeling is; it’s what we do when we feel (doh!). Well, it isn’t obvious. At the outset you pose “the usual hard question: how and why does he feel, rather than just do?” This is, I would say, a paradigmatic case of the rhetorical misstep of rathering (defined, with examples, in my forthcoming book, Intuition Pumps and Other Tools for Thinking). Cf. “Why is he wealthy, rather than just rich?” You may be sure in your heart that you are not guilty of rathering but we are entitled to ask you to prove it, and you keep insisting that neither you nor anybody else can, a cul-de-sac of your own devising.

  24. Dan,

    The fact that there are indeterminate cases of beliefs, feels, etc. doesn’t mean there aren’t perfectly good canonical examples of these things that we all agree exist. It looks as though you’re trying to eliminate (quine) the target of explanation – phenomenal consciousness – by saying no one has a good definition or clear conception of it, or uncontroversial examples to point to. But we do (pain, red), which is why it poses the problem it does. That we might not be able to determine if Cog *really* feels doesn’t impugn the reality of your feels or mine.

    The question is how feels (qualia) ultimately get incorporated into science (or more broadly, philo-scientific naturalism), and the answer to that question might be a successful reduction or identification of phenomenality with functions. As you suggest in your paper with Cohen, “A true scientific theory will say how functions such as attention, working memory, and decision making interact and come together to form a conscious experience.” Having admitted the existence of conscious experience as an explanatory target, I don’t see why you give Stevan such a hard time about the existence of feels, unless you’re assuming he has a spooky conception of them.

    Speaking of which: As you point out in the paper, there’s an active research program on what might be the representational, functionalist basis for consciousness (e.g., Dehaene, Kouider). I hope we agree it’s that, along with whatever other resources are eventually brought to bear, which will determine the (non-spooky) ontology, strictly materialist or not, that we end up with in naturalizing experience.

  25. Thanks for your very helpful response Dan. Personally I don’t think that this kind of dualism is like the three you point to, though I can’t say with certainty since I don’t really hold the view! I also agree that zombies are not a very good motivator; I probably dislike that stuff almost as much as you do (well, probably not, but still I don’t like it very much). But I do wonder what you might say to another possible line of motivation, one that is stronger in my opinion than zombies. We often end up positing new entities when we encounter something that we cannot explain within existing ontologies. This is familiar from the history of physics. Sometimes we get it wrong (phlogiston, aether) but sometimes we get it right (fields, electrons). Positing the existence of fields, for instance, allowed us to explain how gravity works (in Newtonian mechanics, I mean) without having to talk about action at a distance. So, too, someone might think that positing non-physical qualia would help us explain the kinds of things that Stevan (and many others) are worried about. How and why would Cog have consciousness? Because, given the fundamental laws of our world, the right functional organization is associated with (non-physical) consciousness. Now I don’t expect you to agree with this kind of motivation; all I wanted to do was point to something that seems a bit more heavyweight than zombies. Zombies get all the attention, but I think the real reason people move towards dualism is this kind of explanatory argument.

    But explain what? This gets us back to your challenge to Stevan, and those like him (and in this respect I am like him). What is it that we are trying to explain? I take it as obvious that I am conscious (and very probable that you are as well). So when we ask if Cog is conscious we mean to be asking whether she has experience which is similar to mine in the relevant respect. We need not be asking whether her consciousness is exactly like mine (it may come in degrees, as you point out) but what we want to know is whether it is similar. Even if Cog is only sorta conscious it seems plausible that in virtue of that Cog’s experience will be sorta like mine (I am fully conscious). So we might rephrase Stevan’s gauntlet as follows: ‘either Cog has something which is (sorta) like what I have when I consciously see blue or she doesn’t.’ If she does then we need to explain why she does. And many find an explanation that leaves out the similarity (in consciousness) to leave out the thing we are interested in. That is why I interpret your work as arguing that we can explain *this very thing* in terms of judgements (whereas I think the empirical evidence suggests that it is more likely to be certain kinds of higher-order judgements). So I agree that Stevan’s position is awkward. He claims (unlike you or me) that the kinds of explanations we would appeal to could not possibly do the explanatory job, but then denies that he needs to postulate the kinds of additional entities which might allow an explanation. But that shouldn’t distract us from what the prize here is: explaining consciousness (not explaining it away, but explaining what it really is).

    Finally, I wonder if I could try to steer the conversation back to the question of the relation between access and phenomenal consciousness (something which I see as distinct from the above questions, though I gather this is not a popular view :). We can all agree that even Ned thinks that some kind of access is required for phenomenal consciousness. The question is *what kind of access*? Ned has tried to present an argument that it is not cognitive access (where this in turn seems to mean that it is not represented in working memory). I tend to think that this notion of access is really too thin to do the work that is required, but it is at least a possibility that cognitive access comes apart from (what we might call) phenomenal access. Perhaps, as I suggested in earlier comments (numbers 11 and 13), phenomenal access amounts to availability for cognitive access. Again, I think we can give (what to me are) convincing arguments that this isn’t right and that this kind of activity reflects unconscious processing, but this is an improvement in that we can at least imagine the kinds of evidence that would push us one way or the other, and that should be enough to demystify the notion of phenomenal consciousness (to be clear, not the strange kind that involves no access whatsoever, but this other kind that involves a kind of access that is not cognitive (and not emotional)). The question then is just as Ned says it is: which of these views is better supported by the totality of the evidence we have? I think the evidence favors cognitive access; he thinks it favors non-cognitive access. Who knows which way the real totality of evidence will point, but we at least have a way forward, don’t we?

  26. Dan, you write:

    “In other words, I insist on putting the burden of proof on those who say the juice is still missing. If they cannot say in non-question-begging terms what is missing, if they insist that it is just obvious that the juice is missing, I reluctantly dismiss them as failing to meet their intellectual obligations. At this point I cannot distinguish them from somebody who, say, insists that left-handers are zombies or women are zombies.”

    There are enough huge differences between Cog and me, in neurobiology and cognitive process, that it seems not unreasonable to wonder whether one or more of those differences might be the difference-maker for phenomenology. The same doesn’t hold for the difference between me and lefties and women. So I think the analogy fails.

    I do agree that the burden of proof is on those who would insist that the juice *must* still be missing. But as far as burdens of proof go, I think that in the dispute between those who think it *might* be missing (like me) and those who think it *cannot* be missing (as one might interpret some of your remarks), as long as the engineering is good enough to generate highly sophisticated and well-grounded reports about its cognitive states, the burden will fall on the latter.

  27. Dan, thanks a lot for the talk – I have very little to disagree with, unsurprisingly.
    One thing (maybe minor): I’m not sure Gibson is your ally and not your enemy. Remember that he takes affordances to be out there in the world waiting to be seen. For him, what we see are affordances, not objects – these weird entities. So what he does is exactly what your enemies do, namely project: he projects a feature of the mind (the action-oriented nature of our perception) onto the outside world (onto affordances).

    I don’t think your argument relies on this; what it relies on is the action-oriented nature of our perception, and you can get that without Gibson’s odd metaphysical claims…

    Thanks again – I’m glad you’re back in the ring defending the cause…
    Bence

  28. Dear Dan,
    I would like to support Eric’s contention – on *naturalistic* grounds. I may well be wrong but my understanding is that the claim is that Cog having phenomenology similar to ours is the default position because consciousness is just the physical goings on described by science. The problem, as I see it, is that science would quite explicitly *not* give Cog our phenomenality on the grounds of ‘function’.
    Physical science is insistent on causality being local. Any measurement, observation or experience (whether juicy or not) has to be explained by a series of entirely local interactions. Up to the boundary of the brain this is familiar. You are unaware of a picture behind a screen because no local interaction between photons and retina can occur. If we are to stick to local causal ‘physicalism’ then we must assume the rule applies within the brain. To suggest that once you get inside the brain observations can be based on non-local (i.e. distributed) events would be a stark form of dualism. No other aspect of the way biology explains the content of experiences works like this. (Even if this is what much of the literature seems to advocate.)
    Where we can ascertain, we find locality is rigorously true. Inside the brain ascertainment becomes difficult but science indicates that locality applies right down to the level of dynamic modes, like electron orbitals. Moreover, at larger scale, from split brains down to synapses, it seems to hold. So science does not recognize any prediction about observations or experiences based on ‘black-boxing’ the brain. The fact that Cog might behave like us gives no indication whatever that an experience generated within Cog would be like ours, because there is no requirement for similar local events.
    I think there is a serious problem with functionalism here. If functionalism is fine grained down to the level of locality in physics then it becomes redundant; an account of the function of a car motor that gives technical details for all parts and their relations no longer requires ‘what makes my car go’. If it is not that fine grained it is either an approximation that *might* hold if we have reason to think the innards are as we guess them to be or it runs the risk of being a wool-pulling-over-eyes exercise designed to avoid being explicit.
    My impression is that in this regard functionalism throws naturalism out with Descartes’s bathwater. Descartes was wrong about the single pineal soul and probably the uniqueness of human sentience but modern physics has pretty much vindicated his totally local dynamic theory, in a new form. Moreover, that new form is very compatible with Descartes’s caveat to Hobbes that maybe his two sorts of substance dynamics reflect some deeper unity of dynamics. Non-tautological functionalism seems to me potentially more dualist than Descartes.
    The real problem, as I see it, is that people back off looking for local events for our experiences because all the possible solutions have terrifying implications for our sense of identity. Multiple drafts seem to me a very good start, but why not multiple experiences? No-o-o-o not that! people say, but why not? As far as I can see no binary computer has a chance of a human experience because there are no *local* integrating events of adequate relevant complexity.

  29. Dan, thank you for expressing so well why talking about “feeling” without defining what it is supposed to mean is vacuous. However, I think the same charge can be leveled at the common use in philosophical and scientific discourse of the word “attention”, and after reading your Trends in Cognitive Sciences paper with Cohen I have a related question. In particular, on p. 360 of C&D you write:

    “If participants are conscious of the *identities* [emphasis mine] of all elements in the scene, as has been repeatedly claimed by dissociative theorists, then participants should instantly notice the pseudo-letters or the scrambled image. The fact that they do not suggests that participants are overestimating the contents of their own experience.”

    Why can’t a dissociative theorist argue that, whereas all elements in a complex scene are included in the participants’ conscious experience, the *identities* of all elements in the scene cannot be instantly noticed, because to notice and *identify* any particular element in the scene requires the deployment of selective attention in order to parse that element out of the global scene before its individual identity can be determined? If one were to posit that global attention (maybe diffuse reticular activating excitation) were needed to access/experience the global scene, we still would not be talking about the selective kind of attention needed to identify any particular element of the scene.

  30. ILL-JUSTIFIED TRUE BELIEF

    Organisms with nervous systems don’t just do what needs to be done in order to survive and reproduce. They also feel. That includes all vertebrates and probably all invertebrates too. (As a vegan, I profoundly hope that plants don’t feel!)

    There’s no way to know for sure (or to “prove”) that anyone else but me feels. But let’s agree that for vertebrates it’s highly likely and for computers and today’s robots (and for teapots and cumquats) it’s highly unlikely.

    Do we all know what we mean when we say organisms feel? I think we do. I have no way to argue against someone who says he has no idea what it means to feel — meaning feel anything at all — and the usual solution (a pinch) is no solution if one is bent on denying.*

    You can say “I can sorta feel that the temperature may be rising” or “I can sorta feel that this surface may be slightly curved.” But it makes no sense to say that organisms just “sorta feel” simpliciter (or no more sense than saying that someone is sorta pregnant):

    The feeling may feel like anything; it may be veridical (if the temperature is indeed rising or the surface is indeed curved) or it may be illusory. It may feel strong or weak, continuous or intermittent, it may feel like this or it may feel like that. But either something is being felt or not. I think we all know exactly what we are talking about here. And it’s not about proving whether (or when or where or what) another organism feels: it’s about our 1st-hand sense of what it feels like to feel — anything at all. No sorta’s about it.

    The hard problem is not about proving whether or not an organism or artifact is feeling. We know (well enough) that organisms feel. The hard problem is explaining how and why organisms feel, rather than just do, unfeelingly. (Because, no, introspection certainly does not tell us that feeling is whatever we are doing when we feel! I do fully believe that my brain somehow causes feeling: I just want to know how and why: How and why is causing unfelt doing not enough? No “rathering” in that!)

    After all, on the face of it, doing is all the Blind Watchmaker really needs, in order to get the adaptive job done (and He’s no more able to prove that organisms feel than any of the rest of us is).

    The only mystery is hence how and why organisms feel, rather than just do. Because doing-power seems like the only thing organisms need in order to get by in this Darwinian world. And although I no more believe in the possibility of Zombies than I do in the possibility of their passing the Turing Test, I certainly admit frankly that I haven’t the faintest idea how or why there cannot be Zombies. (Do you really think, Dan, that that’s on a par with the claim that one hasn’t the faintest idea what “feelings” are?)

    *My suspicion is that the strategy of feigning ignorance about what is meant by the word “feeling” is like feigning ignorance about any and every predicate: Whenever someone asks what “X” means, I can claim I don’t know. And then when they try to define “X” for me in terms of other predicates, I can claim I don’t know what those mean either; all the way down. That’s the “symbol grounding problem,” and the solution is direct sensorimotor grounding of at least some of the bottom predicates, so the rest can be reached by recombining the grounded ones into propositions to define and ground the ungrounded ones. That way, my doings would contradict my verbal denial of knowing the meanings of the predicates. But of course sensing need not be felt sensing: it could just be detecting and responding, which is again just doing. So just as a toy robot today could go through the motions of detecting and responding to “red” and even say “I know what it feels like to see red” without feeling a thing, just doing, so, in principle, might a Turing-Test-Passing Cog just be going through the motions. This either shows (as I think it does) that sensorimotor grounding is not the same as meaning, or, if it doesn’t show that, then someone still owes me an explanation of how and why not. And this, despite the fact that I too happen to believe that nothing could pass the Turing Test without feeling or meaning. It’s just that I insist on being quite candid that I have no idea of how or why this is true, if, as I unreservedly believe, it is indeed true. It’s an ill-justified true belief. Justifying it is the hard problem.

  31. Hi all,

    Really enjoyed the talk, Dan, and am working my way through this comment thread now. Just a few things to add, referring specifically to the background reading as well. On page 360 of the 2011 paper, the question is asked why people overestimate the richness of their conscious perceptions. There are three things I think should be mentioned in this discussion of “filling in” and “not noticing” of changes, leading to such overestimation, which weren’t covered. As a neuroscientist and pretty solid reductionist/functionalist, I agree that the question of *why* people overestimate the richness of conscious experience cannot be answered by dissociative theories at all. However, the argument could be strengthened by the inclusion of a few more findings from neuroscience. In particular, the discussion of the “perfect experiment” doesn’t mention (a) that the brain fills in the world inside the blind spot, where there are no photoreceptors whatsoever and thus nothing is actually seen; (b) that the brain also fills in color in the periphery, where there are no cones and thus color cannot physically be seen; or (c) that stroke victims who suffer from hemianopsia do not see a big black “hole” in their visual fields, but rather simply don’t see things on that side of the visual field.

    This filling in seems to be largely a consequence of prior experience: if you know something in your fovea was red 2 seconds ago, for example, then you can reasonably infer that if you moved your eyes away from it and it is now in the periphery of your visual field, it probably hasn’t changed color. Likewise, objects tend not to disappear and reappear from view, so it makes more sense for the brain to fill in the blind spot than to conclude that an object has mystically vanished. Such appeals to prior experience, memory, and even the potential neural instantiations of such prior beliefs (maybe as spontaneous activity? Although that’s a much larger debate that we needn’t go into here) may strengthen the assertion — which I wholeheartedly agree with — that access consciousness and phenomenal consciousness cannot really be dissociated. Although there are plenty of other “consciousnesses” to talk about besides visual consciousness, I think you’re right that this argument holds regardless of the modality or particular qualia being examined. Thanks for the excellent talk!

  32. Hi Stevan,

    I thought I’d stick my oar in these waters–thereby continuing our most enjoyable online exchange following your marvelous Montreal Turing Consciousness Summer School last summer.

    I don’t mind supposing that vertebrates feel–and it wouldn’t surprise me if some invertebrates do as well. But I want to echo Dan’s salutary insistence that the term ‘feel’ stands in need of explanation–and his insistence that feeling may well amount to something slightly (or even more than slightly) different for creatures of different types.

    “But we know from our own case!” This common refrain is not an answer. Some think–I believe you hold–that you know what feeling is from your case and I from mine. But whatever it is I know from my case, I take to be true of you–and I assume conversely. That is, whatever states I take myself to be in that involve feeling–what it’s like for one, conscious states, and so forth–I take it that they are states that you and Dan and other people can be, and typically are, in as well. But if that’s so, my understanding cannot be “just from my own case”; I must have some intersubjective understanding of what it is for me to be in states and others to be in states of those very same sorts. I take that to be basic even to beginning to talk about feeling.

    But now we’ve committed ourselves to a notion of feeling that admits of theoretical treatment, intersubjective application, and, with enough time and research, the discovery of whether our vertebrate cousins are in those states–and, if not states exactly like ours, then how their states resemble and differ from ours.

    So the place I get off the bus–as I am understanding you to describe it–is that we can only tell about feeling from the inside. It *must* be, I’m arguing, that whatever feelings are, we can describe them intersubjectively, apply the term ‘feeling’ (and subordinate terms for the various types of feeling) to others, including other creatures, and do straightforward science on them.

    Is feeling, as Dan urges, a manifest-image term? I suspect it is. But we can do science on plants, animals, tables, clouds and all sorts of things that we initially pick out in the manifest image.

    Feelings, unlike those other things, can be self-ascribed in a way that seems subjectively unmediated–in a way that seems subjectively automatic, and perhaps even infallible. But these mysterious properties–infallibility, immediacy, and so forth–had better be subject to informative explanation as well. First-person access is real, but it’s not magical; it’s a natural phenomenon that we must explain. Indeed, that much follows from the need for intersubjective concepts and intersubjective understanding of feeling itself.

  33. I would respectfully offer one further thought on this question of how basic and how self-evident feeling is. Why should we even assume that all our feelings are one thing, feeling per se, as Stevan Harnad seems to put it?

    When I consider my feeling, as in seeing an image, having an emotion, or of just thinking about X, or any other of the many types of states we may find ourselves in, I don’t find the same thing going on in the different cases. Yes there is the state of being aware at some level or other in each situation (since I could hardly consider anything without being aware of it), but what I find myself aware of is different in each case.

    Well, are my instances of being aware at least the same then? After all, we use the same term across a variety of cases.

    But even that can’t be the same, since sometimes being aware involves explicit self-referencing and sometimes it’s barely noticeable or not even noticed at all — except in moments of retrospective thought. And always there is the content, that of which we are aware, which is radically different across the range of awareness states we can have.

    So perhaps it’s a mistake to suppose that awareness qua feeling (or just “feeling”) is always one thing, or that to account for it in functional terms we must come up with the single function that corresponds to it. Why should being aware, or feeling, be a single function at all? Why not an array of functions combined in certain ways? Instead of THE function that just is feeling, perhaps we should be looking for the sub-systems, which combine a range of quite distinct functions within a larger array of brain processes, as the functional correspondent of the many particular instances of feeling we have?

  34. NOT WHETHER OR WHAT BUT HOW AND WHY

    The hard problem is not the problem of determining whether organisms feel, nor of determining what they feel. We can do a pretty good job at guessing that, and “doing science” on it. The hard problem is explaining why organisms feel, rather than just do — because doing seems causally sufficient for Darwinian purposes and feeling seems causally superfluous.

  35. I think we may be talking past one another, Stevan.

    I was not concerned especially with the problem of determining whether other creatures, human or nonhuman, have feelings. I was raising–following Dan–the question of what it is for a creature to have feeling. And I was arguing that it is a question that must be answered *before* one can sensibly ask and answer the question about whether other kinds of creature have feelings and, if so, how they compare to the feelings humans have. I was arguing against the assumption, common to those who think that there *is* a “Hard Problem” (which I do not), that we know well enough from our own case what feelings are and what it is for a creature to have them. I don’t think that’s something one can–in principle–tell from one’s own case, though of course the answer we give to the question about what feelings are must allow an informative explanation of our first-person access as well.

  36. Hi everyone, I think David’s point is important but I would like to stress that, though many people who think there is a hard problem start with the idea that we know about our own consciousness in a special way, there is nothing in that claim (to which I am sympathetic) that forces you into accepting a hard problem. Maybe it might force you into the claim that it *seems* like there is a hard problem but that is a very different claim, one which could be overcome with improved theory (whereas if there really is a hard problem then no amount of further (physical/functional) theorizing will do the trick). So I do think we all know from our own case that we are conscious, and I know that said consciousness is associated with certain kinds of behaviors (I notice that when I feel pain I swear, grimace, say ‘I am in pain! And it hurts so bad!!’ etc). When I see you exhibiting that kind of behavior I infer, by analogy to my own case, that it is likely accompanied by the same kind of feeling. This is why it is only probable that you are conscious (from my point of view) but still very likely once one throws in similarity of hardware and considerations about parsimony and simplicity of physical laws (this is Russell’s solution to the problem of other minds which I think works just as well here).

    It seems to me that the real culprit here is not that we each know from our own case what consciousness is (that just seems obvious) but whether there must be some kind of evolutionary function that consciousness performs. Those who think there *really* is a hard problem (as opposed to those like me who think that it merely seems like there is one) usually cite the lack of function for consciousness (this is a theme of Stevan’s work, see especially his session from CO3 and I think it is a theme in Chalmers’ work as well). If the way we understand/explain something is by locating its function (in this Darwinian sense), and if we don’t see one of those for consciousness, then we may end up concluding that it just can’t be explained via the usual methods: presto chango, et voila! A Hard Problem emerges! This is why I think David’s other work on the function of consciousness is so important. We have good reasons to doubt that consciousness has a function *in that Darwinian sense* but that doesn’t mean that it can’t be explained in broadly functional terms (as his higher-order thought theory aims to do).

  37. DOING SOME HARD THINKING

    It seems to me that there will continue to seem to be a “hard problem” as long as no one comes up with a causal explanation of how and why organisms feel rather than just do — or even just an explanation of how there could be a causal explanation of how and why organisms feel rather than just do. For, on all evidence to date, doing is sufficient and feeling is superfluous.

    Recourse to theories of higher-order thought does not seem helpful, since the question again arises: *felt* higher-order thought, or just *done* (i.e., unfelt) higher-order thought?

    (By the way: Why was it ever dubbed “the” hard problem? If it’s soluble, it’s just *a* hard problem. I’ll stick my neck out and say I think it’s insoluble (i.e., causally inexplicable), not just “hard.”)

  38. Stevan, you say “For, on all evidence to date, doing is sufficient and feeling is superfluous,” and I agree with you on that, so then we have three options: some kind of dualism (epiphenomenalism would fit, so would a couple of others), mysterianism (your view), or a naturalistic theory that makes the same prediction (higher-order theories fit this bill). So how do we choose between these theories? If we look to empirical evidence to help with the dispute then the chances for some kind of higher-order theory look good (I made this point in comment 25). If we dig in our heels and insist that no theory will ever produce what we want then there isn’t much left to say (sorta like when someone digs their heels in and insists that they don’t know what consciousness is). You can do that, but it starts to look like a gambit and nothing more.

    You then say, “Recourse to theories of higher-order thought does not seem helpful, since the question again arises: *felt* higher-order thought, or just *done* (i.e., unfelt) higher-order thought?”

    If it is true that consciousness just is an appropriate higher-order thought (something that certainly seems possible and might even be actual!) then such thoughts can’t be unfelt. So what we need from you is a reason to think that consciousness doesn’t (or can’t possibly) consist in an appropriate higher-order thought. And you cannot say ‘because consciousness is superfluous’ because we agree with you on that: evolution need only work out the doings; we give another story about how consciousness comes about. So, is there any other reason that you can give? If not, then it looks like you are simply refusing to take the options seriously. I am reminded of a passage from Book I of The Republic where Socrates says,

    [speaking to Thrasymachus] You ask someone for a definition of twelve and add, “I don’t want to be told that it’s twice six, or three times four, or six times two, or four times three; that sort of nonsense won’t do.” You know perfectly well that no one would answer you on those terms. (This person) would reply, “What do you mean, Thrasymachus; am I to give none of the answers you mention? If one of them happens to be true, do you want me to give a false one?”

  39. Hi Richard,

    I agree that Stevan seems unwilling to consider some proffered explanations of feels, e.g., HOTs or representational limitations and requirements (my current favorite). My diagnosis is that this is because he insists on there being a causal explanation of feels, and since there isn’t one handy, he declares the hard problem is insoluble (#37). I agree that feels likely aren’t caused, but there are other explanatory entailments besides causal entailments that might get us in the vicinity of qualia (see #18).

    It’s pretty clear that the cognitive functions associated with consciousness were adaptive and naturally selected for, in which case if “we have good reasons to doubt that consciousness has a function *in that Darwinian sense*” (and I agree), then consciousness isn’t identical to its associated functions. It’s rather the private, subjective, qualitative and closely parallel accompaniment to them. On the face of it, this puts consciousness outside the normal scientific explanatory practice that deals in public objects, quantifiable properties, and causes and effects. But it doesn’t mean that it’s beyond naturalistic explanation, only that the usual methods might not apply. I don’t think we can be sure that we have all the explanatory options worked out.

  40. I have the impression that there is some confusion about what I am arguing. Let me summarize. I am *not* (!) at all interested in the epistemology of mental states or feelings or the like.

    I am interested in what it is for a state to be a feeling–a conscious qualitative mental state. That’s a question about the nature of feeling–the nature of those states. We can’t do science of feelings until we know–in an ordinary, quotidian way–what feelings are. And we can’t tell who has feelings until we know what they are.

    But forget the science–and the epistemology. Again: Those are not my concern. Focus on the question of what feelings–conscious qualitative states–are. That is not a question to be dismissed; it’s an important question, to which we must give at least a folk-theoretical answer.

    And answering by saying “I know from my own case” is totally uninformative.

  41. David, just in case your last comment was aimed in my direction (I hope not!) let me just clarify that that was exactly what I was trying to do. Consciousness is that thing which we are (or at least seem to be) directly acquainted with in first-personal ways. This (seeming) acquaintance sets the parameters for the science of consciousness by identifying the target we are all interested in and identifying some (seeming) properties of the thing that must be accounted for. Whatever else consciousness is, this is the way we identify the thing we are trying to explain; from there we can move to trying to nail down its true nature, but we have to start by picking out the thing we are interested in trying to explain in the first place. And, arguably, this is an ordinary, quotidian way of knowing what feelings are (and is also arguably a part of the folk-theoretical answer). This does not commit us to a hard problem unless we make the further assumption that this kind of access reveals the essence of consciousness or that there is, in principle, no other way of knowing about it, but that is a much stronger claim than the common-sense dictum that in the first instance we know about it from our own case. While not totally informative, it is still pretty informative.

  42. No, Richard; my effort at clarifying was not meant about your posts.

    But let me react to what you say. I think a certain amount of damage is done in consciousness studies by treating the subject matter as this single, uniform phenomenon, consciousness–sometimes even devolving into ‘consciousness’ as if it were a mass noun, consciousness as a stuff.

    I think it’s better to talk about particular conscious states. That leaves less room for false moves–like the so-called hard problem.

    I guess the topic of Dan’s marvelous talk is Ned’s P-A distinction, and assuming that conscious access isn’t a big issue (though I would raise questions there too), the main topic is the special case of qualitative mental states’ being conscious. So I was entering a plea: Say what conscious qualitative states are. That’s all. Don’t rest with Block’s Louis Armstrong quip: “If you gotta ask, you ain’t never gonna get to know” (1978, §1.3). That’s just giving in to the idea that conscious qualitative states are ineffable. Say what they are.

    I myself think that Dan’s heterophenomenological method (Consciousness Explained, ch. 4) points in exactly the right direction.

  43. THRASYMACHUS, CAUSAL EXPLANATION AND HERMENEUTICS

    I don’t think there’s anything the least bit “Thrasymachean” about noting that feeling seems to be a biological trait, like all others, and hence that it seems reasonable to expect a causal explanation of why organisms have that trait.

    To reject causal explanations in which feeling obviously is *not* causally necessary but simply assumed to be present is not to reject all attempts at causal explanation out of hand.

    (The reason I happen to think this “hard” problem is insoluble is that the default — and hence non-Thrasymachean — hypothesis of psychokinetic dualism (according to which feeling is an independent fundamental causal force in the universe, alongside the other four) is obviously false on all evidence to date, and that that seems to mean that there is no “room” left for feeling as a cause, hence no prospect of a causal explanation, just hermeneutics, as in the Higher-Order Theories of thought.)

  44. Of course you don’t Stevan (Thrasymachus didn’t either! ;).

    You keep insisting that consciousness needs to be some kind of fundamental causal force in the universe, and since you can’t see how it could be, you declare a hard problem. But what reason do you have to insist on this without argument? On the kinds of views I alluded to, consciousness has only very limited causal powers (to do with reporting, etc) and is definitely not a fundamental causal force in the universe (but can still be accounted for in causal terms, that is, we can explain how consciousness arises, and in fact how the brain causes it to arise in virtue of instantiating the proper kinds of representations). Too bizarre, you say? But there is empirical evidence that it is in fact true, a point in favor of the views that predict it. So, again, the burden of proof is on you to give some kind of argument here, but you simply refuse to acknowledge or meet it and instead simply assert that every possible theory has to be false because consciousness isn’t a fundamental cause. Very Thrasymachean indeed! To repeat: what we need is some other kind of reason, since the thing you keep insisting on is common ground between us.

  45. By the way, I would also dispute this claim: “psychokinetic dualism (according to which feeling is an independent fundamental causal force in the universe, alongside the other four) is obviously false on all evidence to date”. But this probably (no, definitely) isn’t the right place to engage in that argument!

  46. I would like to echo Tom Clark’s #39 but with a slight shift of emphasis. Tom notes that ‘normal scientific explanatory practice […] deals in public objects’ but is this right? What is a ‘public object’? (I am not taking Tom to task here, just the standard account.)

    If a young scientist were marooned on a desert island for 60 years and wanted to develop new theories, he would still need rulers and clocks to calibrate, equations to solve, and a notebook to write in using scientific language, all in order to check that his theories correctly predicted the content of his experiences. Scientific theory does not need to be accessible to more than one person. It covers rainbows, which are based on individual relations to rain and sun, not objects. In fact modern science more or less discards the idea of objects and commits itself to dynamic relations, as James Ladyman points out. Moreover, it becomes increasingly clear that nothing really ‘is the case’ unless it is the case *to* something else – it has ‘passed on’ some causal impact.

    The upshot of all this seems to me to be relevant to Stevan Harnad’s complaint. Why is there feeling as well as just doing? The answer would seem to be that the only definition we have of doing is the sort of dynamic relation that, if we put someone in the right place, will engender a feeling of a predicted sort. Doing without feeling would be a set of dynamic relations to nothing. We have already got rid of Johnson’s ‘matter’; we would be left with change, but change of nothing in particular. We think dynamic relations go on in our absence, but the only criterion of the ‘physical’ we have is that it is a string of dynamics that will determine certain feels when things are set up right. So there is no hard problem, because a zombie is not ‘physical’ inside by the only gold standard we have. Or at least only with unparsimonious caveats about the gold standard applying patchily.

  47. THAT WEASEL-WORD AGAIN…

    @Richard Brown: “consciousness has only very limited causal powers (to do with reporting, etc) and is definitely not a fundamental causal force in the universe (but can still be accounted for in causal terms, that is, we can explain how consciousness arises, and in fact how the brain causes it to arise in virtue of instantiating the proper kinds of representations).”

    Try that argument again without the help of the weasel-word “consciousness” and its hopeless equivocation between “access” and “feeling” (the theme of Dan’s video).

    1. reporting (doing) or *felt* reporting? (but then why felt?)

    2. representing (doing) or *felt* representing? (but then why felt?)

    — Thrasymachus

  48. Stevan, that wasn’t an argument. It was a summary of the commitments of a certain kind of theory. But the problem here is very clearly illustrated by your #2 above. Some representing may be merely doing, but it doesn’t follow that all representing is! On the view I was summarizing, the difference consists in this: felt representing (i.e. consciousness) occurs when one represents oneself as being in some other representational state in a way that seems subjectively unmediated (by the way, I am not suggesting this is the only possible answer to your question, only the one I know most about and which I think is best supported by the evidence). Why believe this? Well, as I have said already (and linked to a couple of papers where this is actually argued for instead of summarized), we have pretty good empirical evidence that this kind of view might be true, and it fits with our folk-theoretical outlook. There is no equivocation here; the claim is that feeling (i.e. consciousness) consists in a certain kind of cognitive access. What’s the argument against this view? That there can be these kinds of representations without feeling? That is called begging the question. Why can there be these kinds of representations without feeling? Because these kinds of representations are not a fundamental causal force in the universe? We agree, so that can’t be used as an argument against the view. Anything else? Not so far.

    But we have been over all of this before and we should stop threadjacking this discussion and let Dan respond (if he wants to).

  49. Thanks, everyone, for these stimulating reactions. I’m afraid this response may sound rushed and insufficiently patient. I plead nolo contendere. I’ve been at TED for the last three days and it’s late at night on the last day of the discussion, and I am hastily dashing out a few reactions.

    Eric Schwitzgebel says the issue is, as Ned would have it, between ‘cognitive access’ and ‘non-cognitive access’. I don’t see any issue there at all. OF COURSE there is both ‘cognitive’ and ‘non-cognitive’ (affective, visceral, motivational . . . . ) access, and all together this “access” adds up to consciousness—but access by whom? Not by some homunculus in a Cartesian Theater. By the, um, person, the whole person. But of course that is not a well-defined or delineated recipient; it’s an irresistibly convenient oversimplification, like a center of gravity. ROUGHLY speaking, if “the person” can’t reflect on, talk about, report, react multifariously to some contentful episode, we say the person is not conscious of it, even though it may have detectable effects on the person, making her heart race, her blood pressure surge, her GSR rise, etc., etc.—‘non-cognitive access’—or prime various cognitive decisions by the person, without her otherwise being able to report on, reflect on, or react to it.

    This is why I say consciousness is more like “fame in the brain” than like television. CONSCIOUSNESS IS NOT A MEDIUM. There is no second transduction, turning the neural spike trains into, um, figment, translating voltage into qualia, properties enjoyed by the witness in the Cartesian Theater. So why does there seem to be a theater? I’ve tried to explain this illusion.

    That’s all from the “third-person point of view”. When you insist on going to the first person point of view, I am happy to go along with you, but I observe that all this gets us is a wealth of further third-person data—heterophenomenological data about what Stevan believes, and what Eric believes, etc., etc. All of that needs to be accounted for, but it does not automatically get CREDITED. I know this sounds, well, rude, but that is what a proper scientific study of consciousness must insist on:
    Thank you, Stevan, for your repeated insistence that you know what feeling is, and that you feel, and that you’re pretty sure that mammals feel, and hope that plants don’t feel, and doubt that any computers now feel. All these convictions of yours are data points, and I accept the burden of explaining them. I’ve sketched a causal account of why you, like many others, THINK these are truths: you’re taken in by something like a Humean strange inversion. It’s like being dead sure that cuteness is an intrinsic property of babies, or that colors are intrinsic, non-relational properties of physical objects. I may be wrong that these are explainable illusions, but you must admit, I think, that my view is one that has both precedents and at least superficial plausibility. And of course it explains why I simply categorize your “objections” to my view as further data points, nothing I am obliged to rebut.

    Bence:
    I agree that I don’t want to get tangled up in Gibsonian metaphysics, but I think it’s OK to talk of affordances without committing those sins. Affordances are no more metaphysically problematic than opportunities (or voices or dollars or colors). A topic for another occasion.

    Megan:
    I don’t accept your friendly amendments re “filling in.” I continue to argue that that idiom seriously misleads neuroscientists. The brain does not engage in “filling in”; it engages in “finding out” WITHOUT “filling in.” That’s a long story, in CONSCIOUSNESS EXPLAINED.

    Jonathan:
    “To suggest that once you get inside the brain observations can be based on non-local (i.e. distributed) events would be a stark form of dualism.” I strongly disagree, if I understand you. It is this reliance on localism that leads people like Crick and Koch to maintain bizarre views about the NCC (such as, for instance, the view that a bit of visual cortex kept alive in a petri dish might be an instance of consciousness of red).

    Arnold:
    I take your point, but this is a delicate issue of who has the burden. I don’t know (yet) what is being asserted when someone insists that some elements (e.g., letters) are “in consciousness” but not capable of being “identified.”

    Stevan:
    “Do we all know what we mean when we say organisms feel? I think we do.” And I think we don’t. An impasse, but it seems quite obvious to me that your insistence that we do is a defense of “common sense” and the manifest image that has all the earmarks of an ordinary-language-philosophy defense of ignoring scientific possibilities. So I agree with David’s assessment of your position—if I understand him correctly.
