MindMods Blog

  1. New Study Uses Biofeedback to Predict a Gamer's Gameplay



    Hungarian researchers are using GSR Biofeedback in a new video game study.

    Laszlo Laufer and Bottyan Nemeth from the Budapest University of Technology and Economics
    are using GSR Biofeedback (Galvanic Skin Response, or skin conductance) in a study in which they've shown
    that a gamer's actions can be predicted up to two seconds before they occur.

    Laufer says, "There are quite a few situations in life where there would be a need to provide support for making a good decision
    at a good time. I have military applications (pilots) in mind, but surely we can find others as well." He also sees it being used in
    video games: "Another application I have in mind could be called a frustration game." Such a game could detect when a player
    was about to act and change the gameplay to throw the player off. This kind of technology could easily be integrated into game
    controllers.

    This type of technology (GSR Biofeedback) should be used in more video games, though I'm not too sure it would be very
    successful if used in a manner that frustrates players! It could definitely be used to help speed up a person's reaction
    time while playing a game.
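
    The prediction algorithm itself isn't described here, but the basic idea - watch for an anticipatory rise in skin conductance just before a voluntary action - is easy to sketch. Below is a minimal, hypothetical Python illustration; the sampling rate, window length, and slope threshold are invented numbers, not the researchers' values:

        import numpy as np

        def action_imminent(gsr_window, fs=10.0, slope_threshold=0.05):
            """Guess whether an action is about to happen, based on a rising
            trend in recent skin-conductance samples (microsiemens).
            All constants are illustrative, not from the study."""
            t = np.arange(len(gsr_window)) / fs
            # Least-squares slope of conductance over the window.
            slope = np.polyfit(t, gsr_window, 1)[0]
            return slope > slope_threshold

        # A window that ramps up over two seconds trips the detector.
        window = np.linspace(2.0, 2.3, 20)   # 2 s of samples at 10 Hz
        print(action_imminent(window))       # True

    A detector like this firing a second or two early is all a 'frustration game' would need in order to change the gameplay before the player commits to a move.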

    The game used for the research project is called YetiSports JungleSwing and can be found here at YetiSports.org

  2. Ambient Corporation's New Human-Computer Interface called Audeo Intercepts Words When 'Thought'

    A company called Ambient has developed a device that intercepts signals sent to the voice box from the brain via a sensor-laden neckband. They claim to be able to decode these signals and match them to a pre-recorded series of words - even when the words are never voiced out loud. These 'words' can then be used to control things via a computer.

    They are currently using this system to steer a motorized wheelchair, allowing a paralysed person to navigate without moving or speaking out loud. Ambient is developing the technology with the Rehabilitation Institute of Chicago to help people with neurological problems operate computers and other electronic equipment despite impaired muscle control.

    This is the first time (that I know of, anyway) that a device has been able to convert electrical impulses from the brain into actual words. This is different from traditional EEG, which measures brainwaves: the Audeo analyzes signals outside the brain, on their way to the larynx.
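
    Ambient hasn't published how its decoder works, but the description - matching intercepted signals against a pre-recorded series of words - suggests simple template matching. Here is a hypothetical sketch in Python; the vocabulary, signal shapes, and similarity measure are all my own stand-ins:

        import numpy as np

        def classify_word(signal, templates):
            """Pick the vocabulary word whose pre-recorded template best
            matches the incoming sensor window (cosine similarity).
            Illustrative only -- Ambient's actual decoder is unpublished."""
            def unit(x):
                x = x - x.mean()
                n = np.linalg.norm(x)
                return x / n if n else x
            scores = {word: float(np.dot(unit(signal), unit(tpl)))
                      for word, tpl in templates.items()}
            return max(scores, key=scores.get)

        # Toy templates: one fixed-length waveform per vocabulary word.
        rng = np.random.default_rng(0)
        templates = {w: rng.standard_normal(100) for w in ("left", "right", "stop")}
        noisy = templates["stop"] + 0.3 * rng.standard_normal(100)
        print(classify_word(noisy, templates))   # stop

    A fixed-vocabulary matcher like this is far easier than open speech recognition, which fits the wheelchair use case: a handful of navigation commands is enough.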

    Audeo is currently selling a developer kit that allows researchers to develop new applications with their technology. If this works as well as they claim, the possibilities are endless.


    Check out the rest of this article for a video presentation of the device.

  3. New EEG System Develops Visual Images from Brain Activity

    Found on the Neurofeedback on the Brain Blog:


    "BrainPaint extracts a new metric on the complexity of the EEG and feeds that back visually in a language the brain functions in. Our brains and BrainPaint are complex systems -- BrainPaint takes information communicated directly from the brain and creates real-time fractal images that the brain appears to understand."
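
    BrainPaint's complexity metric is proprietary, so the following is only a guess at the general shape of such a system: estimate some complexity measure on an EEG window (spectral entropy is used here as a stand-in) and let it steer the parameters of a fractal rendered as feedback. A hypothetical Python sketch:

        import numpy as np

        def spectral_entropy(eeg):
            """Normalized spectral entropy of one EEG window -- a stand-in
            for BrainPaint's unpublished complexity metric."""
            psd = np.abs(np.fft.rfft(eeg)) ** 2
            p = psd / psd.sum()
            p = p[p > 0]
            return float(-(p * np.log(p)).sum() / np.log(len(psd)))

        def julia_frame(complexity, size=64, iters=40):
            """Map the complexity score onto a Julia-set constant and
            render one escape-time frame (the mapping is arbitrary)."""
            c = complex(-0.8 + 0.6 * complexity, 0.156)
            y, x = np.mgrid[-1.5:1.5:size * 1j, -1.5:1.5:size * 1j]
            z = x + 1j * y
            frame = np.zeros(z.shape, dtype=int)
            for i in range(iters):
                mask = np.abs(z) <= 2.0
                z[mask] = z[mask] ** 2 + c
                frame[mask] = i
            return frame

        eeg = np.random.default_rng(1).standard_normal(512)  # fake EEG window
        print(julia_frame(spectral_entropy(eeg)).shape)      # (64, 64)

    The real system presumably runs this loop continuously, so the imagery shifts as the EEG does - that's the "feedback" in neurofeedback.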


    An Image Gallery of BrainPaintings can be found here

    More on Bill Scott's EEG biofeedback system here

  4. Comparing the 3 Neurofeedback Gaming Interfaces

    Emotiv's EPOC, NeuroSky's ThinkGear, and OCZ's NIA


    The EPOC from Emotiv

    Release Date: Summer 2009
    Number of Electrodes: 16
    Electrode Type: Pure Neural Signals / EEG
    Movements: Head Rotation through two-axis gyros
    Estimated Cost: $299
    SDK Available? YES

    The ThinkGear from NeuroSky

    Release Date: OEM only, via 3rd-party software/game developers
    Number of Electrodes: 1
    Electrode Type: Pure Neural Signals / EEG
    Movements:
    Estimated Cost: Available only to companies developing games or software.
    SDK Available? YES

    The NIA from OCZ

    Release Date: May 2008
    Number of Electrodes: 3 (Front)
    Electrode Type: Uses Biopotentials from forehead. Mixture of muscle, skin & nerve activity (sympathetic and parasympathetic)
    Movements: Multiple mapped profiles
    Estimated Cost: $160
    SDK Available? NO

  5. Scientists Mimic Out-Of-Body Experience using Technology

    Prof. Olaf Blanke and his colleagues at the Laboratory of Cognitive Neuroscience at EPFL in Switzerland have been researching the neural correlates of out-of-body experiences since at least 2002. This new study is very unusual: they claim to be able to produce an out-of-body experience when the wearer of special goggles is shown a projected image of themselves while being poked with a stick.

    Out-of-body experiences are most common in people who undertake intense meditation practice, experience sleep paralysis, or have suffered certain types of head injury. Research such as this strives to discover exactly how the brain creates the out-of-body sensation.

    It is arguable whether these experiences reproduce bona fide OBEs, but it is an interesting effect nonetheless.

    NewScientist just posted a video to YouTube featuring Blanke's group inducing out-of-body experiences:

    The out-of-body experiments were conducted by two research groups using slightly different methods intended to expand the so-called rubber hand illusion.

    In that illusion, people hide one hand in their lap and look at a rubber hand set on a table in front of them. As a researcher strokes the real hand and the rubber hand simultaneously with a stick, people have the vivid sense that the rubber hand is their own.

    When the rubber hand is whacked with a hammer, people wince and sometimes cry out.

    The illusion shows that body parts can be separated from the whole body by manipulating a mismatch between touch and vision. That is, when a person’s brain sees the fake hand being stroked and feels the same sensation, the sense of being touched is misattributed to the fake.

    The new experiments were designed to create a whole body illusion with similar manipulations.

    In Switzerland, Dr. Olaf Blanke, a neuroscientist at the École Polytechnique Fédérale in Lausanne, Switzerland, asked people to don virtual reality goggles while standing in an empty room. A camera projected an image of each person taken from the back and displayed 6 feet away. The subjects thus saw an illusory image of themselves standing in the distance.

    Then Dr. Blanke stroked each person’s back for one minute with a stick while simultaneously projecting the image of the stick onto the illusory image of the person’s body.

    When the strokes were synchronous, people reported the sensation of being momentarily within the illusory body. When the strokes were not synchronous, the illusion did not occur.

    In another variation, Dr. Blanke projected a “rubber body” — a cheap mannequin bought on eBay and dressed in the same clothes as the subject — into the virtual reality goggles. With synchronous strokes of the stick, people’s sense of self drifted into the mannequin.

    A separate set of experiments was carried out by Dr. Henrik Ehrsson, an assistant professor of neuroscience at the Karolinska Institute in Stockholm, Sweden.

    Last year, when Dr. Ehrsson was, as he says, “a bored medical student at University College London”, he wondered, he said, “what would happen if you ‘took’ your eyes and moved them to a different part of a room? Would you see yourself where your eyes were placed? Or from where your body was placed?”

    To find out, Dr. Ehrsson asked people to sit on a chair and wear goggles connected to two video cameras placed 6 feet behind them. The left camera projected to the left eye. The right camera projected to the right eye. As a result, people saw their own backs from the perspective of a virtual person sitting behind them.

    Using two sticks, Dr. Ehrsson stroked each person’s chest for two minutes with one stick while moving a second stick just under the camera lenses — as if it were touching the virtual body.

    Again, when the stroking was synchronous people reported the sense of being outside their own bodies — in this case looking at themselves from a distance where their “eyes” were located.

    Then Dr. Ehrsson grabbed a hammer. While people were experiencing the illusion, he pretended to smash the virtual body by waving the hammer just below the cameras. Immediately, the subjects registered a threat response as measured by sensors on their skin. They sweated and their pulses raced.

    They also reacted emotionally, as if they were watching themselves get hurt, Dr. Ehrsson said.

    People who participated in the experiments said that they felt a sense of drifting out of their bodies but not a strong sense of floating or rotating, as is common in full-blown out-of-body experiences, the researchers said.

    The next set of experiments will involve decoupling not just touch and vision but other aspects of sensory embodiment, including the felt sense of the body position in space and balance, they said.


    Link to the New York Times Article


    Here are some of the previous studies involving Prof. Olaf Blanke on out-of-body experiences - the links are to PDF documents.


    Blanke O, Thut G (2006) Inducing out-of-body experiences. In: Tall Tales (ed. G. Della Sala). Oxford: Oxford University Press.

    Blanke O, et al. (2005) Linking OBEs and self processing to mental own-body imagery at the temporo-parietal junction. J Neurosci 25:550-557.

    Blanke O, Arzy S (2005) Out-of-body experience, self, and the temporoparietal junction. Neuroscientist 11:16-24.

    Buenning S, Blanke O (2005) The out-of-body experience: precipitating factors and neural correlates. Prog Brain Res 150:333-353.

    Blanke O (2004) Out-of-body experiences and their neural basis. Brit Med J 329:1414-1415.

    Blanke O, et al. (2004) Out-of-body experience and autoscopy of neurological origin. Brain 127:243-258. [Editorial: Frith C (2004) Brain 127:2.]

    Blanke O (2004) The neurology of out-of-body experiences. Proceedings of the 5th Symposium of the Bial Foundation, Vol 5:193-218.

    Blanke O (2007) From out-of-body experiences to the neural mechanisms of self-consciousness. Companion to Consciousness, Oxford University Press. (in press)

    Blanke O, Arzy S, Landis T (2007) Illusory perception of body and self. In: Handbook of Neurology (ed. G. Goldenberg). (in press)

    Easton S, Blanke O, Mohr C (2007) A putative implication for fronto-parietal connectivity in out-of-body experiences. Cortex. (in press)

    Blanke O, Castillo V (2007) Clinical neuroimaging in epileptic patients with autoscopic hallucinations and out-of-body experiences. Case report and review of the literature. Epileptologie 24:90-96.

    mmm mmmm - Good Stuff!

    More articles about this at ArsTechnica, NeuroPhilosophy, MindHacks, and Physorg

  6. Scientists use Pac-Man, Electric Shocks and Neuroimaging to study Fear in the Brain

    Scientists from the Wellcome Trust claim to have identified for the first time what happens in our brain in the face of an approaching threat. They measured brain activity using fMRI while subjects played a game similar to Pac-Man and received an electric shock when caught by the video game predator.

    They found that activity in the ventromedial prefrontal cortex (behind the eyebrows) increased when the enemy was in the distance - this part of the brain is active when one is planning how to respond to a threat. As the video game enemy approached, the predominant activity shifted to the periaqueductal grey - the part of the brain responsible for fight or flight and for preparing to react to pain.

    The title of their study is 'Free Will Takes Flight', as it shows that we act more on impulse when a threat increases.

    Abstract can be found here

    Article in Science Magazine can be found here

    From Science, 24 August 2007: Vol. 317, No. 5841, pp. 1043-1044

    Neuroscience: The Threatened Brain

    Stephen Maren

    The world is a dangerous place. Every day we face a variety of threats, from careening automobiles to stock market downturns. Arguably, one of the most important functions of the brain and nervous system is to evaluate threats in the environment and then coordinate appropriate behavioral responses to avoid or mitigate harm.

    Imminent threats and remote threats produce different behavioral responses, and many animal studies suggest that the brain systems that organize defensive behaviors differ accordingly (1). On page 1079 of this issue, Mobbs and colleagues make an important advance by showing that different neural circuits in the human brain are engaged by distal and proximal threats, and that activation of these brain areas correlates with the subjective experience of fear elicited by the threat (2). By pinpointing these specific brain circuits, we may gain a better understanding of the neural mechanisms underlying pathological fear, such as chronic anxiety and panic disorders.

    To assess responses to threat in humans, Mobbs and colleagues developed a computerized virtual maze in which subjects are chased and potentially captured by an "intelligent" predator. During the task, which was conducted during high-resolution functional magnetic resonance imaging (fMRI) of cerebral blood flow (which reflects neuronal activity), subjects manipulated a keyboard in an attempt to evade the predator. Although the virtual predator appeared quite innocuous (it was a small red circle), it could cause pain (low- or high-intensity electric shock to the hand) if escape was unsuccessful. Brain activation in response to the predatory threat was assessed relative to yoked trials in which subjects mimicked the trajectories of former chases, but without a predator or the threat of an electric shock. Before each trial, subjects were warned of the contingency (low, high, or no shock). Hence, neural responses evoked by the anticipation of pain could be assessed at various levels of threat imminence not only before the chase, but also during the chase when the predator was either distant from or close to the subject.

    How does brain activity vary as a function of the proximity of a virtual predator and the severity of pain it inflicts? When subjects were warned that the chase was set to commence, blood oxygenation level-dependent (BOLD) responses (as determined by fMRI) increased in frontal cortical regions, including the anterior cingulate cortex, orbitofrontal cortex, and ventromedial prefrontal cortex. This may reflect threat detection and subsequent action planning to navigate the forthcoming chase. Once the chase commenced (independent of high- or low-shock trials), BOLD signals increased in the cerebellum and periaqueductal gray. Activation of the latter region is notable, as it is implicated in organizing defensive responses in animals to natural and artificial predators (3, 4). Surprisingly, this phase of the session was associated with decreased activity in the amygdala and ventromedial prefrontal cortex. The decrease in amygdala activity is not expected, insofar as cues that predict threat and unpredictable threats activate the amygdala (5, 6).

    However, activity in these brain regions varied considerably according to the proximity of the virtual predator and the shock magnitude associated with the predator on a given trial (see the figure). When the predator was remote, blood flow increased in the ventromedial prefrontal cortex and lateral amygdala. This effect was more robust when the predator predicted a mild shock. In contrast, close proximity of a predator shifted the BOLD signal from these areas to the central amygdala and periaqueductal gray, and this was most pronounced when the predator predicted an intense shock. Hence, the prefrontal cortex and lateral amygdala were strongly activated when the level of threat was low, and this activation shifted to the central amygdala and periaqueductal gray when the threat level was high.

  7. Is Consciousness Definable? Video from PBS

    PBS's Closer to Truth features Christof Koch, Leslie Brothers, Joseph E. Bogen, and Stuart Hameroff trying to answer this question. The four scientists face the same question but give four different answers.

    Is Consciousness Definable?


    One problem is that there are too many definitions! And getting these four guests to agree on what consciousness is and what causes it is a fun but hopeless task that is revelatory at the same time. These four leading brain scientists couldn't even agree at what level a simple "memory" was stored, whether as a gross "brain circuit," at the synapse between nerve cells, or in the microstructure of the nerve cells as some sort of quantum effect. But why should it be any different now? Philosophers have debated the "mind-body problem" and the existence of "free will" for thousands of years. However, never before have we been in a position to examine the brain with such precision. Even as we begin to understand the deep science that underlies our cognitive processes, there is no letup in arguments about whether we are anything other than automata, just reacting to stimuli -- vastly more complex than a bacterium to be sure -- but fundamentally little different.


    Although this spirited and highly qualified group manages to disagree on just about everything, in the midst they give off a tremendous amount of information about the key issues involved in understanding consciousness today: Are our "minds" just the artificial integration of multiple brain systems? Are our feelings of self, that unique personal sense of mental "qualia" (e.g., does the color "red" look the same to you as it does to me?), anything other than an "epiphenomenon," seemingly real but in reality an illusion? How do firings of neurons, or ultimately vibrations of atoms, emerge into human self-awareness? Psychiatrist/author Leslie Brothers firmly believes that there is something of the mind that is not in the brain, but that it is not spirit or soul. To her, the seat of consciousness resides in the social interaction of living things, between brain and brain, in society. Says Brothers, without others to reflect ourselves off of, there would be no consciousness.

    Click 'Read More' below to download the video

    Description: Closer to Truth brings together leading scientists, scholars and artists to debate the fundamental issues of our times. Joining host Robert Kuhn are Leslie Brothers, Psychiatrist; Joseph E. Bogen, Neurosurgeon; Stuart Hameroff, Anesthesiologist; and Christof Koch, Computation and Neural Systems.

    Video in Quicktime or Windows Media format here

  8. MindMods CogSciTech Consciousness Paper Posting #2

    What is consciousness for? A History

    This paper is called "Consciousness Redux" and is something of a history of theoretical positions on the function of consciousness. It was written by George Mandler of the University of California & University College London.

    Consciousness Redux

    George Mandler

    University of California, San Diego and University College London

    Copyright (C) 1993 George Mandler


    I start with a review of 20 years of proposals on the functions of consciousness. I then present a minimal number of functions that consciousness subserves, as well as some remaining puzzles about its psychology. In the process I stress a psychologist's functional approach, asking what consciousness is for. The result is an attempt to place conscious processes within the usual flow of human information processing.

    Twenty years of - progress?

    Twenty years ago I was writing a paper for a conference on information processing and cognition organized by Robert Solso at Loyola University in Chicago. The paper asserted in its title that consciousness was respectable, useful, and probably necessary.[1] As late as 1975 the topic and the assertion of consciousness' respectability, utility, and necessity were still beyond the pale for many of my peers.[2] Like other taboo topics recently rehabilitated, the mention of consciousness still occasioned embarrassed looking-at-the-ceiling and examining-of-cuticles, or --- from the bolder ones --- sage advice that I was foolish/misguided/downright-wrong to approach such a can of worms. Ulric Neisser had tried to come to terms with consciousness in 1963 in the psychoanalytic context, but tended to avoid it in his 1967 book that defined parts of the new cognitive psychology. By 1970 consciousness started to become respectable and useful in human information processing systems, in order to accommodate serial processing (Atkinson & Shiffrin, 1968), to account for attention in choice and rehearsal (Posner & Keele, 1970; Posner & Boies, 1971), and to select and set goals for action systems (Shallice, 1972).

    Since then the proliferation of interest in consciousness has been truly awesome. Philosophers --- as usual on the hunt for a juicy topic --- joined the fray, and as of 1993 no respectable cognitive scientist can be without a position on consciousness. An informal recent survey suggests that for the N (a large number) proponents of theoretical positions on consciousness, there are now N+1 (a larger number) theoretical positions. And there is little sign of any centripetal tendency to find a core agreement among those N+1 positions. As a result, I shall first describe briefly my own development from the early speculations to a more recent position (Mandler 1975a, 1984a, 1984b, 1985, 1986, 1988, 1989, 1992, 1993),[3] followed by a discussion of several central notions: conscious construction, the feedback function of consciousness, and the seriality and limited capacity of consciousness. I will then attempt to spell out some minimal requirements for a conscious mechanism together with a sampling of several puzzles of consciousness that need more work. Finally I shall return to the big picture, offer some speculations about the uses of "mind" and end with a defense of a functionalist psychological approach.


    Functions of consciousness: Constructions and their consequences

    My first substantial paper on consciousness started with some musings about the possible functions of consciousness, which led to:

    Five functions in search of a mechanism

    I started with a psychologist's functional approach, i.e., by considering the functions and roles that consciousness apparently fills in mental life, and then proceeding to imagine what such a mechanism would have to be like in order to accomplish all these functions.[4] All my relevant efforts have appealed to consciousness in terms of immediate experience, whether of extra- or intra-psychic events or reflective. The question then is why some mental contents appear in that conscious guise. I am concerned with certain functions of human cognition that map on to consciousness --- when mental contents are in the conscious state they tend to display such functions as seriality and limited capacity and are likely to prime underlying representations.

    In the 1975a paper I listed five adaptive functions that seemed to me at the time to require a conscious mechanism:

    1. Choice and the selection of action - short-term actions are reviewed and selected, and possible and desirable outcomes and possible alternative actions are consciously represented.

    2. Modification and interrogation of long-range plans - alternative actions for long-term plans are considered, and different substructures and outcomes are evaluated.

    3. Retrieval from long-term memory - explicit remembering is achieved, including the use of simple addresses to access complex structures.

    4. Construction of storable representations of current activities and events - (i) social/cultural products are stored and retrieved, in part by the use of language as an effective instrument for communication and storage, and (ii) information is stored and retrieved for future comparisons of present and past events.

    5. Troubleshooting - representations and structures are brought into consciousness when repair or emergency action is necessary on various - usually unconscious - structures.

    Over the next ten years I reconsidered more precisely which functions need consciousness, i.e., which could not be performed without some mechanism like consciousness. I also focused more on automatic and simpler processes, though I did pursue problems of consciousness and memory in depth (Mandler, 1989). In a new book on emotion (Mandler, 1984b) and a small volume on cognitive psychology (Mandler, 1985, Ch. 3), I summarized a general view of unconscious and conscious processes:

    First, consciousness is limited in capacity and it is constructed so as to respond to the current situational and intentional imperatives.

    Second, unconscious representations and processes generate all thoughts and processes, whether conscious or not; the unconscious is where the action is!

    Third, all underlying (unconscious) representations are subject to activations, both by external events and by internal (conceptual) processes. The three levels of representation are: unconscious and not recently activated; unconscious but activated (essentially the same as Freud's preconscious); and conscious.

    Fourth, activated structures (e.g., schemas) are necessary for the eventual occurrence of effective thought and actions. Only activated structures can be used in conscious constructions. Current models of schema theory and the more sophisticated, but compatible, models of parallel distributed processes are all based on these assumptions.

    Fifth, and a new assumption, conscious events prime; they provide additional activations to the relevant underlying structures.

    Assumptions 2, 3, and 4 are shared by many cognitive scientists; assumptions 1 and 5 need further elaboration. Both of them emphasize more automatic than deliberate processes in consciousness. That emphasis on automatic effects is found in particular in an analysis of the feedback effects of conscious contents. I shall discuss those at greater length later, but first wish to talk about the construction of consciousness, in general, as well as in the way it differs between daily life and dreams.
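
    Read as a mechanism, these five assumptions are concrete enough to caricature in code. A toy sketch - every number and name below is invented for illustration, not Mandler's:

        CAPACITY = 5        # assumption 1: limited-capacity consciousness
        THRESHOLD = 0.5     # minimum activation to count as "activated"

        def construct(schemas, relevance):
            """schemas:   {name: activation level}   (assumption 3)
            relevance: {name: fit to current demands} (assumption 1)
            Returns the three levels of representation: conscious,
            preconscious (activated but unconscious), and unactivated."""
            activated = {s: a for s, a in schemas.items() if a >= THRESHOLD}
            # assumption 4: only activated structures can be used in
            # conscious constructions; pick the most relevant few.
            conscious = sorted(activated,
                               key=lambda s: relevance.get(s, 0.0),
                               reverse=True)[:CAPACITY]
            preconscious = [s for s in activated if s not in conscious]
            unactivated = [s for s in schemas if s not in activated]
            return conscious, preconscious, unactivated

        demo = {"storm": 0.9, "clouds": 0.8, "road": 0.7, "path": 0.7,
                "landscape": 0.6, "summit": 0.6, "lunch": 0.2}
        rel = {"storm": 1.0, "clouds": 0.9, "road": 0.8, "path": 0.7,
               "landscape": 0.5, "summit": 0.4}
        print(construct(demo, rel))
        # (['storm', 'clouds', 'road', 'path', 'landscape'], ['summit'], ['lunch'])

    Assumption 5, the priming feedback, is what turns this one-shot selection into a dynamic loop; a companion sketch of that appears with the feedback discussion below.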


    Constructive consciousness

    The approach to consciousness developed by Marcel (1983b) has been very useful to my thinking. Marcel was concerned with the conditions under which mental structures reach the conscious state. However, in contrast to the view that structures become conscious so that consciousness is simply a different state of a structure, Marcel sees consciousness as a constructive process in which the phenomenal experience is a specific construction to which previously activated schemas have contributed. Marcel specifically rejected the identity assumption, which characterizes most current views of consciousness. The identity assumption postulates that some preconscious state "breaks through," "reaches," "is admitted," "crosses a threshold," or "enters" into consciousness. A constructivist position states, in contrast, that most conscious states are constructed out of preconscious structures in response to the requirements of the moment. I found this position particularly attractive because it solved a problem that had confronted me in my approach to emotion. If, as I had argued (Mandler, 1975b), emotional experience has both physiological and cognitive components, how are these combined into a single emotional experience? The Marcel model, which said that a particular conscious state is constructed out of two or more preconscious ones, solved that problem for me (Mandler, 1984a). We are conscious of experiences that are constructed out of two or more adequately activated schemas that are not inhibited. We are not conscious of the process of activation. The resulting phenomenal experience makes "sense of as much data as possible at the highest or most functionally useful level possible . . . ." (Marcel, 1983b).[5]

    We are customarily conscious of the important aspects of the environs, but never conscious of all the evidence that enters the sensory gateways or of all our potential knowledge of the event. A number of experiments have shown that people may be aware of what are usually considered higher order aspects of an event without being aware of its constituents. Thus, subjects are sometimes able to specify the category membership of a word without being aware of the specific meaning or even the occurrence of the word itself (Marcel, 1983a; Nakamura, 1989). A similar disjunction between the awareness of categorical and of event-specific information has been reported for some clinical observations (Warrington, 1975).

    This approach to consciousness suggests highly selective constructions that may be either abstract/general or concrete/specific, depending on what is appropriate to current needs and demands. It is also consistent with arguments that claim that we have immediate access to complex meanings of events. These higher order "meanings" will be readily available whenever the set is to find a relatively abstract construction, a situation frequent in our daily interactions with the world. We do not need to analyze the constituent features or figures to be very quickly aware/conscious of the import of a picture or scene. In general, it seems to be the case that "we are aware of [the] significance [of a set of cues] instead of and before we are aware of the cues" (Marcel, 1983b).[6]

    Conscious constructions represent the most general interpretation of the current scene that is consistent with preconscious information and with the demands of the environment.[7] Thus, we are aware of looking at a landscape when viewing the land from a mountaintop, but we become aware of a particular road when asked how we might get down or of an approaching storm when some dark clouds "demand" inclusion in the current construction.


    The construction of consciousness in daily life

    Constructive consciousness argues that current conscious contents are responsive to the immediate history of the individual as well as to current needs and demands.[8] We start with the (unconscious) schemas that represent current mental life. Schemas are dispositional mental structures that are constructed/assembled out of distributed features. The unconscious mind is not a library of static schemas, but rather an assemblage of features and attributes that, on the basis of past experience and current activations, produce appropriate mental structures. Currently available information constructs (out of distributed features of previously developed schemas) representations that respond both to the immediate information and to regularities generated by past experience and events. Evidence (occurrences) from both extra- and intra-psychic sources activates (often more than one) relevant schemas. Concrete schemas, as well as "specific memories," will be activated that represent objects and events in the environment and in past experience. More abstract and generic schemas will be activated by spreading activation; they may represent hypotheses about external events and appropriate action schemas. These assemblages of features and temporarily activated schemas provide the building blocks for conscious representations. I will experience (be conscious of) whatever is consistent with my immediate preceding history as well as with currently impinging events. The most important schemas that determine current conscious contents are those that represent the demands and requirements of the current situation.[9] The current situation activates and constrains schemas (hypotheses) of possible actions, scenes, and occurrences in terms of one's past experiences. In other words, current conscious contents reflect past habits and knowledge in addition to, and often instead of, the representations of the "real world."

    One of the best demonstrations that conscious contents respond not merely to "veridical" representations comes from the work of Nisbett & Wilson (1977). They show that conscious reconstructions of previous events reflect not just what "actually happened," but also respond to variables and structures of which we are not conscious and which distort "veridicality." Distortions (constructions!) of conscious memory, as for example in eyewitness testimony, provide many instances of this process. Vibration-induced illusions (sensory misinformation) of limb motion produce novel but "sensible" apparent body configurations, so that, for example, biceps vibration of the arm while one's finger rests on one's nose produces the experience of an elongated nose (see Lackner, 1988, for this and many other examples). Similarly, misleading information about one's hand movements apparently requires and produces the experience of involuntary hand movements (Nielsen, 1963).

    It is the hallmark of sane "rational" adults that they are conscious of a world that is consistent with its usual constraints as well as with the evidential constraints experienced by others in the same situation and at the same time. But there is another frequent human activity that is relatively unconstrained by reality, yet is conscious - namely our nightly dreams.


    Dreams, reality, and consciousness

    In contrast to everyday life, in dreams possible constructions are only partially constrained by current reality and by the lawfulness of the external world. But dreams are highly structured; they are not random events. They present a structured mixture of real world events, current events (sensory events in the environment of the dreamer), contemporary preoccupations, and ancient themes. They may be weird and novel, but they are meaningful. What they are not is dependent on the imperatives and continuity of the real world - inhabited by physically and socially "possible" problems and situations. In the waking world our conscious experience is historically bound, dependent on context and possible historical sequences. In contrast, dreams do not depend on current sensory activations; they are constructed out of previous activations. Similar arguments have been made by others in somewhat different contexts. Thus, "... in REM sleep the brain is isolated from its normal input and output channels ..." (Crick & Mitchison, 1983, p. 112). And Hobson, Hoffman, Helfand & Kostner (1987) note that the brain/mind is focused in the waking state on the linear unfolding of plot and time. In REM dreaming the brain/mind cannot maintain its orientational focus. The leftovers of our daily lives are both abstract themes - our preoccupations and our generic view of the world - as well as concrete and specific activated schemas of events and objects encountered. These active schemas are initially not organized with respect to each other; they are - in that sense - the random detritus of our daily experiences and thoughts. Without the structure of the real world, they are free floating. They are "free" to find accommodating higher order structures. These may combine quite separate unrelated thoughts about events, about happy or unhappy occurrences, but since there are few real world constraints, they may be combined into sequences and categories by activating any higher order schemas to which they may be relevant.

    It is in this fashion that abstract (and unconscious) preoccupations and "complexes" may find their expression in the consciousness of dreams. It is what Freud (1900/1975) has called the "residue" of daily life that produces some of the actors and events, whereas the scenario is free to be constructed by otherwise quiescent higher order schemas. The higher order schemas - the themes of dreams - may be activated by events of the preceding days or they may be activated simply because a reasonable number of their features have been left over as residues from the days before. I should note that dream theories that concentrate only on the residues in dreams fail to account for the obviously organized nature of dream sequences - however bizarre these might be. In contrast to mere residue theories, Hobson's activation-synthesis hypothesis of dreaming (Hobson, 1988) supposes that, apart from aminergic neurons, "the rest of the system is buzzing phrenetically, especially during REM sleep" (Hobson, 1988, p.291). Such additional activations provide ample material to construct dreams and, as Hobson suggests, to be creative and to generate solutions to old and new problems.[10]

    This view is not discrepant with some modern as well as more ancient views about the biological function of dreams (in modern times specifically REM dreams), which are seen as cleaning up unnecessary, unwanted, and irrelevant leftovers from daily experiences. However, these views of dreams as "garbage collecting" fail to account for their organized character (Crick & Mitchison, 1984; Robert, 1886).

    In short, dreams are an excellent example of the constructive nature of consciousness: they are constructed out of a large variety of mental contents, either directly activated or activated by a wide ranging process of spreading activation, and they are organized by existing mental structures.

    I now turn to the issue of feedback, the effect of consciousness on later constructions.


    The feedback function of consciousness


    The feedback assumption contrasts with the view that consciousness, because phenomenal, cannot have any causal effects. Conscious phenomena appear to occur after the event that they register (e.g., Gregory, 1981) and seem to be causally inert, i.e., "[c]onsciousness is not good for anything" (Jackendoff, 1987, p. 26). In contrast, the feedback assumption asserts the causal utility of conscious events, as well as their effect on subsequent activations of the consciously represented events.

    The feedback assumption states that the alternatives, choices, or competing hypotheses that have been represented in consciousness will receive additional activation and thus will be enhanced.[11] Given the capacity limitation of consciousness combined with the intentional selection of conscious states, very few preconscious candidates for actions and thoughts will achieve this additional, consciousness-mediated activation.[12] What structures are most likely to be available for such additional activation? It will be those preconscious structures that have been selected as most responsive to current demands and intentions. Whatever structures are used for a current conscious construction will receive additional activation, and they will have been those selected as most relevant to current concerns. In contrast, alternatives that were candidates for conscious thought or action but were not selected will be relegated to a relatively lower probability of additional activation and therefore less likely to be accessed on subsequent occasions.

    The evidence for this general effect is derived from the vast amount of current research showing that the sheer frequency of activation affects subsequent accessibility for thought and action, whether in the area of perceptual priming, recognition memory, preserved amnesic functions, or decision making (for a summary of some of these phenomena, see Mandler, 1989). The proposal extends such activations to internally generated events and, in particular, to the momentary states of consciousness constructed to satisfy internal and external demands. Thus, just as reading a sentence produces activation of the underlying schemas, so does (conscious) thinking of that sentence or its gist activate these structures. In the former case, what is activated depends on what the world presents to us; in the latter the activation is determined and limited by the conscious construction. Note that in order for the feedback function to make sense, we must assume that the "adaptive" function of construction that selects appropriate mental contents is also operating.

    This hypothesis of selective and limited activation of situationally relevant structures requires no homunculus-like function for consciousness in which some independent agency controls, selects, and directs thoughts and actions that have been made available in consciousness. Given an appropriate database, it should be possible to simulate this particular function of consciousness without an appeal to an independent decision-making agency.
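
    Taking up that invitation, here is one way such a homunculus-free simulation could look - a few lines of Python in the spirit of the sketch above, with all constants invented:

        import random

        def cycle(activations, relevance, capacity=5, boost=0.2, decay=0.9):
            """One construct-then-prime step: the capacity-limited conscious
            construction is whatever activated content best fits current
            demands, and (the feedback assumption) its members receive
            extra activation while everything else simply decays."""
            conscious = sorted(activations,
                               key=lambda s: activations[s] * relevance[s],
                               reverse=True)[:capacity]
            for s in activations:
                activations[s] *= decay              # passive decay
                if s in conscious:
                    activations[s] += boost          # consciousness-mediated priming
            return conscious

        random.seed(0)
        acts = {f"schema{i}": random.random() for i in range(12)}
        rel = {s: random.random() for s in acts}
        for _ in range(6):
            winners = cycle(acts, rel)
        print(winners)   # the same few schemas recur: primed contents entrench

    Nothing here selects "from outside": the rich-get-richer dynamics alone reproduce the conservative, entrenching character of the feedback function discussed below.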

    The proposal can easily be expanded to account for some of the phenomena of human problem solving. I assume that activation is necessary but not sufficient for conscious construction and that activation depends in part on prior conscious constructions. The search for problem solutions and the search for memorial targets (as in recall) typically have a conscious counterpart, frequently expressed in introspective protocols. What appear in consciousness in these tasks are exactly those points in the course of the search when steps toward the solution have been taken and a choice point has been reached at which the immediate next steps are not obvious. At that point the current state of the world is reflected in consciousness. That state reflects the progress toward the goal as well as some of the possible steps that could be taken next. A conscious state is constructed that reflects those aspects of the current search that do (partially and often inadequately) respond to the goal of the search. Consciousness at these points depicts waystations toward solutions and serves to restrict and focus subsequent pathways by selectively activating those that are currently within the conscious construction. Preconscious structures that construct consciousness at the time of impasse, delay, or interruption receive additional activation, as do those still unconscious structures linked with them. The result is a directional flow of activation that would not have happened without the extra boost derived from the conscious state.[13]

    Another phenomenon that argues for the re-presentation and re-activation of conscious contents is our ability to "think about" previous conscious contents; we can be aware of our awareness. There is anecdotal as well as experimental evidence that we are sometimes confused between events that "actually" happened and those that we merely imagined, i.e., events that were present in consciousness but not in the surrounds. Clearly the latter must have been stored in a manner similar to the way "actual" events are stored (Anderson, 1984; Johnson and Raye, 1981). It has been argued that this awareness of awareness (self-awareness) is in principle indefinitely self-recursive, that is, that we can perceive a lion, be aware that we are perceiving a lion, be conscious of our awareness of perceiving a lion, and so forth (e.g., Johnson-Laird, 1983). In fact, I have never been able to detect any such extensive recursion in myself, nor has anybody else to my knowledge. We can certainly be aware of somebody (even ourselves) asserting the recursion, but observing it is another matter. The recursiveness in consciousness ends after two or three steps, that is, within the structural limit of conscious organization.

    The positive feedback that consciousness provides for activated and constructed mental contents is, of course, not limited to problem-solving situations. It is, for example, evident in the course of self-instructions. We frequently keep reminding ourselves (consciously) of tasks to be performed, actions to be undertaken. "Thinking about" these future obligations makes it more likely that we will remember to undertake them when the appropriate time arrives. Thus, self-directed comments, such as, "I must remember to write to Mary" or "I shouldn't forget to pick up some bread on the way home," make remembering more and forgetting less likely. Such self-reminding not only keeps the relevant information highly activated but also repeatedly elaborated in different contexts, thus ready to be brought into consciousness when the appropriate situation for execution appears.[14] Self-directed comments can, of course, be deleterious as well as helpful. The reoccurrence of obsessive thoughts is a pathological example, but everyday "obsessions" are the more usual ones. Our conscious constructions may end up in a loop of recurring thoughts that preempt limited capacity and often prevent more constructive and situationally relevant "thinking." One example is trying to remember a name and getting stuck with an obviously erroneous target that keeps interfering with more fruitful attempts at retrieval. The usual advice to stop thinking about the problem, because it will "come to us" later, appeals to an attempt to let the activation of the "error" return to lower levels before attempting the retrieval once again. The fact that a delay may produce a spontaneous "popping" of the required information speaks to unconscious spreading of activation on the one hand and the apparent restricting effect of awareness on the other (see Mandler, 1994, for extensive discussion of these issues). Another example of the deleterious effects of haphazard activation is represented in the likelihood of consciousness being captured by a mundane occurrence. Thus, as we drive home, planning to pick up that loaf of bread, conscious preoccupation with a recent telephone call may capture conscious contents to the exclusion of other, now less activated, candidates for conscious construction, such as the intent to stop at the store. Or, planning to go to the kitchen to turn off the stove, we may be "captured" by a more highly activated and immediate conscious content of a telephone call. The "kitchen-going" intention loses out unless we refresh its activation by reminding ourselves, while on the phone, about the intended task. If we fail to keep that activation strong enough and the plan in mind -- our dinner is burned.

    The additional function of consciousness as outlined here is generally conservative in that it underlines and reactivates those mental contents that are currently used in conscious constructions and are apparently the immediately most important ones. It also encompasses the observation that under conditions of stress people tend to repeat previously unsuccessful attempts at problem solution. Despite this unadaptive consequence, a reasonable argument can be made that it is frequently useful for the organism to continue to do what apparently is successful and appears to be most appropriate. Finally, the priming functions of consciousness interact in important ways with its construction. If, as I have argued, conscious construction responds to (subjectively) important aspects of the world, then it will be exactly those aspects, of course, that will be primed and enhanced for future use and access.


    A waystation


    Between 1986 and 1992 I had occasion to elaborate and consolidate a number of old issues of capacity and seriality which resulted in my current waystation of the bringing back - the redux - of consciousness. Briefly, the issues on which I focused were these:

    Given our recent insights into the parallel and distributed nature of (unconscious) mental processing, the human mind (broadly interpreted) needed to handle the problem of finding a buffer between a bottleneck of possible thoughts and actions of comparable "strengths" competing for expression, and the need for considered effective action in the environment. Consciousness handles that problem by imposing limited capacity and seriality. Conscious and unconscious processes are - in major ways - contrasted by their differences in seriality and capacity. Conscious processes are serial and limited in capacity to some 5 contemporaneous items or chunks, whereas unconscious processes operate in parallel and are - for all practical purposes - "unlimited" in capacity. Any speculation about the evolution of consciousness needs to take these distinctions into account.[15] And finally, given the assumption that current conscious contents are constructed out of available activated structures and current demands, it follows that under different demands the same underlying structures should give rise to different conscious representations (see Mandler, 1992).

    To illustrate the importance of limited, serial conscious representation, imagine consciousness as it is, behaving as yours does, but with one - and only one - exception, namely its seriality. Imagine consciousness as a parallel machine that permits everything currently relevant (or unconsciously active) to come to consciousness all at once. You would be overwhelmed by thoughts, potential choices, feelings, attitudes, etc. of comparable "strength" and relevance. As you read a book all the characters and their implications would cascade in your mental life. Consider the story of Lord Nelson and Lady Hamilton: As you read of one of their trysts you would also be aware/conscious of his victory at Trafalgar, his defeats in the Mediterranean, his anti-republicanism, his narcissism - and her eventual obesity, her Lancashire beginnings, her lovers - and her husband's interest in classical vases and volcanoes - and so on.[16] A huge mishmash of associations and ideas would envelop you, and that discounts simple environmental events such as the chair you are sitting on, the lamp that illumines your book, and so forth. A "humanly" impossible situation. All of this would come in simultaneous snippets, still constrained by the limited capacity of the machine. In this account I have not relaxed the constraint of limited capacity. To relax that restriction too, to permit all unconscious content to become conscious, might strain the capacity of the reader to suspend disbelief. But wait just one more moment; would that consciousness not remind you of a consciousness discussed in some other place? Is that not a description of God - aware of all that all of "his children" (including the merest sparrow) ever do and think? Can one really move that easily from humanity to deity - by just suspending seriality, limited capacity, and the current relevance of consciousness?


    A minimalist conjecture and some interesting problems


    I want to summarize this discussion by suggesting that some minimal number of assumptions and requirements might do justice to the known functions of consciousness. Can we entertain an understanding of conscious phenomena by considering only three basic functions of consciousness:


    a. The selective/constructive representation of unconscious structures.

    b. The conversion from a parallel and vast unconscious to a serial and limited conscious representation.

    c. The selective activation (priming) by conscious representations that changes the unconscious landscape by producing new privileged structures.

    Of these processes, the priming function is more directly and obviously associated with consciousness, whereas the others may be more indirect and inferred. However, all of these characteristics are amenable to empirical investigation, and in the end the question is whether these minimalist assumptions are adequate to handle the most obvious or inferred functions of consciousness. If not, what else is needed, and is it consistent with these assumptions? Or is there another core of assumptions that might command assent from a large number of theorists?

    Until that question can be settled (or even asked?), there are a number of specific questions with which I have been concerned, and which deserve further investigation.

    Consciousness and short-term memory (STM). The distinction between short-term and long-term memory goes back at least to the beginnings of the information processing movement. Is it not about time that we bring STM into line with what we know about consciousness? William James coined the term "primary memory" to designate information that is currently available in consciousness.[17] If in STM we "retrieve" only whatever consciousness will "hold," then we are limited to retrieving some 5 or so items. The limitation is the same for STM and limited-capacity consciousness, both of which are restricted to a single organized set with about 5 discernible constituent attributes, features, items, etc. But any operation on the limited material held in consciousness will further activate and bound that material, providing a small set of highly activated preconscious representations. Thus STM consists of "primary" currently conscious contents and additional material that is very easily retrieved because it is the product of these short-term retrievals and activations. In addition, items "in" STM may have been elaborated or merely activated, a difference that may determine their rate of decay or accessibility.[18] Does this do justice to what we know about STM?

    Why is the limited capacity what it is? Whether one wishes to define the limited capacity as 3 or 5 or 7 items/chunks, some such magnitude has been accepted ever since George Miller's seminal paper in 1956 on the "magical number." Of all the possible genetic determinants of human cognition, the one that defines the limited capacity of consciousness seems to demand more serious attention than some of the more extravagant evolutionary conjectures that circulate these days. It seems intuitively reasonable that the number needs to be more than 2 and probably less than 10 if fast decision processes on a reasonable number of alternatives are required for survival. But why the number we've got?

    How do we determine conscious contents? Nearly thirty years ago Adrian (1966) noted that psychology's "uncertainty principle" may well be the fact that the very interrogation of conscious contents may alter these contents. Can we circumvent this problem? What alternatives, such as Dennett's (1991) heterophenomenology, are available?

    Esoteric and other states of consciousness. Currently there is a rather wide gulf between the cognitive science community on the one hand and equally passionate investigators of esoteric and altered states of consciousness on the other. Neither side seems to pay much attention to what the other has to say, and given that we speak from the cognitive science side it may be time to take a look at the phenomena that the "others" cultivate. I tried to do that early on (Mandler, 1975b) when I suggested that a variety of different esoteric and meditative methods produce "conscious stopping," i.e., a frame-freezing experience. How does that come about?

    On not being conscious. Patients with very dense amnesias have given us some anecdotal guides on the experience of "not being conscious." Tulving (1985) reports such a patient's description of living in a permanent present and not being able to think about future plans or events. More extensive follow-ups to these interesting leads should be most useful for a better understanding of "being conscious."

    I want to conclude with comments on the place of consciousness in the discussions of "mind," even though the confusion between the two has created more heat than light.



    The mind is what the brain does


    Central to many disquisitions about consciousness are convoluted arguments about the mind-body problem which demonstrate its utility as a continuing mainstay of a philosophical cottage industry - a continuing preoccupation with the 19th century equation of mind and consciousness. The very useful notion that the mind is what the brain does[19] refers to the fact that we usually assign observed or implied or subjectively reported behaviors of humans to some intervening "mental" set of variables. At least since the end of the 19th century, and surely for most of the twentieth, the term mental has been applied to conscious and unconscious events, and to a variety of theoretical machineries, ranging from hydraulic to network to schematic to computational models. In recent years, the last have achieved a unique status in the history of science as a variety of philosophers and cognitive scientists have acted as if the millennium had arrived and the final model that intervenes between the brain and behavior has been found in the computer analogy (cf. Dennett, 1991; Jackendoff, 1987).

    There are specific, and sometimes very precise, concepts associated with the function of larger units such as organs, organisms, and machines, concepts that cannot without loss of meaning be reduced to the constituent processes of the larger units. The speed of a car, the conserving function of the liver, and the notion of a noun phrase are not reducible to internal-combustion engines, liver cells, or neurons. But nobody talks about the Cadillac-acceleration, the liver-sugar, or the noun-phrase-cell problem. Complex entities may develop new functions - a notion that has sometimes been referred to as emergence. The mind has functions that are different from those of the central nervous system qua nervous system, just as societies function in ways that cannot be reduced to the function of individual minds. This is, of course, true even within bounded scientific fields; mechanics and optics cannot be reduced to nuclear physics.

    Some of the difficulty that has been generated by the mind-body distinction stems from the failure to consider the relation between well-developed mental and physical theories. Typically, mind and body are discussed in terms of ordinary-language definitions of one or the other. Because these descriptions are far from being well-developed theoretical systems, it is doubtful whether the problems of mind and body as developed by the philosophers are directly relevant to the scientific distinction between mental and physical systems.

    Once it is agreed that the scientific mind-body problem concerns the relation between two sets of theories, the enterprise becomes theoretical and empirical, not metaphysical. And the conclusion would be that we do not yet know enough about either system to develop a satisfactory bridging system/language. If, however, we restrict our discussion of the mind-body problem to the often vague and frequently contradictory speculations of ordinary language, then, as centuries of philosophical literature have shown, the morass is unavoidable and bottomless.

    For example, we can and do, in the ordinary-language sense, ask how it is that physical systems can have "feelings." A recurring philosophical blockbuster has been the question how a physical brain can generate mental qualia such as color sensations. The question has produced many premature explications, however ingenious some of them are (such as Dennett's, 1991). A healthy agnosticism, a resounding "I don't know" might be well-placed at the beginning of these interchanges. We don't know, and we might know sometime in the future, but is the question really different from any other island of human ignorance? Such questions assume that we know the exact nature of the physical system and of a mental system that produces "feelings." Usually, however, the question is phrased as if "feelings" were a basic characteristic of the physical and mental system instead of one of its products. Not only is the experience of a feeling a product, but its verbal expression is the result of complex mental structures that intervene between its occurrence in consciousness and its expression in language. If we have truly abandoned Cartesian dualism, then one may permit the question of how the brain "does" consciousness, seen as just another thing that it does.

    The study of consciousness also had a very modern hurdle in its way. In their preoccupation with the computer analogy, many cognitive scientists have become uneasy with consciousness as a characteristic of one aspect of mind. In part because the problem of computer consciousness is at the least complex (though not difficult for science-fiction writers), some philosophers and others have become closet epiphenomenalists - refusing to assign to consciousness any function in mental life (e.g., Jackendoff, 1987; Thagard, 1986 - who is, however, willing to let consciousness have some functions in applied aspects of behavior!).



    Functionalism revisited - approaching consciousness from the outside in

    Assuming that the study of consciousness is important, how are we to approach it? There have been two major trends in modern treatments: Inside-out and Outside-in. The Inside-out approach starts with the appearance and characteristics of some human action or behavior and then attempts to model a plausible mechanism or set of mechanisms that will generate those characteristics. The Outside-in theorist looks first at the functions of those human actions and then attempts to find a plausible mechanism or mechanisms that will carry out those functions.[20] The proverbial Martian, on encountering an automobile, would - if an Insider - immediately open the hood and try to understand the working of the engine, but - if an Outsider - would first try to understand what it is that automobiles do (and then open the hood). One of the results of the Inside-out method is a very real attraction to complex computational machines that can model some characteristics of mind but tend to be rather less able to carry out its functions.

    Dennett, for example, much like most philosophers, is primarily concerned with the appearance and "feel" of consciousness and becomes uncharacteristically vague when talking about its possible functions (Dennett, 1991, pp. 275 ff.; see also Mandler, 1993). Another inside-out theorist is very specific in his defense of the approach: Jackendoff (1987, p. 327) rejects any inquiry as to what functions consciousness might serve. He specifically endorses the preferential use of evidence that is directed toward what consciousness is, not what it is for. The attractiveness of the inside-out/functionalism approach[21] is found in Chomsky's approach to linguistics. Questions of the function of language are secondary - in contrast to the linguistic "functionalists" who preferentially consider contemporaneously several aspects of language, including its communicative, cognitive, pragmatic, and social "functions" in order to understand its origin and structure. The inside-out approach is also related to the preference for some sort of central homunculus that directs and knows all. Not all homunculi are bad, but this particular one usually adopts the language of the board room, with "executives" directing "slaves" and similar metaphors. It is, of course, inevitable that consciousness-talk at the end of the 20th century will reflect 20th century mores and prejudices, whether these are phrased in computer-language, boardroom talk, or whatever. The best we can do is to be aware of such obvious lures and to try to avoid evanescent sociocentric approaches that are likely to have a relatively short life. On the other hand, such pious exhortations may be useless, since it is highly probable that we cannot truly escape our current situation and past history.



    References


    Adrian, E. D. (1966). Consciousness. In J. C. Eccles (Ed.), Brain and conscious experience. New York: Springer
    Anderson, R. E. (1984). Did I do it or did I imagine doing it?. Journal of Experimental Psychology: General, 113, 594-613.
    Atkinson, R. C. and Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. W. Spence and J. T. Spence (Eds.), The psychology of learning and motivation (Vol. 2). New York: Academic Press
    Baddeley, A. (1989). The uses of working memory. In P. R. Solomon, G. R. Goethals, C. M. Kelley, & B. R. Stephens (Eds.), Memory: Interdisciplinary approaches (pp. 107-123). New York: Springer Verlag
    Craik, F. I. M. and Watkins, M. J. (1973). The role of rehearsal in short-term memory. Journal of Verbal Learning and Verbal Behavior, 12, 599-607.
    Crick, F. and Mitchison, G. (1983). The function of dream sleep. Nature, 304, 111-114.
    Dennett, D. C. (1991). Consciousness explained. Boston: Little, Brown & Company
    Freud, S. (1900). The interpretation of dreams. In The Standard Edition of the Complete Psychological Works of Sigmund Freud (Vols. 4 and 5). London: Hogarth Press, 1975
    Gregory, R. L. (1981). Mind in science. New York: Cambridge University Press
    Hobson, J. A. (1988). The dreaming brain. New York: Basic Books
    Hobson, J. A., Hoffman, S. A., Helfand, R., and Kostner, D. (1987). Dream bizarreness and the activation-synthesis hypothesis. Human Neurobiology, 6, 157-164.
    Jackendoff, R. (1987). Consciousness and the computational mind. Cambridge, Mass.: MIT Press
    James, W. (1890). The principles of psychology. New York: Holt
    Johnson-Laird, P. (1983). Mental models. Cambridge: Cambridge University Press
    Johnson, M. K. and Raye, C. L. (1981). Reality monitoring. Psychological Review, 88, 67-85.
    Kahneman, D. and Treisman, A. (1984). Changing views of attention and automaticity. In R. Parasuraman & D. R. Davies (Eds.), Varieties of attention (pp. 29-61). New York: Academic Press
    Lackner, J. R. (1988). Some proprioceptive influences on the perceptual representation of body shape and orientation. Brain, 111, 281-297.
    Mandler, G. (1975a). Consciousness: Respectable, useful, and probably necessary. In R. Solso (Ed.), Information processing and cognition: The Loyola symposium (Also in: Technical Report No. 41, Center for Human Information Processing, University of California, San Diego. March, 1974., pp. 229-254). Hillsdale, N.J.: Lawrence Erlbaum Associates
    Mandler, G. (1975b). Mind and emotion. New York: Wiley
    Mandler, G. (1984a). The construction and limitation of consciousness. In V. Sarris and A. Parducci (Eds.), Perspectives in psychological experimentation: Toward the year 2000. Hillsdale, N.J.: Lawrence Erlbaum Associates
    Mandler, G. (1984b). Mind and body: Psychology of emotion and stress. New York: Norton
    Mandler, G. (1985). Cognitive psychology: An essay in cognitive science. Hillsdale, N.J.: Lawrence Erlbaum Associates
    Mandler, G. (1986). Aufbau und Grenzen des Bewusstseins [Structure and limits of consciousness]. In V. Sarris & A. Parducci (Eds.), Die Zukunft der experimentellen Psychologie [The future of experimental psychology]. Weinheim and Basel: Beltz
    Mandler, G. (1988). Problems and directions in the study of consciousness. In M. Horowitz (Ed.), Psychodynamics and cognition (pp. 21-45). Chicago: Chicago University Press
    Mandler, G. (1989). Memory: Conscious and unconscious. In P. R. Solomon, G. R. Goethals, C. M. Kelley, & B. R. Stephens (Eds.), Memory: Interdisciplinary approaches (pp. 84-106). New York: Springer Verlag
    Mandler, G. (1992). Toward a theory of consciousness. In H.-G. Geissler, S. W. Link & J. T. Townsend (Eds.), Cognition, information processing, and psychophysics: Basic issues (pp. 43-65). Hillsdale, N.J.: Lawrence Erlbaum Associates
    Mandler, G. (1993). Review of Dennett's "Consciousness explained". Philosophical Psychology, 6, 335-339.
    Mandler, G. (1994). Hypermnesia, incubation, and mind-popping: On remembering without really trying. In C. Umilta & M. Moscovitch (Eds.), Attention and Performance XV: Conscious and unconscious information processing (pp. 3-33). Cambridge, Mass.: MIT Press
    Mandler, G. (1995). Origins and consequences of novelty. In S. M. Smith, T. B. Ward & R. Finke (Eds.), The creative cognition approach. Cambridge, MA: MIT Press
    Marcel, A. J. (1983a). Conscious and unconscious perception: Experiments on visual masking and word recognition. Cognitive Psychology, 15, 197-237.
    Marcel, A. J. (1983b). Conscious and unconscious perception: An approach to the relations between phenomenal experience and perceptual processes. Cognitive Psychology, 15, 238-300.
    McClelland, J. L. and Rumelhart, D. E. (1981). An interactive activation model of context effects in letter perception: Part 1. An account of basic findings. Psychological Review, 88, 375-407.
    Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.
    Nakamura, Y. (1989). Explorations in implicit perceptual processing: Studies of preconscious information processing. Unpublished doctoral dissertation, University of California, San Diego.
    Neisser, U. (1963). The multiplicity of thought. British Journal of Psychology, 54, 1-14.
    Neisser, U. (1967). Cognitive psychology. New York: Appleton-Century-Crofts
    Nielsen, T. I. (1963). Volition: A new experimental approach. Scandinavian Journal of Psychology, 4, 225-230.
    Nisbett, R. E. and Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231-259.
    Posner, M. I. and Boies, S. J. (1971). Components of attention. Psychological Review, 78, 391-408.
    Posner, M. I. and Keele, S. W. (1970). Time and space as measures of mental operations. Paper presented at the Annual Meeting of the American Psychological Association.
    Posner, M. I. and Snyder, C. R. R. (1975). Attention and cognitive control. In R. Solso (Ed.), Information processing and cognition: The Loyola symposium. Potomac, Md.: Lawrence Erlbaum Associates
    Robert, W. (1886). Der Traum als Naturnothwendigkeit erklärt [The dream explained as a natural necessity]. Hamburg: H. Seippel
    Shallice, T. (1972). Dual functions of consciousness. Psychological Review, 79, 383-393.
    Shallice, T. (1991). The revival of consciousness in cognitive science. In W. Kessen, A. Ortony & F. Craik (Eds.), Memories, thoughts, emotions: Essays in honor of George Mandler (pp. 213-226). Hillsdale, NJ: Lawrence Erlbaum Associates
    Thagard, P. (1986). Parallel computation and the mind-body problem. Cognitive Science, 10, 301-318.
    Thatcher, R. W. and John, E. R. (1977). Foundations of cognitive processes. Hillsdale, N.J.: Lawrence Erlbaum Associates
    Tulving, E. (1985). Memory and consciousness. Canadian Psychology, 26, 1-12.
    Warrington, E. K. (1975). The selective impairment of semantic memory. Quarterly Journal of Experimental Psychology, 27, 635-657.
    Woodward, A. E., Bjork, R. A., and Jongeward, R. H. Jr. (1973). Recall and recognition as a function of primary rehearsal. Journal of Verbal Learning and Verbal Behavior, 12, 608-617.

  9. MindMods CogSciTech Consciousness Paper Posting #1

    Being Conscious of Ourselves

    We're going to try to post an interesting paper on consciousness at least once a week. There are debates among the philosophers and scientists who study consciousness about pretty much every aspect of it - especially about what consciousness actually is. Many of these papers are surprisingly easy to read, given the nature of their arguments.

    This first paper, 'Being Conscious of Ourselves', was written by David M. Rosenthal and published in The Monist, 87, 2 (April 2004), in a special issue on self-consciousness.

    BEING CONSCIOUS OF OURSELVES

    Abstract: I argue that we can explain how we are conscious of ourselves by appeal to essentially indexical thoughts we have about ourselves, in particular about our own current mental states. I show that being conscious of ourselves in that way doesn't require that we are aware of ourselves in some privileged way that's antecedent to the higher-order thoughts we have about our own mental states. The account successfully resists, moreover, challenges based on the so-called immunity to error through misidentification. And an account based on such higher-order thoughts, finally, also does justice to the way we identify and locate ourselves as creatures in the world.



    Slightly revised from The Monist, 87, 2 (April 2004), special issue on self-consciousness, guest edited by José Luis Bermúdez. © David M. Rosenthal


    I. Consciousness of the Self

    What is it that we are conscious of when we are conscious of ourselves? Hume famously despaired of finding any self, as against simply finding various impressions and ideas, when, as he put it, "I enter most intimately into what I call myself."1 And, expanding on this, he wrote: "When I turn my reflexion on myself, I never can perceive this self without some one or more perceptions; nor can I ever perceive any thing but the perceptions" (Appendix, p. 634).

    It is arguable that the way Hume attempted to become conscious of the self seriously stacked the deck against success. Hume assumed that being conscious of a self would have to consist in perceiving that self. Perceiving things does make one conscious of them, but perceiving something is not the only way we can be conscious of that thing. We are also conscious of something when we have a thought about that thing as being present. I may be conscious of an object in front of me by seeing it or hearing it; but, if my eyes are closed and the object makes no sound, I may be conscious of it instead by having a thought that it is there in front of me.

    Not all thoughts one can have about an object result in one's being conscious of that object. We resist the idea that having thoughts about objects we take to be distant in place or time, such as Saturn or Caesar, makes one conscious of those objects; so the thought must be about the object as being present to one. And the thought must presumably have an assertoric mental attitude; doubting and wondering something about an object do not make one conscious of that object. Nor does simply being disposed to have a thought about something make one conscious of it; the thought must be occurrent. But having an occurrent, assertoric thought about an object as being present does intuitively make one conscious of that object.

    Hume would presumably have argued that this alternative way of being conscious of things has no advantage here, since he maintained that thinking consists simply of pale versions of qualitative perceptual states. "All ideas," he insisted, "are borrow'd from preceding perceptions" (Appendix, p. 634). And his problem was about finding anything other than the mental qualities of our perceptions.

    But there is good reason to reject Hume's view about the mental nature of having thoughts. For one thing, it is difficult to see how perceptions could be combined to yield thoughts with complex syntactic structure. For another, though qualitative mental states arguably do represent things,2 they do so in a way that is strikingly different from the way the intentional content of thoughts represents things.3 Nor is there anything in qualitative mentality that corresponds to the mental attitudes exhibited by intentional states. And rejecting Hume's perceptual model of what thoughts consist in makes room for a more promising way to understand how it is that we are conscious of ourselves. We are conscious of ourselves by having suitable thoughts about ourselves.

    The contrast between Hume's sensory approach and the alternative that relies on the thoughts we have about ourselves mirrors a contrast between two views about what it is for a mental state to be a conscious state. On the traditional inner-sense model, a mental state is conscious if one senses or perceives that state; this is doubtless the most widely held view about the consciousness of mental states.4 The higher-order-thought (HOT) model, by contrast, holds instead that a mental state's being conscious consists in its being accompanied by a suitable thought that one is, oneself, in that state. On the version of the view that I have developed and defended, the thought must be assertoric and nondispositional. And, because the thought has the content that one is, oneself, in that state, the thought automatically represents the target mental state as being present.5

    The difference between the inner-sense and HOT views about what it is for a mental state to be conscious sheds light on the two models of consciousness of the self. Suppose that one's mental states are conscious in virtue of one's sensing those states. Sensing a state consists in having a higher-order sensation that represents the sensed state. But nothing in one's sensing a mental state would make reference to or in any other way represent any self to which the target state belongs. So nothing in one's sensing a mental state would make one conscious of the self.

    Things are different if one is, instead, conscious of one's conscious states by having thoughts about those states. One will then have a thought that one is, oneself, in the target state. And that HOT will thereby make one conscious not only of that target state, but also of a self to which the HOT represents the target state as belonging. The HOT model explains not only how we are conscious of our conscious mental states, but how it is that we are conscious of ourselves as well.

    But can the HOT model of how mental states are conscious do justice to the particular way we are conscious of ourselves? There are two main reasons to doubt that it can do so. The way we are conscious of ourselves seems, intuitively, to be special in a way that simply having a thought about something cannot capture. For one thing, there is a difference between having a thought about somebody that happens to be oneself and having a thought about oneself, as such. HOTs presumably must be about oneself, as such. But having a thought about oneself, as such, may seem to require some special awareness of the self that is antecedent to and independent of the thought.

    In addition, it seems to many that we are conscious of ourselves in a way that affords a certain immunity to error. The special epistemological access to the self which these phenomena seem to suggest has even been thought to provide a foundation for the identification of all other objects. How could such special self-awareness arise if we are conscious of ourselves simply by having thoughts about ourselves as being bearers of particular mental states?

    There is a second kind of challenge to a view about consciousness of the self that relies simply on such HOTs. Although it seems that we are conscious of ourselves in a way that is special in the respects just sketched, our consciousness of ourselves also fits with our ordinary, everyday ways of identifying and locating ourselves in the world. Each of us is a being with many conscious mental states. But each of us is also a creature that interacts with other objects in the world. And we are conscious of ourselves in both respects. How can simply having HOTs about our mental states explain the way we are conscious of ourselves as located within physical reality? How could having HOTs ground our identifying ourselves as creatures interacting with many other physical things?

    The two challenges seem to pull in opposing directions. It may be unclear at first sight how we could be conscious of ourselves in a way that underwrites some kind of immunity to error and yet also captures our contingent location and identity in the physical world.

    In what follows, I argue that a model of self-consciousness based on HOTs can meet both these challenges. In the next section I briefly sketch the reasons why the HOT model is preferable to the inner-sense model in explaining what it is for a mental state to be conscious. In sections III and IV, then, I take up the first of the two challenges just described, to explain how self-consciousness based on HOTs squares with the way our consciousness of ourselves seems to be special. And in section V I conclude by showing how that account also fits with the way we identify and locate ourselves as creatures in the world.


    II. Consciousness and HOTs

    There is extensive evidence from both everyday life and experimental findings that mental states occur without being conscious. Such evidence relies on situations in which there is convincing reason to believe that an individual is in some particular mental state even though that individual sincerely denies being in the state. Such sincere denials indicate that the individual is not conscious of being in the state in question. We take as decisive that a mental state is not conscious if an individual is in that state but is in no way conscious of being in it. It follows that whenever a mental state is conscious, the individual in that state is, in some suitable way, conscious of being in it.

    Both the inner-sense and HOT models agree thus far. They differ in what that suitable way is of being conscious of a state, in virtue of which that state is conscious. It has often been emphasized that, for a mental state to be conscious, one must be conscious of it in a way that carries some kind of immediacy. But the way we are conscious of our conscious states need only be subjectively immediate. It need not be that nothing actually mediates between the state and one's awareness of that state, but only that nothing seems to mediate.

    Ordinary perceiving seems to operate in just this way. Nothing seems, subjectively, to mediate between the things we see and hear and our seeing and hearing of them. From a first-person point of view, perceiving seems to be direct. This feature of perceiving makes the inner-sense model appealing, since higher-order perceiving can do justice to the way one's consciousness of one's conscious states seems to one to be unmediated.

    But the HOT model is no less successful in capturing this aspect of the way we are conscious of our conscious states. Some of the thoughts we have about things seem, subjectively, to rely on inference, while others do not seem to do so. When a thought seems subjectively to occur independently of any inference, I shall refer to it as a noninferential thought. Since subjective impression is all that matters here, a noninferential thought may actually arise as a result of some inference, as long as one is wholly unaware of the way the inference figures in that thought's occurring. So, if one is conscious of being in a state by having a noninferential HOT that one is in that state, it will seem subjectively that nothing mediates between that state and one's being conscious of it. HOTs can in this way explain the intuitive immediacy that our awareness of our conscious states exhibits.

    Thus far, HOTs and inner sense seem equally good at explaining what it is for a mental state to be conscious. But inner sense faces a difficulty that disqualifies it from serious consideration. Sensing and perceiving things take place by way of qualitative mental states. And, when sensing or perceiving is conscious, there is something it's like for one to be in these states, something it's like in respect of the mental quality that these states exhibit. So, if we are conscious of our conscious states by way of some inner sense, then there are higher-order qualitative states in virtue of which we are conscious of our conscious states.

    But it is clear that no such higher-order qualitative states actually occur. One way to see this is theoretical. Every mental quality belongs to some distinctive perceptual modality; but what modality might the mental qualities of such higher-order qualitative states belong to? It could not be the modality of the first-order conscious state, since that perceptual modality is dedicated to perceiving a particular range of perceptible properties and the first-order qualitative state does not exhibit those perceptible properties. Visual states, for example, exhibit mental qualities that reflect the similarities and differences among the commonsense physical properties perceptible by sight; but the visual states, themselves, do not exhibit properties perceptible by sight. So the mental qualities of higher-order qualitative states could not simply reduplicate the first-order mental qualities. And there is no other perceptual modality to which such higher-order qualities could belong.

    Subjective considerations point to the same conclusion. When our mental states are conscious in the ordinary, everyday way, we are not conscious of the higher-order states in virtue of which we are conscious of those first-order states. But very occasionally we are actually aware of being conscious of those first-order states; when we introspect, we are conscious of being aware of the introspected states. But even when we introspect, we are never conscious of any mental qualities that characterize the states in virtue of which we are conscious of those introspected mental states. The higher-order states in virtue of which we are conscious of our own mental states are not qualitative states.

    The only alternative is that those higher-order states are simply intentional states. We have already seen that being in an assertoric, nondispositional intentional state that represents the thing it is about as being present makes one conscious of that thing. And, if one is not conscious of any inference on the basis of which one holds that assertoric attitude, so that the awareness the attitude results in seems spontaneous and unmediated, then one will be conscious of the target state in the subjectively unmediated way characteristic of our conscious states. We are conscious of our conscious states by having HOTs to the effect that we are in those states.
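
    The structural claim of the last two paragraphs can be put in a few lines of code. This is a hedged sketch of my own, not Rosenthal's formalism; the class and attribute names (MentalState, Thought, seems_uninferred, and so on) are illustrative assumptions.

    ```python
    # Minimal structural sketch of the HOT condition described above;
    # names and types are illustrative, not Rosenthal's notation.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class MentalState:
        kind: str                      # e.g. "pain", "seeing a canary"

    @dataclass
    class Thought:
        about: Optional[MentalState]   # target state, if any
        assertoric: bool               # holds its content as true
        occurrent: bool                # actually tokened ("nondispositional")
        seems_uninferred: bool         # no conscious inference behind it

    def is_conscious(state: MentalState, thoughts: List[Thought]) -> bool:
        """On the HOT model, a state is conscious iff accompanied by a
        suitable HOT: an assertoric, occurrent, subjectively noninferential
        thought to the effect that one is, oneself, in that state."""
        return any(t.about is state and t.assertoric and t.occurrent
                   and t.seems_uninferred
                   for t in thoughts)

    pain = MentalState("pain")
    hot = Thought(about=pain, assertoric=True, occurrent=True,
                  seems_uninferred=True)
    assert is_conscious(pain, [hot])
    assert not is_conscious(pain, [])  # unaccompanied states are not conscious
    ```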

    As noted at the outset, HOTs have the advantage over inner sense that, unlike higher-order sensations, a HOT makes one conscious of its target state as belonging to a particular self. So each HOT makes one conscious of that self. And just as a HOT, by being noninferential, makes one conscious of its target state in a way that seems subjectively unmediated, so that HOT will also make one conscious of the relevant self in a way that is subjectively unmediated. HOTs do justice to our intuitive sense that we have special, unmediated access to ourselves.

    A proponent of the inner-sense model might argue that, whatever one thinks about the foregoing considerations, inner sense has a decisive advantage over the HOT model. When qualitative states are conscious, there is something it's like for the subject to be in those states, and this is absent when qualitative states are not conscious. It seems, however, that HOTs could not be responsible for this difference, since HOTs have no qualitative mental properties. We can explain why there is something it's like for one to be in conscious qualitative states, the argument goes, only if the higher-order states in virtue of which we are conscious of the first-order states are themselves qualitative states.

    But this argument misconceives the situation. The higher-order states are typically not themselves conscious; they are conscious only when we are introspectively aware of our conscious states. The reason to invoke higher-order states in virtue of which some mental states are conscious is not that we are normally conscious of such higher-order states, but that positing such higher-order states is theoretically well-founded. The higher-order states, whether sensations or thoughts, are theoretical posits, which we only occasionally become subjectively aware of.

    But, since the higher-order states typically are not conscious, their being qualitative in character could not help explain there being something it's like for one to be in conscious qualitative target states. There will be something it's like for one to be in a conscious qualitative state if one is conscious of oneself, in a way that seems subjectively to be unmediated, as being in a state of that qualitative type. And HOTs plainly make us conscious of ourselves in that way.

    There is some indirect evidence that HOTs actually do result in there being something it's like for one to be in conscious qualitative states. We sometimes become conscious of more fine-grained differences among our qualitative states by learning new words for the relevant mental qualities. Consider the way new mental qualities seem consciously to emerge when we learn new words for the gustatory mental qualities that result from tasting similar wines, or the auditory mental qualities that arise when we hear similar musical instruments. We can best explain how the learning of words for mental qualities can have that effect by supposing that we come to deploy new concepts corresponding to those words, which enable us to have new HOTs about our qualitative states. HOTs with more fine-grained content result in our qualitative states' being conscious in respect of more fine-grained qualities. And, if the intentional content of HOTs makes a difference to what mental qualities we are conscious of, we can infer that HOTs also make the difference between there being something it's like for one to be in those states and there being nothing at all that it's like. HOTs do result in our qualitative states' "lighting up."


    III. Self-Consciousness and the Essential Indexical

    A HOT makes one conscious of oneself as being in a particular mental state because it has the content that one is, oneself, in that state. So a HOT must somehow refer to oneself. But, as already noted, not just any way of referring to oneself will do.

    There are many descriptions that uniquely pick me out even though I am unaware that they do so; I might believe that some other individual satisfies one such description, or simply have no idea who if anybody does. Consider John Perry's now classic example of my seeing in a grocery store that somebody is spilling sugar from a grocery cart and not realizing that the person spilling sugar is me. My thought that the person spilling sugar is making a mess refers to me, though it does not refer to me as such. Perry describes this special way of referring to oneself as the essential indexical; classical grammarians know it as the indirect reflexive, since it captures in indirect discourse the role played in direct quotation by the first-person pronoun.6

    For a mental state to be conscious, it is not enough that the individual one is conscious of as being in that state simply happens to be oneself. Suppose that I am the unique F and I have a thought that the unique F is in pain. That would not make me conscious of myself as being in pain unless I was also aware that I am the unique F. Suppose I thought instead that you were the unique F. My thought that the unique F is in pain would then hardly result in my pain's being conscious; it would not in any relevant way make me conscious of myself as being in pain.

    Essentially indexical self-reference is one way in which our consciousness of ourselves is special. And, as already noted, it is sometimes argued that essentially indexical self-reference is required for identifying everything other than oneself.7 But we rarely do identify other objects by reference to ourselves. We almost always use some local frame of reference in which we figure but which we identify independently of ourselves, by way of various objects we perceive and know about. Such local frames of reference occasionally fail, but when they do, referring to ourselves seldom helps. Essentially indexical self-reference cannot sustain such foundationalist epistemological leanings.

    What exactly does such essentially indexical self-reference actually consist in? How is it that we are able to refer to ourselves, as such? An essentially indexical thought or speech act about myself will have the content that I am F. So we must consider how the word 'I' functions in our speech acts and how the mental analogue of 'I' functions in the thoughts those speech acts express.

    The word 'I' plainly refers to the individual who performs a speech act in which that word occurs. Similarly, the mental analogue of that word refers to the thinker of the containing thought, the individual that holds a mental attitude toward the relevant intentional content. The word 'I' does not have as its meaning the individual performing this speech act; nor does the mental analogue of 'I' express the concept the individual holding a mental attitude toward this content. One can refer to oneself using 'I' and its mental analogue without explicitly referring to any speech act or intentional state.

    On David Kaplan's well-known account, the reference of 'I' is determined by a function from the context of utterance to the individual that produces that utterance; 'I' does not refer to the utterance itself.8 The connection between the words uttered and the act of uttering them is pivotal. 'I' refers to whatever individual produces a containing utterance, but not by explicitly referring to the utterance itself. Similarly, the mental analogue of 'I' refers to whatever individual holds a mental attitude towards a content in which that mental analogue figures, though again not by explicitly referring to that intentional state.
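
    Kaplan's point that 'I' refers via a function from context to agent, without the content mentioning the utterance, can be made vivid in a short sketch. This is my illustration, not Kaplan's formalism; the Context fields are assumptions.

    ```python
    # Sketch of Kaplan's account as described above: the reference of 'I'
    # is fixed by a function from the context of utterance to the agent
    # who produces the utterance. Field names are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Context:
        agent: str   # whoever produces the utterance or thinks the thought
        time: str
        place: str

    def reference_of_I(context: Context) -> str:
        """The function takes the context, not the utterance, as input;
        so 'I' picks out its producer without the content explicitly
        referring to the utterance itself."""
        return context.agent

    ctx = Context(agent="the shopper", time="noon", place="grocery store")
    print(reference_of_I(ctx))  # -> "the shopper", however else described
    ```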

    Suppose, then, that I have the essentially indexical thought that I am F. My thought in effect describes as being F the individual that thinks that very thought, "in effect" because, although the thought does not describe the individual in that way, it still does pick out that very individual. It does not pick out that individual because the intentional content of my thought so describes the individual. But whenever I do have a first-person thought that I am F, my having that thought disposes me to have another thought that identifies the individual the first thought is about as the thinker of that first thought. In that way, we can say that every first-person thought tacitly or dispositionally characterizes the self it is about as the thinker of that thought. Nothing more is needed for essentially indexical self-reference.

    HOTs are simply first-person thoughts, and they function semantically just as other first-person thoughts do. So, when I have a HOT that I am in a particular state, my thought in effect describes as being in that state the individual who thinks that thought. Though the thought itself does not describe that individual as thinking that thought, the thinker of the thought is disposed to pick that individual out in just that way, by being disposed to have another thought that does so identify the individual the first thought is about. Because HOTs function semantically as other first-person thoughts do, the HOT hypothesis explains why, when a mental state is conscious, one is conscious of oneself in an essentially indexical way as being in that state.

    It is important for the HOT model that when a thought refers to oneself in this essentially indexical way, its content does not describe the individual it refers to as the thinker of the thought. If an essentially indexical first-person thought did describe the individual it is about as the thinker of that thought, simply having that thought would make one conscious of having it. And, since HOTs are essentially indexical first-person thoughts, one could not have a HOT without being conscious of oneself as having it. But we are wholly unaware of most of our HOTs.

    It is sometimes objected to the HOT model that nonlinguistic beings, including human infants, could not have HOTs. But this is far from obvious. Many nonlinguistic beings presumably do have some thoughts, and the conceptual resources HOTs use to describe their target states could well, for these beings, be fairly minimal. These beings would not be conscious of their conscious states in the rich way distinctive of adult humans, but that is not at all implausible.

    Still, it might be thought that the essentially indexical self-reference that HOTs make precludes their occurring in beings without language. It is natural to suppose that such creatures have, in any case, no HOTs about their intentional states. And a thought can make essentially indexical self-reference only if one is disposed to identify the individual one's thought refers to as the thinker of that thought. But it is natural to suppose that such nonlinguistic beings would indeed so identify the individual their HOTs refer to if they had suitable conceptual resources. And that should be enough for a HOT to result in the creature's being conscious of itself in the relevant way as being in the state in question.

    As noted above, the phenomenon of essentially indexical self-reference encourages the idea that a certain kind of reference to oneself occurs which provides an epistemic foundation for the identification of all other objects. And that idea may, in turn, make it seem tempting to urge that we must have some special access to the self that is independent of the thoughts we have about it. Some other form of self-consciousness, antecedent to those thoughts, might then be needed for the essentially indexical self-reference that our HOTs involve.

    The foregoing explanation of the essential indexical helps dispel that illusion. Reference to oneself as such is simply reference to an individual one is disposed to pick out as the very individual doing the referring. That disposition is independent of the thought that refers to oneself in an essentially indexical way. And that may encourage the idea that essentially indexical self-reference requires independent, antecedent access to the self. But the disposition to have another thought that identifies the individual the first thought is about as the thinker of that thought does not rest on or constitute independent access to the self. It is simply a disposition to have another thought. Essentially indexical self-reference raises no difficulty for an account of self-consciousness in terms of HOTs.


    IV. Self-Consciousness and Immunity to Error

    There is, however, another way in which our consciousness of our own conscious states appears to raise problems for such an account. There is a traditional view on which our awareness of our conscious states is both infallible and exhaustive. When a mental state is conscious, on this view, there is no feature we are conscious of the state as having which it fails to have, and no mental feature the state has of which we fail to be conscious. There is thus no distinction, on this view, between the reality of mental states and their appearance in consciousness.9

    Few today would endorse such a strong form of privileged access. There is doubtless much about the mental natures of our conscious states that we are unaware of, and much that we are wrong about. Our access to our mental states often falls short of exhaustiveness; we are often unclear about what we actually think about things. Nor is that access infallible; there is robust experimental evidence, for example, that we are sometimes wrong about what intentional states issue in our choices and other actions. People often confabulate being in intentional states to explain their choices in situations in which the reported intentional states could not have been operative.10 In these cases, we are conscious of ourselves as being in states that we are not actually in.

    Errors of both types occur not only with intentional states, but also in connection with conscious qualitative states. When we consciously see red, we are often conscious of the conscious sensation in respect of a relatively generic shade, though the sensation exhibits a fairly specific shade of red, as subsequent attention reveals. And it may even happen that we have one type of bodily sensation or emotion but we are conscious of ourselves as having a type different from that. When local anesthetic blocks any actual pain, a dental patient may still react to the fear and vibration caused by drilling by seeming to be in pain; in such a case, one is conscious of oneself as being in pain, though no pain actually occurs.

    The idea that our access to our conscious states is privileged often goes hand in hand with the view that a state's being conscious is an intrinsic property of that state. If a state's being conscious were intrinsic to that state, that would explain our subjective sense that nothing mediates between the states we are conscious of and our consciousness of them. We have that subjective sense because nothing actually does mediate. And it may be tempting to hold that, if nothing mediates between our consciousness of a state and the state itself, consciousness could not be erroneous; there would be no room for error to enter. But that picture is unfounded. Even if one's consciousness of a state were intrinsic to that state, it could still go wrong.

    But there is an echo of such privilege which persists in a view about the way we are conscious of ourselves. This echo pertains not to the mental nature of the states we are conscious of ourselves as being in, but to the self we are conscious of as being in those states. Suppose I consciously feel pain or see a canary. Perhaps I can be wrong about whether the state I am in is one of feeling pain or seeing a canary. But it may well seem that, if I think I feel pain or see a canary, it cannot be that I am right in thinking that somebody feels pain or sees a canary, but wrong in thinking that it is I who does those things. Such first-person thoughts would, in Sydney Shoemaker's now classic phrase, be "immune to error through misidentification," specifically with respect to reference to oneself.11

    Shoemaker recognizes that such immunity to error fails if one comes to have such thoughts in the way we come to have thoughts about the mental states of others. As he notes (7), I can wrongly take a reflection I see in a mirror to be a reflection of myself; I thereby misidentify myself as the person I see in the mirror. And I might thereby think that I have some property, being right that somebody has that property but wrong that it is I who has the property. So such immunity to error through misidentification does not occur every time one has a thought that one has a particular property. It must be that one's thought that one has that property arises from the special way we seem to have access to our being in conscious states.

    There is reason to doubt, however, that such immunity to error actually obtains. The way we have access to being in conscious states is a matter simply of our having noninferential HOTs that we are in those states. We have a subjective impression that this access is special, since it appears to arise spontaneously and without mediation. But that subjective impression arguably results simply from the relevant HOTs' being based on no conscious inference, and indeed from their typically not themselves being conscious in the first place.

    As with other thoughts, we come to have these HOTs in a variety of ways, and the process by which HOTs arise can, like any other process, go wrong. So, however unlikely it may be that one is ever right in thinking that somebody is in a particular state but wrong that the individual in that state is oneself, such error is not impossible. One might, perhaps, have such strongly empathetic access to another's state that one becomes confused and thinks that it is oneself that is in that state. Such strong immunity to error through misidentification does not obtain.

    Still, something like this immunity to error does hold. I can be mistaken about whether the conscious state I am in is pain, for example, and perhaps even about whether I am the individual that is actually in pain. But, if I think I am in pain, it seems that I cannot be wrong about whether it is I that I think is in pain. Similarly, if I think that I believe or desire something, perhaps I cannot be mistaken about whether it is I that I think has that belief or desire.

    This differs from the immunity to error that Shoemaker and others have described. On that stronger sort of immunity, if I think I am in pain and am right that somebody is, I cannot go wrong specifically about whether it is I who is in pain. On the weaker type of immunity described here, all I am immune to error about in such a case is who it is that I think is in pain. I shall refer to this as thin immunity.

    Plainly there are ways in which we can misidentify ourselves. Not only might I misidentify myself by wrongly taking a reflection I see in a mirror to be a reflection of myself; I might wrongly take myself to be Napoleon, perhaps because of delusions of grandeur, perhaps because of evidence about Napoleon that seems to lead to me. And, if I do misidentify myself as the person in the mirror or as Napoleon, I will also in that way misidentify the person who has the pains, thoughts, desires, and feelings that I am conscious of myself as having.

    How can we capture the specific kind of misidentification that thin immunity rules out? What distinguishes such thin immunity to error through misidentification from the ways in which we plainly can and sometimes do misidentify ourselves? The error I cannot make is to think, when I have a conscious pain, for example, that the individual that has that pain is somebody distinct from me, but I can be mistaken about just who it is that I am. How can we capture this distinction? And how can we explain the thin immunity that we do actually have?

    When I have a conscious pain, I am conscious of myself as being in pain. If I think I am Napoleon, I will think that Napoleon is in pain. What I cannot go wrong about is simply whether it is I that I think is in pain, that is, whether it is I whom I am conscious of as being in pain. The question is what this amounts to. The earlier discussion of essentially indexical self-reference gives us a clue. When I refer to myself as such, I refer to the individual I could also describe as doing the referring. Similarly, the error of misidentification I cannot make when I am conscious of myself as being in pain is to think that the individual I think is in pain is distinct from the individual who is conscious of somebody's being in pain. We can readily explain this in terms of the HOT model. The mental analogue of the word 'I' refers to whatever individual thinks a thought in which that mental analogue occurs. So every HOT tacitly represents its target state as belonging to the individual that thinks that very HOT.12

    Suppose, then, that I have a conscious pain. Since the pain is conscious, I also have a HOT to the effect that I am in pain, and that HOT tacitly represents the pain as belonging to the very individual that thinks that HOT, itself. The HOT in virtue of which my pain is conscious in effect represents the pain as belonging to the very individual who thinks that HOT. But the individual who has that HOT is thereby the individual for whom the pain is conscious; so one cannot in that respect misidentify the individual that seems to be in pain. I am conscious of a single individual as being in pain and also, in effect, as the individual who is conscious of being in pain. The reason I cannot misidentify the individual I take to be in pain as being anybody other than me is simply that my being conscious of myself as being in pain involves my identifying the individual I take to be in pain as the very individual who takes somebody to be in pain.

    These considerations clarify the connection between thin immunity to error through misidentification and essentially indexical self-reference. The word 'I' and its mental analogue refer to the speaker or thinker, thereby forging a connection between an intentional content in which 'I' or its mental analogue figures and the mental attitude held toward that content or the illocutionary act that verbally expresses it. My essentially indexical use of 'I' or its mental analogue to refer to myself relies on that connection; my thought or assertion that I am F in effect represents as being F the very individual that thinks that thought or makes that assertion. Similarly, because I identify the individual I take to be in pain as the individual who takes somebody to be in pain, no error of misidentification is in that respect possible.

    This explanation leaves open all manner of mundane misidentification, such as my taking myself to be Napoleon or the person in the mirror. All that I am immune to error about is whether the individual I take to be in pain is me, that is, whether it is the very individual that takes somebody to be in pain. My immunity is simply a reflection of the way the first-person pronoun and its mental analogue operate. But however they operate, it is plain that I can mistakenly think that I am Napoleon or the individual in the mirror.

    The stronger immunity to error that Shoemaker describes trades on the special way we have access to being in conscious states. Since that access is a matter of noninferential HOTs, which like any other thoughts can be mistaken, such strong immunity fails. Thin immunity, by contrast, is wholly independent of the processes by which HOTs arise. No matter how one comes to have a HOT, one is disposed to identify the individual it represents as being in a particular state as the very individual that thinks that HOT. And this amounts to representing the individual that is in the target state as being oneself. One cannot go wrong about its being oneself that one represents as being in the state.

    Shoemaker writes that "[m]y use of the word 'I' as the subject of [such] statement[s as that I feel pain or see a canary] is not due to my having identified as myself something" to which I think the relevant predicate applies (9). But one is disposed to identify the individual one takes to do these things as the individual who takes somebody to do them. So one is, after all, disposed at least in this thin way to identify as oneself the individual one takes to feel pain or see a canary.

    Shoemaker offers the mirror case as an example of a thought about oneself that is not immune to error through misidentification; I see somebody's reflection in a mirror and mistakenly think that I am that person. But so far as thin immunity goes, this case is completely parallel to that of conscious pain. If I take the person in the mirror to be me, I can be wrong about whether the reflection is actually of me. But even here I cannot be wrong about who it is that I take the reflection to be of; I take the reflection to be of the very individual who is doing the taking. In just that way, I could be wrong about whether the person I take to be in pain is Napoleon, but I cannot be wrong about whether the person I take to be in pain is the individual doing the taking.

    The contrast Shoemaker sees between cases in which immunity does and does not occur echoes Wittgenstein's idea that, though I could be mistaken about whether a particular broken arm is mine, I cannot be mistaken about whether a particular pain is mine. He writes: "To ask 'are you sure that it's you who have pains?' would be nonsensical" (67, emphasis original).

    But the cases do not differ in any significant way. The error at issue for the strong immunity Shoemaker and Wittgenstein see may be less likely for cases of conscious pain than for broken arms, but it is not impossible. And the cases are parallel in respect of thin immunity. I can be wrong about who the individual is whose arm is broken or who is in pain. But just as I cannot be wrong about whether the individual who takes somebody to be in pain is the individual taken to be in pain, so I cannot be wrong about whether the person who takes somebody's arm to be broken is the person taken to have a broken arm. Thin immunity results simply from the way 'I' and its mental analogue function in our first-person thoughts and speech acts.

    As noted earlier, claims of privileged access to conscious states tend to rely on the view that a state's being conscious is an intrinsic property of the state itself. But the way one is conscious of a mental state could misrepresent that state even if it were intrinsic to that state. Misrepresentation need not be external to the thing being represented. But the idea that being conscious of our mental states is intrinsic to those states does shed light on why, even in respect of thin immunity, the mirror and broken-arm cases seem to be different from the pain case.

    Suppose I am in pain and the pain is conscious. Its being conscious consists in my being conscious of myself as being in pain. And suppose that the pain's being conscious is intrinsic to the pain itself. It follows that my being conscious of myself as being in pain will then be intrinsic to the pain itself. But my being conscious of myself as being in pain means that the individual I am conscious of as being in pain is the very individual who is conscious of somebody as being in pain. So it will then be intrinsic simply to my being in pain that I cannot, in that respect, be mistaken about the individual I am conscious of as being in pain.

    When I take myself to be Napoleon or to be reflected in a mirror or to have a broken arm, the individual I take to have these properties is again the individual doing the taking. But now an apparent difference from the pain case emerges. Even if one is conscious of oneself as being Napoleon or having a broken arm, one's being thus conscious plainly is not intrinsic to those conditions. So, if a pain's being conscious were intrinsic to the pain, the Napoleon and broken-arm cases would indeed differ from the case of a mental state.

    The idea that a mental state's being conscious is intrinsic to that state even helps explain the initial plausibility of the stronger immunity that Shoemaker describes. If being conscious of a mental state were intrinsic to that state, it would be intrinsic simply to being in a conscious state that one is disposed to regard the individual that takes somebody to be in that state as the individual that is in it. Since it would be intrinsic to one's being in a conscious state that it is oneself that one takes to be in that state, there would be no process that leads to one's identifying oneself as the individual that is in the state in question. So there would be no identifying process that could go wrong, and so no way for one to be right in thinking that somebody is in a conscious state but wrong that it is oneself who is in the state.

    It is subjectively tempting to see consciousness as an intrinsic feature of our mental states precisely because we are seldom aware, from a first-person point of view, of anything that mediates between conscious states and our consciousness of them. To sustain this subjective impression, however, one would need some way of individuating mental states on which our awareness of a conscious state is not distinct from the state itself. It is hard to see what means of individuation would have this result while remaining independent of the subjective impression under consideration.

    Indeed, the way we do actually individuate intentional states seems to deliver the opposite result. No intentional state can have two distinct types of mental attitude, such as the attitudes of mental affirmation and doubt. And having a doubt about something does not result in one's being conscious of that thing. So when a case of doubting is conscious, our consciousness of that doubting must exhibit an assertoric mental attitude. And that means that the consciousness of the doubting is distinct from the doubting itself.

    There are other more general reasons to reject the idea that being conscious of a mental state is intrinsic to that state. States may be conscious at one time but not another, as with minor aches or pains that last all day but are not always conscious. If a state's being conscious were intrinsic to that state, it would be puzzling how a particular state could at one moment be conscious but not at another. And, if consciousness of mental states is not intrinsic to those states, there is no reason to hold that the stronger immunity Shoemaker describes obtains, nor that the thin immunity that holds for conscious states differs from that which holds for any other self-ascription.

    What about an account, then, of the way we are conscious of ourselves that appeals simply to HOTs? It seemed possible at the outset that the phenomena of essentially indexical self-reference and immunity to error through misidentification might undermine a HOT account of our consciousness of the self. That was because both immunity and the essential indexical seemed to presuppose our having some special access to the self independent of whatever thoughts we have about the self.

    Essentially indexical self-reference, we saw, presents no such problem. Essentially indexical first-person thoughts refer to oneself in effect as the individual doing the referring; they refer to oneself as an individual one is disposed to pick out as the individual that thinks the essentially indexical thought. So having such thoughts requires no access to the self beyond that which we have by having thoughts about the self.

    A similar conclusion holds for thin immunity to error through misidentification. It might seem that such immunity requires a privileged type of access to the self; how, otherwise, could we be immune to error in referring to the self? But the error to which we actually are immune is wholly trivial. It is the error, when I take myself to have some property, F, of supposing that the individual taken to be F is distinct from the individual that takes somebody to be F.
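
    To put the point schematically (a formalization of my own, not the author's notation; read T(x, p) as 'x thinks that p', and 'self' as the token-reflexive term 'the individual thinking this very thought'):

    \[
    T\bigl(x,\ F(\mathrm{self})\bigr), \qquad \mathrm{self} \;=_{\mathrm{def}}\; \text{the thinker of this very thought}
    \]

    The thinker can be wrong about whether F(self) holds; but the identity self = x cannot fail, since 'self' picks out whoever is doing the thinking. That triviality is all the thin immunity amounts to.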

    We are immune to error through misidentification of the self. But that immunity presupposes no special access we have to the self. It is simply that one cannot, when one thinks that one is, oneself, F, be wrong about whether it is the individual doing the thinking that one takes to be F. No independent, antecedent access to the self figures here, only a particular form of self-reference. So nothing about immunity to error through misidentification blocks a HOT account of the way we are conscious of ourselves.13

     

    V. Identifying Oneself and Self-Consciousness

    The idea that immunity to error through misidentification occurs in connection only with the self-ascribing of conscious states, but not in connection with broken arms and being Napoleon, may seem to support the Cartesian view that we identify ourselves in the first instance as mental beings. Why else would misidentifying oneself be impossible only in connection with mental self-ascriptions?

    This picture is unfounded. For one thing, the error we actually are immune to does not, in any substantive way, involve the identifying of anything. It is simply the error, when I take myself to have the property, F, of thinking that the individual taken to be F is distinct from the individual that takes somebody to be F. It is perhaps even a bit misleading to describe the error we are thus immune to as one of misidentification.

    Such immunity fails to support the Cartesian conclusion for other reasons as well. Since the immunity applies not simply to the ascribing to oneself of conscious states, but to the self-ascribing of nonmental properties as well, it cannot sustain the idea that we identify ourselves primarily in mental terms. How, then, do we identify ourselves? And how does our identifying ourselves fit with the way our thoughts about ourselves involve essentially indexical self-reference and are thinly immune to error through misidentification?

    There is no single way in which we identify ourselves. We rely on a large and heterogeneous collection of factors, ranging from considerations that are highly individual to others that are fleeting and mundane. We appeal to location in time and place, current situation, bodily features, the current and past contents of our mental lives, and various psychological characteristics and propensities, indeed, to all the properties we believe ourselves to have. Contrary to the picture conjured up by essentially indexical self-reference and immunity to error, the factors that figure in our identifying ourselves are theoretically uninteresting and have relatively little systematic connection among themselves.

    Each of these factors reflects some belief one has about oneself, such as what one's name is, where one lives, what one's physical dimensions and location are, and what the current contents are of one's consciousness. And all these beliefs self-ascribe properties by making essentially indexical self-reference, and they are all immune to error through misidentification in the thin way described above. Such thin immunity and indexical self-reference figure in the way we identify ourselves not because they provide or presuppose any special access to the self, but only because the first-person beliefs on which all self-identification relies exhibit such thin immunity and indexical self-reference.

    We can be in error about any of these beliefs about ourselves; indeed, we could be in error about most of them. One could be wrong about all one's personal history, background, and current circumstances. One might even be mistaken about one's location relative to other objects if, for example, one lacked relevant sensory input14 or the input one had was suitably distorted.

    One can be wrong even about what conscious states one is currently in. One may take oneself, in a distinctively first-person way, to have beliefs and preferences that one does not have, and one may in the same way be wrong even about the sensations or emotions one is conscious of oneself as having.

    We identify ourselves by reference to batteries of descriptions which our first-person thoughts and beliefs ascribe to ourselves. And we can successfully distinguish ourselves from others even if many of those descriptions are inaccurate. What, then, if all identifying thoughts and beliefs of the sorts just described are erroneous? Can one identify oneself even then?

    Arguably not. We distinguish ourselves from other beings, just as we distinguish among all other individuals, on the basis of various properties. So, if one's beliefs about what properties one has are all incorrect, one has nothing accurate to go on. Our incorrect self-ascriptions would still make essentially indexical self-reference and would still exhibit the thin immunity to error described above. But these features of one's self-ascriptions would not help in identifying oneself, since they tell us only that the individual thought to satisfy a particular description is the individual doing the thinking. Essentially indexical self-reference and immunity to error through misidentification cannot short-circuit the need to appeal to substantive properties in identifying oneself.

    Even if I am conscious of myself as having some thought or desire or as being in pain, I may nonetheless lack that thought, desire, or pain. Consciousness of our mental states is not infallible. But can I also, in such a case, be wrong even about whether it seems that I have that thought, desire, or pain? Perhaps there is, after all, a kind of privileged access we have not in connection with whether our consciousness is correct, but in connection with what our consciousness is. Perhaps appearance and reality coincide at least in that respect; perhaps, then, it makes no sense to talk about the way things seem to seem to one, as against simply the way things seem.15 If so, perhaps our conscious states do provide an unimpeachable basis for identifying ourselves, not because we are infallible about the states we are conscious of ourselves as being in, but because we are infallible about whether we are conscious of being in them.

    But no infallibility arises here either. One may be wrong about any mental state one takes oneself to be in. Being conscious of oneself as being in some mental state is itself, however, just another higher-order mental state; on the HOT hypothesis, it is a thought one has that one is, oneself, in that state. So one could be wrong about whether one has the HOT in question. Such higher-order infallibility fares no better than infallibility about first-order states, and can provide no certain foundation for identifying oneself.

    A HOT account of the way we are conscious of ourselves relies on a subset of the essentially indexical first-person thoughts we have about ourselves, namely, our HOTs. But the HOTs an individual has are about the same individual as all the other essentially indexical first-person thoughts that that individual has. In this respect, if not in others, the pronoun 'I' and its mental analogue function somewhat as proper names do. When we use a proper name, we take each token to refer to the same individual as other tokens do unless countervailing information overrides that default. Similarly with 'I' and its mental analogue; we assume that each token refers to the same thing unless something interferes with that ordinary default assumption.16

    The upshot is that we take all our essentially indexical first-person thoughts and beliefs to refer to one and the same individual. The way we are conscious of ourselves is therefore but one aspect of the way we identify ourselves as individuals. We are in the first instance conscious of ourselves by way of our HOTs, but we identify ourselves by way of all our essentially indexical first-person thoughts and beliefs.

    David M. Rosenthal
    City University of New York Graduate Center
    Philosophy and Cognitive Science
    Internet: dro@ruccs.rutgers.edu
    May 18, 2004

     

    NOTES

    1 David Hume, A Treatise of Human Nature [1739], ed. L. A. Selby-Bigge, Oxford: Clarendon Press, 1888, 2nd edn., revised by P. H. Nidditch, 1978, Book I, Part IV, sec. vi, p. 252.

    2 One can capture the way qualitative states represent things by seeing each mental quality as representing the perceptible physical property that occupies a corresponding place in the quality space of the relevant perceptual modality. I have defended this view in "The Colors and Shapes of Visual Experiences," in Consciousness and Intentionality: Models and Modalities of Attribution, ed. Denis Fisette, Dordrecht: Kluwer Academic Publishers, 1999, pp. 95-118, and "Sensory Quality and the Relocation Story," Philosophical Topics, 26, 1 and 2 (Spring and Fall 1999): 321-350.

    3 Pace the representationalist or intentionalist views of writers such as D. M. Armstrong, The Nature of Mind, St. Lucia, Queensland: University of Queensland Press, 1980, ch. 9; William G. Lycan, Consciousness, Cambridge, Massachusetts: MIT Press/Bradford Books, 1987, ch. 8; Gilbert Harman, "Explaining Objective Color in terms of Subjective Reactions," Philosophical Issues: Perception, 7 (1996): 1-17; and Alex Byrne, "Intentionalism Defended," The Philosophical Review, 110, 2 (April 2001): 199-240. I discuss representationalism in "Introspection and Self-Interpretation," Philosophical Topics 28, 2 (Fall 2000): 201-233.

    4 Kant first used the term 'inner sense' (K.d.r.V., A22/B37); Locke used the similar 'internal Sense' (Essay, II, i, 4). The view is currently championed by D. M. Armstrong, "What is Consciousness?", in David Armstrong, The Nature of Mind, St. Lucia, Queensland: University of Queensland Press, 1980, 55-67, and by William G. Lycan, Consciousness and Experience, Cambridge, Massachusetts: MIT Press/Bradford Books, 1996, ch. 2, pp. 13-43, and "The Superiority of HOP to HOT," forthcoming in Higher-Order Theories of Consciousness, ed. Rocco J. Gennaro, John Benjamins Publishers, 2004.

    5 See, e.g., Rosenthal, "Two Concepts of Consciousness," Philosophical Studies 49, 3 (May 1986): 329-359; "Thinking that One Thinks," in Consciousness: Psychological and Philosophical Essays, ed. Martin Davies and Glyn W. Humphreys, Oxford: Basil Blackwell, 1993, 197-223; "A Theory of Consciousness," in The Nature of Consciousness: Philosophical Debates, ed. Ned Block, Owen Flanagan, and Güven Güzeldere, Cambridge, MA: MIT Press, 1997, 729-753; and "Explaining Consciousness," in Philosophy of Mind: Contemporary and Classical Readings, ed. David J. Chalmers, New York: Oxford University Press, 2002, 406-421. The first two will appear, along with other papers that develop the HOT model, in Consciousness and Mind, Oxford: Clarendon Press, forthcoming 2004.

    6 John Perry, "The Problem of the Essential Indexical," Noûs XIII, 1 (March 1979): 3-21. For reference to oneself as such, see P. T. Geach, "On Beliefs about Oneself," Analysis 18, 1 (October 1957): 23-24; A. N. Prior, "On Spurious Egocentricity," Philosophy, XLII, 162 (October 1967): 326-335; Hector-Neri Castañeda, "On the Logic of Attributions of Self-Knowledge to Others," The Journal of Philosophy, LXV, 15 (August 8, 1968): 439-456; G. E. M. Anscombe, "The First Person," in Mind and Language, ed. Samuel Guttenplan, Oxford: Oxford University Press, 1975, pp. 45-65; David Lewis, "Attitudes De Dicto and De Se," The Philosophical Review LXXXVIII, 4 (October 1979): 513-543; and Roderick M. Chisholm, The First Person, Minneapolis: University of Minnesota Press, 1981, chs. 3 and 4.

    7 See, e.g., Sydney Shoemaker, "Self-Reference and Self-Awareness," The Journal of Philosophy LXV, 19 (October 3, 1968): 555-567, reprinted with slight revisions in Shoemaker, Identity, Cause, and Mind: Philosophical Essays, Cambridge: Cambridge University Press, 1984, pp. 6-18 (page references are to the reprinted version); Roderick M. Chisholm, Person and Object: A Metaphysical Study, La Salle, Illinois: Open Court Publishing Company, 1976, ch. 1, §5, and The First Person, ch. 3, esp. pp. 29-32; and David Lewis, "Attitudes De Dicto and De Se."

    8 David Kaplan, "Demonstratives," in Themes From Kaplan, ed. Joseph Almog, John Perry, and Howard Wettstein, with the assistance of Ingrid Deiwiks and Edward N. Zalta, New York: Oxford University Press, 1989, 481-563, pp. 505-507.

    9 See, e.g., Thomas Nagel: "The idea of moving from appearance to reality seems to make no sense" in the case of conscious experiences ("What Is It Like to Be a Bat?" The Philosophical Review LXXXIII, 4 [October 1974]: 435-450; reprinted in Mortal Questions, Cambridge: Cambridge University Press, 1979, 165-179, p. 174). If every mental state is identical with some physical state, then every mental state has both mental and physical properties. This thesis that we have exhaustive access to our own mental states therefore applies only to the mental properties.

    10 The classic study is Richard E. Nisbett and Timothy DeCamp Wilson, "Telling More Than We Can Know: Verbal Reports on Mental Processes," Psychological Review LXXXIV, 3 (May 1977): 231-259. A useful review of the extensive literature that follows that study occurs in Peter A. White, "Knowing More than We Can Tell: 'Introspective Access' and Causal Report Accuracy 10 Years Later," British Journal of Psychology, 79, 1 (February 1988): 13-45.

    11 Sydney Shoemaker, "Self-Reference and Self-Awareness," p. 8. Shoemaker urges that such immunity applies even when I take myself to be performing some action. See also Ludwig Wittgenstein, The Blue and Brown Books, Oxford: Basil Blackwell, 1958, 2nd edn. 1969, pp. 66-7, Gareth Evans, "Demonstrative Identification," in Evans, Varieties of Reference, ed. John McDowell, Oxford: Clarendon Press, 1982, pp. 142-266, José Luis Bermúdez, The Paradox of Self-Consciousness, Cambridge, Massachusetts: MIT/Bradford, 1998, chs. 1 and 6, and Roblin Meeks, Identifying the First Person, unpublished Ph.D. dissertation, The City University of New York Graduate Center, 2003, chs. II-IV.

    12 Tacitly, once again, because the content of a HOT never explicitly describes the individual as the thinker of the HOT.

    13 I discuss both essentially indexical self-reference and the thin sort of immunity to error through misidentification in "Unity of Consciousness and the Self," Proceedings of the Aristotelian Society 103, 3 (April 2003): 325-352, in explaining the apparent unity of our conscious states. Here, unlike in the earlier discussion, I stress the difference between thin immunity and the kind of immunity described by Shoemaker.

    14 As G. E. M. Anscombe imagines; see "The First Person," in Mind and Language: Wolfson College Lectures 1974, ed. Samuel Guttenplan, Oxford: Oxford University Press, 1975, 45-65, p. 58.

    15 See Daniel C. Dennett's contention that it makes no sense to talk "about the way things actually, objectively seem to you even if they don't seem to seem that way to you" (Consciousness Explained, Boston: Little, Brown and Company, 1991, p. 132).

    16 As presumably happens with so-called Multiple Personality Disorder (now more often known as Dissociative Identity Disorder).


  10. Video: Ken Wilber enters various meditative states during an EEG Neurofeedback session

    You may have already seen this, but it is new to me. Ken Wilber narrates a video of his own experience using neurofeedback while navigating various meditative states.

    From YouTube:

    'We asked Ken to do a short 10-minute commentary on these various meditative states and the corresponding brain-wave patterns that are shown on the EEG machine in the video. Ken enters four meditative states (nirvikalpa closed eyes, nirvikalpa open eyes, sahaj, and mantra-savikalpa), each of which has a very distinctive brain-wave pattern. In his commentary, Ken emphasizes that the patterns shown on this machine may or may not be typical, but they do emphasize that profound consciousness states can be evoked at will, and these show immediate correlation in brain-wave patterns.'

  11. MorrisonDance - A dance performance using BrainMaster Neurofeedback

     

    MorrisonDance, a dance troupe founded by choreographer Sarah Morrison, teamed up with engineers from NASA's Glenn Research Center to create a performance featuring the live brainwaves of dancers using the BrainMaster.

     

    This is actually from September 2005, but just in case you missed it (like I did!):



  12. High School Student's Biofeedback Research Project Accepted at Conference

    Nancy Leo, a senior at Arizona's Hamilton High School, had her science fair research project selected as one of 18 projects to be presented at the Sixth World Congress on Stress in Austria.

     

    Leo's study focuses on HRV (heart rate variability) and salivary cortisol changes that occur during laboratory stressors while subjects use biofeedback. She found that an increase in stress resulted in lower heart rate variability and higher salivary cortisol. She also found that the stress response could be changed significantly with biofeedback.
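
    For readers curious how heart rate variability is actually quantified: one standard time-domain index is RMSSD, the root mean square of successive differences between heartbeats. The sketch below is my own minimal illustration in Python (the function name and the sample RR intervals are hypothetical, and this is not Leo's actual analysis); it shows how faster, more uniform heartbeats under stress translate into a lower HRV score:

    ```python
    import math

    def rmssd(rr_intervals_ms):
        """Root mean square of successive differences (RMSSD), a common
        time-domain HRV index; higher values mean more beat-to-beat variability."""
        diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
        return math.sqrt(sum(d * d for d in diffs) / len(diffs))

    # Hypothetical RR intervals: milliseconds between successive heartbeats.
    relaxed = [850, 910, 870, 930, 860, 900, 880]    # slower, more variable
    stressed = [700, 705, 698, 702, 699, 703, 701]   # faster, more uniform

    print(f"relaxed RMSSD:  {rmssd(relaxed):.1f} ms")   # higher HRV
    print(f"stressed RMSSD: {rmssd(stressed):.1f} ms")  # lower HRV under stress
    ```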

    More from Arizona's East Valley Tribune here
