In the first chapter, we met one formulation of the Hard Problem: it is the challenge of explaining subjective experience.
“The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience.”
In this book, I will be arguing that phenomenal consciousness – what Chalmers calls “subjective experience” – is a cognitive construct used for managing attention, and that qualia are perceptual concepts usually accessed by dedicated cognitive modules, such that their functional nature is obscured. We contemplate consciousness and qualia from within a cognitive system that has its representational commitments wired in and taken for granted, so their essential nature cannot be fully appreciated or reproduced through any mechanistic or computational approach to the underlying substrate.
That makes these concepts difficult to think about; it does not make them fundamentally mysterious.
But this sort of argument – or indeed, any functional theory of consciousness or qualia – can be expected to meet one particular, overwhelming objection, the one made famous by Chalmers. It is the objection, leveled at any functional account of consciousness, that it is only addressing one of the Easy Problems of Consciousness, not the Hard Problem.
So what would it mean to address the Hard Problem?
And here’s where Jackson’s story of Mary helps. The Easy Problems of colour are the ones Mary learns about in her laboratory. The Hard Problem of colour is finding a way to let her derive redness from black-and-white textbooks. We’ve already seen how that is both impossible and misguided.
So what would it mean to address the Hard Problem for consciousness-the-container?
And here we find that the challenge laid down by Chalmers is inherently vague – so vague that we cannot even conclude, as we did with Mary, that the challenge is impossible.
The Hard Problem has been expressed in many ways, but in an oft-quoted line from his original paper, Chalmers says it is “the question of how physical processes in the brain give rise to subjective experience.” The question touches on a legitimate field of inquiry, but it is already a little less innocent than the one considered earlier. The words “give rise to” jump out as an early warning that something is off with Chalmers’ conceptual framing. From the alternate perspective that I’ll explore in this book, Chalmers’ question crosses representational contexts in an unnecessarily confusing way. (How do curved lines of ink on paper give rise to a thought bubble? How do the generic V4 neurons in Mary’s textbook give rise to an actual example of an occipital lobe V4 circuit imagining redness?)
Giving rise to something implies a causal, generative process for which we have no evidence, and it assumes a dualistic perspective in which subjective experience is something separate from, and produced by, the merely physical brain. It also suggests a slow, indirect or unexpected side effect rather than something quick, deliberate or automatic – it is not an expression we would use for a relationship that is constitutive or representational, and so it discounts the possibility that subjective experience and its underlying physical processes are merely different views of the same thing. Gone is any sense that we just want to understand a subjective aspect of cognition; Chalmers is now asking us to explain something outside and distinct from cognition – without first establishing that it is appropriate to view consciousness this way.
Perhaps it might seem that, in objecting to this expression, I am picking out a minor editing mishap in Chalmers’ paper, a single poor word choice. But no. This dualistic perspective is the dominant formulation in Chalmers’ paper, which is sprinkled with twenty-odd uses of similar phrases, referring to experience arising in the brain or to the brain giving rise to experience.
By assuming a dualist perspective, Chalmers is creating space to ask an ill-posed question about the relation between two things (which are really one thing), and the trap is set. We won’t be able to find a causal process linking those two things because they are different views of the same thing. (There is no causal process linking Mary’s textbook V4 neurons with the cognitive effects of her actual V4 neurons, and we know she can study one without deriving the other.) Given the explanatory gap between theory and example, we will easily be able to imagine the mechanical substrate of the physical brain without accessing what that substrate represents, which will in turn seem to justify our belief in the ghost of consciousness.
The idea of consciousness as something non-functional and non-cognitive that arises from the physical brain is completely counter to conventional science and it creates an even deeper mystery of how the physical and non-physical have anything to do with each other. If consciousness is not an intrinsic aspect of any functional process within the physical brain, but something separate that the brain’s physical processes have given rise to, how does it subsequently get sensed by the brain, to be puzzled about in our cognition and talked about in the physical world?
The physical brain has been thoroughly explored, and although not all of its functions have been explained, its individual components are now well known, and it is very clear that there are no suitable sensors for detecting mysterious essences. Every neuron behaves just as conventional neuroscience would expect. The physical components of the brain follow known physics, and if there are non-physical elements, they have no appreciable effects.
If we rephrased Chalmers’ question in theory-neutral terms, it would be something like this: what is the relationship between physical processes in the brain and subjective experience? That’s an improvement, but it is still vague, lacking a clear definition of the explanatory target. My answer, as outlined above, is that subjective experience is a cognitive construct used for managing attention; the relationship between physical brain processes and the construct of consciousness is essentially representational, analogous to the relationship between ink lines and thought bubbles. One does not give rise to the other. They are two views of the same entity.
But rephrasing the core question is not enough to escape the trap Chalmers has laid. There is something self-defeating built into the defining feature of Chalmers’ approach, his famous division of the mystery of consciousness into the Easy and Hard Problems, which, once accepted, provides grounds for dismissing any functional explanation of consciousness.
“To explain a cognitive function, we need only specify a mechanism that can perform the function. The methods of cognitive science are well-suited for this sort of explanation, and so are well-suited to the easy problems of consciousness. By contrast, the hard problem is hard precisely because it is not a problem about the performance of functions. The problem persists even when the performance of all the relevant functions is explained.
“In the last few years, a number of works have addressed the problems of consciousness within the framework of cognitive science and neuroscience. This might suggest that the analysis above is faulty, but in fact a close examination of the relevant work only lends the analysis further support. When we investigate just which aspects of consciousness these studies are aimed at, and which aspects they end up explaining, we find that the ultimate target of explanation is always one of the easy problems.”
(Chalmers, Facing Up to the Problem of Consciousness, 1995. Emphasis added.)
I will refer to this objection as Chalmers’ Razor. It is the practice of dismissing all functional accounts of consciousness as only targeting the Easy Problems. The Razor potentially offers a test for any candidate theory of consciousness, but it also provides a conceptual framework that guarantees every new theory will fail the test. For a theory to fail to address the Hard Problem, and therefore succumb to Chalmers’ Razor, the theory only has to be explicable in functional terms that can be followed while imagining phenomenal consciousness to be absent.
And it is here that we begin to see that Chalmers’ innocent central inquiry about the nature of subjective experience comes with a complex web of dubious assumptions. Even if a correct functional account of phenomenal consciousness were delivered to us by time-travelling neuroscientists, there is no reason to believe it could survive Chalmers’ test.
Suppose we point at a cognitive structure within our own heads, using our private introspective mechanisms. We could compare that cognitive entity with a scientist’s description of the low-level mechanical details of what we are pointing at, and we could simply refuse to believe that we were reading an account of the very thing we were pointing at in our heads. Contemplating the mechanical details, on the one hand, and actually engaging in the simple act of pointing, on the other, will not feel the same – for reasons we can understand. One view uses the neural structure with its representational meaning intact; the other view mentions the disembodied structure with the meaning disengaged. These two views shouldn’t feel the same; it would be bizarre if they did. But the feel, for Chalmers, is the very thing we’re trying to explain, so the functional theory can be rejected on the ground that it leaves out the feel.
In the previous chapter, Mary’s black-and-white textbooks left out the redness. Applying Chalmers’ Razor, her textbooks only tackle the Easy Problems of colour. They don’t tackle the (impossible, misguided) challenge of deriving colour qualia from neural substrate. Chalmers is similarly dismissive of functional accounts of consciousness – but with all the added problems that consciousness is invisible, nebulous, and plays a cognitive role that can be difficult to appreciate.
The argument that culminates in the Razor begins with seductive plausibility. Chalmers’ Easy/Hard divide rests on the idea that wherever we can describe what function is being fulfilled by some aspect of cognition, we can envisage a reductive explanation. (A reductive explanation, in this context, is not an attempt to reduce the importance of some phenomenon, but merely an attempt to explain it in terms of its constituent components.) Explaining the function of interest merely consists of breaking down the relevant mechanisms, considering the workings of the relevant parts, and showing that those mechanisms support that function. At that stage, the explanation is complete. We can envisage this approach yielding an explanation for almost any describable cognitive function.
By contrast, Chalmers argues, the difficult, central challenge in understanding consciousness, the Hard Problem, is understanding why it is like something to be inside a human brain, and that’s not a question that seems directly amenable to functional explanation, because it is not obvious what function is being fulfilled. Whatever physical processes take place in the brain, he suggests, we could imagine those processes taking place in the experiential dark, producing behaviour without any accompanying inner feel. So why is there an inner feel? Subjective experience seems tangential to any objectively describable brain function because we can always imagine the biological substrate of consciousness as a machine, and the ghost of consciousness as an optional extra.
To illustrate the point more vividly, Chalmers popularised the idea of philosophical zombies – hypothetical creatures that are physical duplicates of us, but without any subjective experience, so that they’re dead inside. He could have made the same point with advanced robots that were exact functional duplicates of humans without being subjectively conscious, so-called silicon zombies. His Hard Problem essentially asks: why aren’t we zombies? Science can get us from atoms to neurons to brains to behaviour… But what adds the inner light of consciousness? This imaginative exercise suggests, to Chalmers, that subjective experience somehow exists in addition to and outside of the causal, functional flow of cognition, putting it completely beyond the reach of conventional science.
Essentially, this is an appeal to intuition, but it is often presented as a formal argument: If the functional processes of the brain necessarily entailed subjective experiences, or were identical to those experiences, it would be illogical to imagine those physical processes without the experience. But it seems logical to imagine beings (zombies) that keep the functional attributes of humans while lacking subjective experience, so subjective experience must be something over and above the brain’s functions.
I don’t think the formal argument achieves anything over and above the original intuitive exercise, but let’s grant that many people can entertain the idea of zombies without an overt sense of contradiction. For Chalmers, this chain of reasoning means that no functional account of cognition can resolve the mystery of consciousness, and we will need a radical overhaul of how we think about reality. The search for the explanation of consciousness is directed away from where neuroscience would naturally look – in the cognitive functions of the brain – and instead towards mysterious hidden domains and strange essences that have no functional basis and don’t actually influence the brain’s functional activities.
Intuitively, Chalmers’ appeal to the intrinsic ineffability of consciousness might seem fair. Viewed from within, consciousness doesn’t seem to consist of parts, or to have any structure that can be deconstructed. Its functions are not immediately obvious. We can’t seem to get to its subjective nature by considering its objective substrate – we can’t quite capture the feel of awareness by talking about attentional mechanisms, for instance, just as Mary couldn’t capture the flavour of redness by studying circuit diagrams of the occipital cortex. Ditto for pains and sounds and tastes and scents. All of these inherently non-mechanical and apparently intractable features of our mental experience seem intuitively obvious to us, and they all seem incompatible with the idea of the brain as a purely physical organ, and so they add to the difficulty of understanding consciousness.
This book will explore functional explanations for the mysteriousness of these phenomena, but first it should be noted that a major part of the difficulty arises from the vague way the central question has been formulated. Dismissing a theory as not addressing the Hard Problem only makes sense if there is a clear concept of what it would mean to address the Hard Problem. But what’s the question at issue? Chalmers asks: why are brain processes accompanied by experience? Until we define “experience”, that’s not even a real question; it amounts to waving a hand towards our heads and saying that we are puzzled. It’s saying we can imagine the deadness of zombies coexisting with physical brains, so we don’t have to understand physical brains in detail to know that physicalism is fundamentally dead.
Chalmers’ Razor (and the associated Zombie Argument) relies on our ability to imagine experience as absent, but until we know what it is that we are imagining as absent, it proves nothing. The explanatory target of the Hard Problem is said to be subjective experience. But what is that, and what do we need to explain about it, exactly? The implied challenge laid down by Chalmers incorporates all the difficulties Mary faced in deriving redness, but adds the vagueness of an undefined target concept.
Subjective experience and phenomenal consciousness are sometimes weakly defined by Nagel’s phrase – there is “something it is like” to be a conscious creature – but that turns the Hard Problem into the observation that something, which we have not defined, is like something else, also not defined, and like it in some unspecified way. But what is like what? And in what way?
Try this: formulate a sentence about consciousness that you know to be true. I propose that we can in principle explain why you think what you just thought. If your sentence is indeed true and well formulated (and not itself suffering from fatal vagueness), we can know why it is true. In fact, as we shall see, even Chalmers admits that we can probably explain everything that has ever been said about consciousness, including everything he has ever said about it. So what is it that we can’t explain? No one knows. Or, if they do know, they could not put that knowledge into words without those words immediately falling under the domain of the Easy Problems. Anyone who formulated the issue in more precise terms would immediately face Chalmers’ Razor.
Often, in place of a definition, discussions of the Hard Problem are accompanied by an appeal to introspective pointing. Consciousness is like this thing in our heads we’re pointing at. But that’s a tautology: consciousness is like itself.
Consciousness is also like a fanciful, ethereal cloud of thought, as already noted, but I believe we can explain why that is the case.
Chalmers usually expresses his central puzzlement as a “why” question: Why is the performance of these functions accompanied by experience? This implies that we already know what we’re talking about. And we sort of do; we’re talking about what we’re pointing at. But we sort of don’t, too, because Chalmers’ framing necessarily discounts the cognitive process of pointing. He wants to discuss an entity that we point at with our cognition, but in such a way that we can’t consider the functional entity our cognition actually engages with. Until we define what we’re pointing at, the central issue is not really a “why” question; it’s a “what” question. What exactly are we pointing at, when we ask science to explain “subjective experience”? The term is so vague it might as well be a pronoun: why do we have this in our heads? If anyone clearly defined “subjective experience” in any terms other than inarticulate pointing, there is no good reason to think science would have trouble explaining why that well-defined entity exists. If we looked inside Chalmers’ brain with a fancy scanner to see what he is pointing at, for instance, whenever he accuses a theory of not explaining that feeling, or that thing, or this, we could explain why the target of his pointing exists. We could trace its embryological history, consider its evolutionary value, describe its utility in our daily lives, and even explain its apparent resistance to functional explanation, as we did with Mary. But Chalmers doesn’t really mean that thing, the thing his cognition actually points at, because that would fall under the Easy heading. He means some other thing, vaguely imagined to be beyond the reach of science.
The vagueness I’m complaining about is not something that can be addressed within Chalmers’ framing – he has had three decades to pin down what he’s talking about, and no definition has emerged. The target of the Hard Problem is intrinsically vague, and it must remain so for the purposes of Chalmers’ argument. But supposedly that’s okay, within the framing he has set up, because his argument is that consciousness is fundamentally resistant to definition and analysis. For him, the vagueness is not a temporary embarrassment, but a core feature.
In theory, we could address the vagueness by considering the actual physical identity of what we’re pointing at, in cognitive terms, when we introspectively point at phenomenal consciousness or qualia, but that would inevitably lead us to a functional description, which would then fail to have the right feel. Whatever we pointed at, Chalmers’ Razor would apply: we could imagine 1) the target of our pointing (described functionally) as mysteriously lacking 2) the target of our pointing (held up as a live example). There are complex cognitive reasons that those two conceptual entities are difficult to reconcile, but there is also a very simple explanation – a description is only ever a description. A description of any mechanism of consciousness – such as an attention schema – leaves out the actual entity it describes. (A description of a cat is not a cat. An ever more accurate description of a cat is still not a cat, and in fact mysteriously lacks the cat.)
If something important can be conceptualised as lacking itself, then we can expect to be puzzled. We can expect functional accounts to run aground, and we can expect science to balk at following us down the same rabbit-hole. This is not a failure of science, but a failure of the conceptual framework.
The core conceptual error in the Hard Problem is really that simple, but it will take me a whole book to consider all the places that this fundamental error can hide, and all the subtle ways that it can twist our thinking and take on the guise of legitimacy.
Chalmers’ Razor can be applied to consciousness itself, but it is usually applied to qualia, as well: the vivid redness of red, the painfulness of pain, the scent of cinnamon, and so on. For most people contemplating the Hard Problem, it is with qualia that the failure of objective science to account for the nature of mental states is at its most obvious, and where the Razor is usually applied with the greatest rhetorical force. Zombies, for instance, are sometimes described as beings who lack qualia. This emphasis on qualia is understandable. When we imagine a merely functional account of qualia, we seem to lose the colour and other sensations; we are left with a vivid mental contrast showing what is missing in the functional account. We don’t have to be fans of the Hard Problem to feel the force of this intuition. With qualia, it is difficult to get a grip on what it is that has been left out of the functional account, because qualia are usually not well defined, and we have to wrestle with the apparent paradox that a full physical description of reality has left out something that is ultimately part of the physical world. As before, Jackson’s example is helpful. A description of V4 neurons in the occipital cortex, even a description that has been internalised and is now represented by neurons, leaves out actual V4 neurons in the occipital cortex.
For those who want to understand consciousness (and qualia) in scientific terms, Chalmers’ Razor is devastatingly disempowering. Even the idea that attentional mechanisms are involved in consciousness has been anticipated by Chalmers, and discounted. The control of the focus of attention is explicitly included in his extensive set of Easy Problems, as listed in the first chapter (one of the bullet points was “the focus of attention”).
For Chalmers, the Hard Problem goes beyond the explanation of any of these overt cognitive functions, which collectively cover most of the important issues, so that it is exceedingly difficult to conceive of what a scientific explanation of consciousness might look like or what it could possibly be about.
The first half of Chalmers’ Facing-up paper seems to leave room for the idea that subjective experience might play some hitherto unsuspected, subtle cognitive role. The Easy Problems are directly susceptible to standard techniques; the Hard Problem might merely require a less direct approach or non-standard techniques. By the end of the very same paper, though, he has strengthened this claim substantially, arguing that what we are trying to explain is essentially non-functional, something separate from and unexplained by any conceivable function of cognition, including all reductive explanations that future neuroscience might ever offer. This is not a throw-away line, a speculative musing, but an oft-repeated theme built up with increasing confidence throughout the paper.
“What makes the hard problem hard and almost unique is that it goes beyond problems about the performance of functions.”
“To explain experience, we need a new approach. The usual explanatory methods of cognitive science and neuroscience do not suffice. These methods have been developed precisely to explain the performance of cognitive functions, and they do a good job of it. But as these methods stand, they are only equipped to explain the performance of functions. When it comes to the hard problem, the standard approach has nothing to say.”
“We have seen that there are systematic reasons why the usual methods of cognitive science and neuroscience fail to account for conscious experience. These are simply the wrong sort of methods: nothing that they give to us can yield an explanation.”
“There is an explanatory gap (a term due to Levine 1983) between the functions and experience, and we need an explanatory bridge to cross it. A mere account of the functions stays on one side of the gap, so the materials for the bridge must be found elsewhere.”
“The problem of consciousness is puzzling in an entirely different way. An analysis of the problem shows us that conscious experience is just not the kind of thing that a wholly reductive account could succeed in explaining.”
“At the end of the day, the same criticism applies to any purely physical account of consciousness. For any physical process we specify there will be an unanswered question: Why should this process give rise to experience? Given any such process, it is conceptually coherent that it could be instantiated in the absence of experience. It follows that no mere account of the physical process will tell us why experience arises. The emergence of experience goes beyond what can be derived from physical theory.”
(Chalmers, Facing Up to the Problem of Consciousness, 1995, emphasis added.)
This non-functional view of consciousness goes well beyond a simple desire to understand the subjective aspect of experience. Chalmers explicitly states that consciousness cannot possibly be accounted for within conventional science, which is necessarily limited to functional explanations of physical processes, all of which are susceptible to what I’ve called Chalmers’ Razor. He decides that consciousness must be irreducible, something that has to be merely accepted as a fundamental component of the universe, like the existence of mass or electric charge. But because consciousness has no effects that can be approached through a functional analysis of the physical brain, it must be separate from physical reality, connected to the physical brain by as yet unknown psycho-physical laws.
Soon after he released his paper, Chalmers wrote a book-length defence of this view (The Conscious Mind, 1996), which is widely regarded as one of the most influential attacks on physicalism ever written. That book and the associated framework represent the primary target of this one, but Chalmers was not the first philosopher to suggest that functional explanations of consciousness were impossible. He was merely the most effective salesman for this particular view.
Similar issues had been raised at least as far back as the 1700s. Writing in 1714, the philosopher Leibniz invited his readers to imagine that the brain was a giant thinking machine operating on purely mechanical principles. He then challenged his readers to imagine the machine blown up to such proportions that one could walk around inside it, as one might tour a mill. Leibniz thought it obvious that, on examining such a machine, one would only find parts pushing and pulling in various combinations and patterns, and nothing in all of that mechanical activity accounting for the subjective quality of perception. “One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles.”
This thought experiment anticipates the Zombie Argument and the associated Hard Problem, suggesting it is likely that, if Chalmers had not codified the Easy/Hard distinction, someone else would have expressed the same idea. When he wrote his landmark paper in 1995, philosophers were already pondering the so-called “explanatory gap” between brain and mind (Nagel 1974, Levine 1983), and the philosopher Ned Block drew a very similar distinction in the same year, when he began to talk of “access consciousness” and “phenomenal consciousness” (Block, 1995).
Chalmers’ formulation soon came to dominate the debate, though, in part because he had offered a pithy name and argued his case with eloquence, and in part because he had focused attention on something genuinely puzzling: functional accounts of subjective experience seem to leave out something; we find them unsatisfying.
Historically, before this issue came to the fore, most philosophers had referred to the Mind-Body Problem, which was the general difficulty of explaining how the mind could be instantiated in a brain. This earlier problem covered the full spectrum of what would later be known as the Easy and Hard Problems of Consciousness, but it also included other aspects of cognition that were not even related to consciousness. Wet squishy stuff seemed a poor substrate for any form of cognition, conscious or otherwise, and for many it was the brain’s cognitive performance that seemed difficult to explain, not the fact that thinking felt like something from the inside.
There has been considerable progress in neuroscience over recent decades, so cognition is now considered a puzzle, rather than a mystery, in the sense that we know the right ways to think about it, and we know the sorts of answers we’re looking for. It no longer seems miraculous that a wet grey mass in our heads is capable of, say, inventing calculus. We know in broad outline how neurons process information, and how that information can be distributed across a neural network. We know how neural networks can extract features, and how they can apply arbitrarily complex transformations to multidimensional informational spaces, warping or carving them via processes that can either be trainable, through feedback, or self-directed. We know how the information gets into the nervous system through sensors, how neurons can subsequently represent entities that are thoroughly unlike neurons, and how the resulting representations can be used to model the world, to make predictions and to control behaviour. We have detailed accounts of the modular nature of cognition, and we can imagine neural networks sustaining most of the individual cognitive functions associated with being awake, and responsive, and able to report on our own internal states. We know that cognition is a constant wrestle between bottom-up and top-down processes, and we know that competition plays a role at every level: between neurons, between perceptual hypotheses, between computational routines vying for limited resources, between predictive models, and between behavioural choices.
We also know that there is a risk of combinatorial explosion, with every perceptual and cognitive feature and every behavioural decision in the brain capable of being divided and reconsidered and divided again, necessitating an attentional system that constantly prunes the decision tree and makes cognition manageable. That attentional system will prove to be a major focus of this book and, as noted earlier, I support the view that consciousness itself is primarily a schema for managing attention.
There are still many important details missing, but we have a sense of the sorts of processes that might be at work for most important cognitive functions, and the rapid advance of machine intelligence based on artificial neural networks is proof that we are broadly on the right track. Nothing on Chalmers’ Easy list seems likely to prove insoluble.
But, off to one side of this impressive series of successes, there is still the awkward issue that troubled Chalmers: why doesn’t all of that functional activity just do its thing, quietly, in the dark? Why aren’t we zombies?
Chalmers’ Razor – not known by that name, because the term is my own – has had a deep impact on the field of consciousness studies. Chalmers’ radical anti-physicalist conclusion in response to the Razor has been rejected by most philosophers and neuroscientists, but the Hard Problem is nonetheless depicted in most of the consciousness literature as an unsolved mystery. In the emerging domain of AI, a similar intuition leads many to imagine that something very special and mysterious would need to be done to give machines consciousness. By default, machines are cast as inherently zombie-like, and it is often assumed that, as long as they are essentially algorithmic at their core, they can’t possibly be conscious.
The problem with this approach – in both humans and AIs – is that it confuses the ability to focus on low-level mechanisms with the question of what high-level features are present. The ability to ignore a high-level feature is not a fair test of whether that high-level feature is present. The ability to consider a representational structure in mention-mode does not necessarily give insights into what might be represented by the same structure in use-mode.
Some neuroscientists consider the Hard Problem to be a philosophical puzzle that should be ignored, because it is vague, it is not amenable to the methods of science, and it does not suggest any testable hypotheses. It nonetheless hangs over the field of consciousness research. Even for those who want to explain consciousness within conventional neuroscience, the Hard Problem is often considered to be a deep challenge, perhaps one that requires a radical conceptual overhaul or a major empirical breakthrough. Any scientific theory of consciousness anyone could ever come up with will necessarily be functional, and it will involve physical neurons following physicochemical processes, and hence it will seem susceptible to Chalmers’ Razor. Any book on consciousness that failed to mention the Hard Problem could be expected to receive harsh reviews complaining that it had not addressed the most important issue in consciousness research – but Chalmers’ framing implies that no functional account could ever address this issue, so progress is effectively smothered.
In this climate, some researchers have settled for the unambitious aim of looking for the “neural correlates of consciousness”, either because they share Chalmers’ belief that they won’t ever be able to explain consciousness, or because they can’t find a way to escape his framing. Some might simply want to get on with the science without having their work judged against the stubbornly vague target of Chalmers’ concerns, so they adopt this unambitious wording defensively, to avoid a pointless discussion.
Some neuroscientists, I suspect – even those who consider themselves physicalists and might say that they reject the Hard Problem – are nonetheless in the grip of the very same intuitions that inspire Chalmers. Such folk are at risk of unwittingly applying his Razor to every theory around them, without ever forming the explicit thought that all functional theories are destined to fail. One at a time, every functional theory that anyone can think of seems to be imaginable without subjective experience, fails to capture a following, and is added to the growing pile of failures. This is even worse than endorsing the Razor explicitly, because the associated intuitions exert their effects without being examined, and hence without restraint. Some potentially useful theories are likely to be stillborn, abandoned before they are developed, merely because the scientists who would have pursued them can already see that they are vulnerable to a vague, unarticulated version of the Razor. Theories of colour vision are colourless, theories of pain, painless, theories of taste, tasteless, and theories of awareness seem dead on the page, leaving out the very thing they are trying to explain.
But this defeatist approach is at odds with everything else we know about the brain. Ultimately, it makes no sense for consciousness to be non-functional: it evolved, we know we have it, we can talk about it, it is sensitive to physical influences, including drugs and disease and sensory inputs, and it produces obvious physical effects, such as my fingers typing these words right now. We know when we are experiencing qualia – we know this in our physical brains, and we can talk about qualia in rich detail. All of those features locate consciousness and qualia in the physical world and suggest that they play a functional role, which should be amenable to analysis, but somehow Chalmers has convinced a large number of scientists and philosophers that this obvious conclusion is problematic.
In this book, I will be arguing that the idea of a mysterious non-functional consciousness standing outside cognition is no more than a bad idea given faux legitimacy by Chalmers’ persuasive powers, a few misleading intuitions, and faulty conceptual framing. The Hard Problem takes a misleading pre-theoretical intuition about a mysterious, functionless presence inhabiting the physical brain or bubbling up from its mechanisms, and it gives that intuition the appearance of legitimacy, providing a barrier to understanding that must now be torn down if we are to make any further progress.
Theories do not have to have a certain feel to be correct. Theories of heat do not have to be hot. Theories of brain-states representing redness do not have to themselves evoke a sensation of redness. Theories of awareness do not have to make us feel the awareness that they describe.
Asking theories to achieve this is not only a recipe for frustration; it is fundamentally wrong-headed.