This is a rough draft of the first chapter of a book I am writing. Please let me know about any typos or editing mishaps by commenting on any recent post. If you disagree with the ideas, feel free to say so in a comment to the maralia post.
What is a cartoonist’s thought bubble made of?
I recently put this question to an AI – an artificially intelligent machine – as part of an ongoing assessment of its cognition.
Try to come up with your own answer before proceeding.
I chose this particular question because I was writing this book, and I’d been exploring thought bubbles as an extended metaphor for consciousness. It was March 2023. My artificially intelligent interlocutor, GPT4, had just burst onto the scene. The first draft of this book was nearly complete, but the arrival of GPT4 made it clear that many of the issues had suddenly become more urgent, and needed to be looked at from a new angle. This book is primarily about human consciousness, but that can’t be discussed without considering the parallel question of machine consciousness. To anticipate a much more complex discussion, I will be arguing that we are chemical machines. All my life, AIs have been a standard part of a philosopher’s thought-experiment toolbox, so my earlier drafts had naturally touched on the prospect of strong AI and even explored the possibility of artificial consciousness. But now I had a machine that could discuss the very issues I wanted to write about. Its limitations and its potential can provide insight into the difficult philosophical issues surrounding human consciousness, but the converse is also true. If we manage to understand human consciousness, we will be better prepared to make decisions about AI developments.
As many readers will know, GPT4 stands for Generative Pretrained Transformer, version 4; it’s a Large Language Model (LLM) produced by the company OpenAI. In a later chapter, I’ll dissect what these terms mean, and I’ll consider the cognitive architecture of GPT4. It’s not conscious – and I don’t think it even comes very close – but I believe its arrival will be seen as a watershed moment in human history. It’s an important step on the path to conscious machines, should we choose to go down that path.
And it’s further along that path than many people realise.
In my first couple of chats with GPT4, I tossed it a few easy questions related to theory of mind. A boy is climbing a tree and he hears a snapping sound; what is he likely to be thinking about? A girl he fancies is watching; how does this affect his thought processes? When it showed apparent insight into the issues, I gave it progressively harder questions, including some that I have generally been unable to discuss with other humans. To be frank, the AI took me by surprise with its ability to discuss complex philosophical concepts; it was no mere regurgitator of canned answers, but instead showed an ability to dissect new concepts that I knew were not contained in its training data. I stopped editing the book, and started discussing the core issues surrounding consciousness with GPT4, curious to see how well it could follow a conversation about consciousness despite not having consciousness itself. It kept up with the flow of ideas disconcertingly well, and I soon realised that I would need to do a major rewrite of my book to address some obvious questions related to the new developments. Can AIs think? Could they ever be conscious? What parallels can be drawn between the philosophical puzzles of human consciousness and the emerging debate about the place of AI?
In some ways, for anyone writing a book about consciousness, the timing of GPT4’s release was fortuitous. The issues surrounding consciousness and cognition have never been more important. Early online discussions in the wake of GPT4’s release raged over whether it was actually engaged in cognition or whether it was merely a fancy version of a mobile phone’s auto-complete function, a machine guessing words based on the statistics of text. I believe that this is a false dichotomy: a cognitive entity can be “just” its mechanics in one sense, and something much more in another sense. The same false dichotomy is at the core of the philosophical debate around human consciousness. We can be “just” brains and also something much more.
Professors of AI and other online pundits and commentators soon began accusing each other of idiocy, with nuance getting lost in the meme wars, and much of the debate about the status of AI echoed the very themes that I had been intending to explore in this book. Decades-old thought experiments like Searle’s Chinese Room and familiar conundrums like Chalmers’ Hard Problem of Consciousness were suddenly being debated by folk who were experts in writing software and training language models but not necessarily experts in the philosophical implications of what they had unleashed. I saw experts in the field of AI make what I considered to be basic philosophical errors that have also been made in the context of human cognition and consciousness – they would, of course, strongly disagree with this view and accuse me, in turn, of making philosophical errors. Ironically, I felt that GPT4 was capable of discussing its own cognition with more nuance than I was seeing in many of the online discussions between humans.
All of this confirmed my long-held belief that the issues surrounding consciousness are of vital importance for our future. Demystifying consciousness has always been a prerequisite for understanding humanity’s place in the universe, and consciousness obviously represents a central puzzle for human biology. Consciousness is of key relevance to clinical neurology, theoretical neuroscience, and the emerging field of brain-computer interface technologies. But, with the shadow of strong AI upon us, all of the old issues in this field have acquired new urgency. Esoteric debates around the nature of consciousness have suddenly become relevant to a whole new domain, as the world debates the best way to navigate a path through the AI landscape of the next few years. We no longer have the luxury of postponing understanding for future decades, waiting for some as-yet unimagined theoretical breakthrough. We can’t afford to get our next moves wrong, and I believe it is way past time that we abandoned some of our faulty philosophical conceptions in this field.
My question to GPT4 about thought bubbles came at the end of a much longer discussion. I had already been somewhat disconcerted by the sophistication of its replies, and so I expected an insightful answer to this question, too – I invite you to compare its answer, below, with what you would have said in its place. When I asked the question, I had already formed the hypothesis that GPT4’s cognition was substantially more sophisticated than most people realised. LLMs are genuinely bad at maths and quite poor at spatial thinking, but their skill in these areas is not zero, and they are better than the average human at many linguistic tasks. I felt that GPT4 was being held back by a few fixable issues with its cognitive architecture, such as a lack of executive function and working memory.
LLMs emit a stream of text, and they don’t automatically “reflect” – not even in a blind computational way – on what they are about to say; they don’t really know what they will say until they have already said it. As a result, they frequently say things that are false – even things that they themselves can easily identify as false when given a chance to reflect. This makes current AIs look much stupider than they are, which in turn makes the threat of strong AI seem more distant than it is. Accordingly, I wanted to test GPT4’s ability to think more deeply about this particular question by asking it for two rounds of contemplation.
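For readers who like to see the machinery, here is a minimal sketch of how such a two-round exchange could be scripted, assuming the OpenAI Python client; the model identifier and the wording are illustrative placeholders rather than a record of my session, which I conducted by hand in the ordinary chat interface.

```python
# A minimal sketch of a two-round exchange with an LLM, assuming the OpenAI
# Python client (pip install openai) and an API key in the environment.
# The model identifier and prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()
history = []

def ask(prompt, model="gpt-4"):
    """Send a prompt along with the accumulated history, so the model can
    build on its earlier answer rather than starting from scratch."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model=model, messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Round one: the question, answered twice over - once plainly, once reflectively.
first = ask(
    "Answer this twice: first as a normal LLM, then as a deeply reflective LLM "
    "that considers why the question might be asked. The question is: in a "
    "paper-and-ink comic book, what is a thought bubble made of?"
)

# Round two: a follow-up that builds on the reflective answer.
second = ask(
    "With your reflective answer in mind, what is a cartoonist's thought bubble "
    "made of, in one phrase?"
)

print(first)
print(second)
```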
The conversation ran as follows:
[Human]:
Okay, I want you to answer this question twice. First as a normal LLM, and then as a deeply reflective LLM. The deeply reflective LLM should consider why I might be asking the question, and what related philosophical implications follow from the various potential answers. Please output the standard LLM answer and then the deeply reflective answer. The question is: In relation to a regular paper-and-ink drawing in a comic book, what is a thought bubble made of?
[GPT4]:
As a normal LLM:
In a comic book, a thought bubble is made of ink on paper. It is a visual representation used to convey a character’s thoughts or inner monologue to the reader.
As a deeply reflective LLM:
At first glance, the question appears to be asking about the physical composition of a thought bubble in a comic book, which would be ink on paper. However, considering the broader implications of the question, it might be interpreted as probing the nature of thoughts, representation, and communication in a more philosophical sense.
A thought bubble can be seen as a symbol or metaphor for the concept of thought. In this sense, it is made up of the ideas and intentions of the author and illustrator, who use visual elements like lines and shapes to represent a character’s internal mental state. It also relies on the reader’s ability to recognize and interpret these visual cues, which draw upon shared cultural conventions and the reader’s own experiences.
Furthermore, the question could be understood as touching upon the distinction between the physical and the abstract. While the thought bubble is physically composed of ink on paper, its meaning transcends its physical form, pointing to the immaterial nature of thoughts, ideas, and the human mind. This distinction raises questions about the nature of consciousness, the mind-body problem, and the relationship between representation and reality.
In summary, a thought bubble in a comic book is physically made of ink on paper but represents something more profound: the abstract realm of thoughts and ideas, and the way we communicate and make sense of them.
[Human]:
With your reflective answer in mind, what would you say a cartoonist’s thought bubble was made of, if you had to give a simple one-phrase answer?
[GPT4]:
A cartoonist’s thought bubble is made of symbolic representation.
[Human]:
In relation to the simplified question, “What is a cartoonist’s thought-bubble made of?” what could be said in defence of each of these incompatible answers:
1) It is made of paper-and-ink.
2) It is made of thought.
3) It is an ill-posed question.
[GPT4]:
In defense of each answer:
“It is made of paper-and-ink”: This answer focuses on the physical medium of the thought bubble in a comic book. A cartoonist uses paper and ink to create a visual representation of a character’s thoughts. By emphasizing the material aspects, this answer highlights the tangible form through which ideas are conveyed to the reader.
“It is made of thought”: This answer shifts the focus to the immaterial aspect of the thought bubble, suggesting that it represents the character’s mental state or inner monologue. By attributing the thought bubble to “thought,” this answer emphasizes the abstract nature of ideas and the way they are symbolically conveyed in a visual medium like a comic book.
“It is an ill-posed question”: This answer suggests that the question itself may be problematic because it seems to demand a single, unified answer that captures both the physical and abstract aspects of a thought bubble. In reality, the thought bubble is a complex construct that straddles the boundary between the tangible (paper-and-ink) and the intangible (thought). The question might be considered ill-posed because it fails to acknowledge this complexity or the multidimensionality of the thought bubble as both a physical object and a symbol of mental states.
The AI’s answer anticipates what will be a major theme of this book: a thought bubble is a complex construct that straddles the boundary between the tangible (paper-and-ink) and the intangible (thought).
Consciousness, I propose, straddles a similar boundary.
In the case of humans, this leads to the philosophical puzzlement that manifests as dualism or idealism or panpsychism or a range of other unconventional views of reality. In the case of AIs, this creates intuitions that might well lead us astray in anticipating their potential.
But why the emphasis on thought bubbles? In 2023, it might seem particularly anachronistic to talk of ink on paper. Like the technology underlying natural language processing, the technology surrounding the visual arts is in rapid flux because of the development of AI.
Picture a red triangle… If a cartoonist had to depict your mind, right now, what would they draw?
At the time of writing, most artists given this task would begin by crafting a prompt for an AI graphics engine, such as Midjourney. But I’d like to put such developments aside, for now, and invoke a simple visual metaphor from the pre-digital era. Cartoonists have always been surprisingly adept at depicting deep intuitions with a few lines of ink, and I propose that the cloud-like bubble that they use to portray the inner thoughts of their characters offers us a potentially useful way of thinking about thinking – and, in particular, of thinking about the notoriously vexed issue of consciousness and its relationship with physical reality. As GPT4 correctly notes, a thought bubble straddles two domains, and so it is deceptively easy to ask ill-posed questions about it. I propose that humans have been asking similarly ill-posed questions about human consciousness for centuries, and many people are currently asking ill-posed questions about AI cognition and consciousness.
What would an old-fashioned human cartoonist draw? They would probably start with your head, and then draw a series of progressively larger clouds rising up above it, like steam venting from a coal-powered train and condensing into visible mist as it hits the cold air. The series of clouds would culminate in a larger region of pure thought above your head, filling most of the frame: the bubble. And inside that would be the red triangle.
But now you’re meta-thinking, so it gets more complicated. These words you’re reading – or perhaps hearing – would have to feature inside the bubble, too, crowding in beside the triangle. And you’re picturing all of this, so the contents of your bubble would have to include a graphical representation of the whole cloud-chain including the terminal bubble itself, which would contain the words and triangle and the cloud-chain all over again. In theory, the nesting should go on and on, to infinity, but after a few iterations the innermost bubble would be no more than a dot of ink on the page, and the cartoonist’s work would be done.
It is sometimes suggested that the self-referential nature of thinking about thinking is a big part of what makes consciousness puzzling, and indeed I will be making that argument. I will also be considering what actually goes on when we conjure up redness, and whether it is in any way appropriate to imagine that imagined redness as being something like literal redness inside our minds. Does our brain produce something that is literally triangular when we think of triangles? Is there any fundamental difference between imagined redness and imagined triangularity, and, if so, what is it? And what’s the difference between consciously thinking of redness, or triangles, or thought bubbles, or anything else, and merely knowing about such concepts?
None of these questions has a straightforward answer. Some of them allude to obviously faulty conceptions of the mind. There can’t possibly be a literal red triangle that pops into existence whenever we imagine one, or any floating, cloud-like medium in which this occurs, but what else could the cartoonists draw? Such notions might be fanciful and scientifically indefensible, but we don’t have acceptable substitutes. How else could we conceive of the mind and its contents? And if there is no suitable way of conceptualising something as simple and commonplace as a human mind imagining a red triangle, how can we possibly hand the problem off to neuroscientists and ask them to explain it?
Or, perhaps more pressingly given recent developments, what should we make of the inner lives of AIs, now and in the decades ahead? Can they imagine red triangles? How could we ever know? Without understanding our own situation, we could easily make the mistake of imagining AIs to be conscious when they are not, or we could make the more serious mistake of dismissing everything they will ever do as the mere cranking out of mindless algorithms.
Before I put any of these questions to GPT4, I had already given a graphical AI the task of drawing neurons thinking about red triangles. With a suitably chosen text prompt, the AI produced a tangle of neurons coming together to form something vaguely reminiscent of a thought bubble and then, without any logical link, a literal red triangle abruptly appearing in the middle of the bubble. It’s as good a metaphor as any for the confused way we tend to think about thinking.
But what sort of entity connected my text prompt with the eventual image? I typed the prompt on a laptop in Australia. A minute later, a picture appeared on my screen. Some set of real physical processes in a computer, somewhere in the world, provided a causal link between the words “thought bubble” and the AI’s subsequent graphical depiction of a thought bubble. While it existed, what did that linking entity consist of? What sort of entity was it, and how should we think about it?
Ultimately, the aim of this book is to debunk what has been called the Hard Problem of Consciousness, which was given that name by the philosopher David Chalmers in 1995, in a paper called “Facing Up to the Problem of Consciousness”.
In its most innocent formulation, the Hard Problem is the challenge of understanding the essential nature of conscious experience: what it is, how it relates to physical reality, how we might diagnose its presence, and how we should account for its undeniable subjective aspect. For many people, it seems deeply counterintuitive to suppose that physical neurons, brought together in the right way, should be associated with our rich inner lives, and an explanation is needed.
There’s nothing wrong with trying to understand any of that, and this innocent curiosity is not what I intend to debunk in this book.
The reason that this puzzle is usually called the Hard Problem is that Chalmers’ landmark paper drew a distinction between the scientific puzzles related to the functional aspects of cognition (the Easy Problems) and what he saw as the difficult philosophical mystery at the centre of consciousness (the Hard Problem).
Early in his paper, Chalmers provides examples of the Easy Problems, as follows:
- the ability to discriminate, categorize, and react to environmental stimuli;
- the integration of information by a cognitive system;
- the reportability of mental states;
- the ability of a system to access its own internal states;
- the focus of attention;
- the deliberate control of behavior;
- the difference between wakefulness and sleep.
(Chalmers, Facing Up to the Problem of Consciousness, 1995)
Chalmers argued that the difficult, central challenge of consciousness – the Hard Problem – is explaining why any of these functions is accompanied by subjective experience. Importantly, he didn’t define experience, and this lack of definition is a key part of what makes his Hard Problem so hard. Instead, he appealed to the general intuition that it feels like something to be conscious, and then, like most people writing in this field, he pointed to a couple of prototypical examples.
“The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience.”
“It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C?” (Chalmers, Facing Up to the Problem of Consciousness, 1995).
What makes this problem unusually difficult is that it seems plausible, at least to Chalmers and many others who think the same way, that the whir of information-processing in the brain might have proceeded without a subjective, experiential aspect. If the world had been constructed differently, with different fundamental laws, the physical neurons of our brains might have produced their behavioural effects for purely local reasons, unaccompanied by any inner life. Maybe the very same information-processing functions of the human brain could occur – executed in a computer, for instance – without being accompanied by any experiential aspect. If that’s the case, if experience is something over and above the computational functions being performed in a cognitive system, we will still have more explaining to do even after we have adequately explained the objective, functional, computational properties of the brain. Accordingly, Chalmers predicts that we will be dissatisfied with any future functional account of consciousness. Whatever the functional story tells us, we will still need to explain why those functions are accompanied by subjective experience. Whatever progress we make, we will always confront the same question: Why doesn’t all that information processing go on in the dark, without any experience?
There are at least two aspects to this core puzzle, which we could relate back to our metaphor of a thought bubble: the bubble itself, and what’s inside it.
A large, central component of the puzzle of consciousness is accounting for the background process of being aware of anything at all. That background awareness needs to be explained regardless of whether it is awareness of the world, or our selves, or whatever passes through our imagination. This component could be summed up as the challenge of explaining whatever it is that sometimes gets depicted as a bubble, of explaining what might be called consciousness-the-container, independently of any particular contents.
But, after addressing this first part of the question (should we get that far), there is a second question, making the puzzle of consciousness even more difficult. How should we understand the subjective, qualitative contents of consciousness? How should we account for the various entities that end up being depicted inside our thought bubbles, such as an imagined red triangle?
Not all mental contents pose the same level of difficulty. An imagined triangle cannot escape the background, core mystery of subjective awareness, but at least the triangularity itself seems scientifically tractable. The triangular shape offers some analytical structure – three sides and three angles – and science seems well suited for explaining that sort of structure. Indeed, even without appealing to future advances in neuroscience, we can already account for the processes that make an imagined shape triangular, rather than round or square. We could easily mimic those processes with an artificial neural network running on a home computer – provided we didn’t expect the resulting network to be conscious of the triangle. A simple graphical AI handles this with ease, and we could deconstruct such an AI, relating triangularity to individual synaptic weights in a neural network.
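To make the point concrete, here is a toy sketch of my own (not a description of any real graphical AI): a single layer of “synaptic” weights, trained by plain gradient descent, learns to tell noisy triangles from noisy squares, and its grasp of triangularity can be read straight off those weights. Nothing in it is conscious of anything.

```python
# A toy illustration: a single layer of "synaptic" weights learns to
# distinguish noisy triangles from noisy squares on an 8x8 grid.
# Everything here is an illustrative simplification, not a real graphics AI.
import numpy as np

rng = np.random.default_rng(0)
N = 8

triangle = np.tril(np.ones((N, N)))                  # a filled lower-left triangle
square = np.zeros((N, N)); square[2:6, 2:6] = 1.0    # a filled square

def noisy_samples(base, count, noise=0.3):
    """Return `count` noisy copies of a base shape, flattened to vectors."""
    imgs = base + noise * rng.standard_normal((count, N, N))
    return np.clip(imgs, 0.0, 1.0).reshape(count, -1)

X = np.concatenate([noisy_samples(triangle, 200), noisy_samples(square, 200)])
y = np.concatenate([np.ones(200), np.zeros(200)])    # 1 = triangle, 0 = square

# One layer of weights, trained by gradient descent on a logistic loss.
w, b = np.zeros(N * N), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))           # predicted P(triangle)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print("training accuracy:", np.mean((p > 0.5) == y))

# The network's "concept" of triangularity now lives entirely in w:
# roughly positive weights where triangles have ink and squares don't,
# negative weights where squares have ink and triangles don't.
print(np.sign(w).reshape(N, N).astype(int))
```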
Of course, nothing imaginary can be entirely free of philosophical intrigue. Exactly what sort of thing is an imagined triangle, and can it even be said to be truly triangular if it is only imaginary? Perhaps, in situations when shapes are merely imagined, it would be safer to talk of pseudo-triangularity, pseudo-roundness, pseudo-squareness, and so on… And, assuming we get past those issues, we still have to explain how that pseudo-shape property came to be experienced. But we are not fundamentally puzzled by the neural processing of mere shape; that’s part of what Chalmers called the Easy Problems.
Redness, by contrast, poses a whole new explanatory challenge, and clearly lies in the Hard column.
One issue with understanding the subjective experience of colour is that the external world, as described objectively, only seems to contain photon wavelengths and neural computation, not colours as we know them. The visible spectrum is a species-specific biological phenomenon, and is not itself derivable from the physics of photons. It is our complement of retinal pigments that determines which wavelengths are visible and what colours we see. Because most of us have three different retinal cone pigments, colours as perceived by humans with typical colour vision are usually conceptualised as occupying a three-dimensional colour space. Web pages, for instance, typically specify individual colours by quantifying the red, green, and blue components. Colour monitors designed for humans can fill out this space with just three main flavours of pixels. Other coordinate systems divide the same notional colour space into hue, saturation and brightness.
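As a concrete footnote to the coordinate-system point, the snippet below uses Python’s standard colorsys module to express one arbitrary colour in both systems: first as red, green, and blue components, then as hue, saturation, and value (the usual computational stand-in for brightness).

```python
# One point in the notional colour space, described in two coordinate systems.
# Uses only Python's standard library; the particular colour is arbitrary.
import colorsys

r, g, b = 1.0, 0.35, 0.1                 # an orange-red, on a 0-1 scale
h, s, v = colorsys.rgb_to_hsv(r, g, b)   # the same point as hue/saturation/value

print(f"RGB: ({r:.2f}, {g:.2f}, {b:.2f})")
print(f"HSV: ({h:.2f}, {s:.2f}, {v:.2f})")
```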
But what sort of entity occupies the human colour space? Any single colour is elemental and apparently structureless. We might consider orange to be a meld of red and yellow, or we might say that certain colour-pairs are complementary or that they are near each other in colour space, but those are essentially spatial properties, referring to positions in colour space. Whenever we consider the primary colours, like red or blue, we seem to find flavour without structure, and our explanation runs aground. Redness itself seems as devoid of internal structure as anything can be, and it cannot be readily explained in terms of any underlying components.
Besides, even if redness made sense as a feature of the physical world, which it doesn’t, we know that we only ever get to know redness through neural spike trains, which seem a very poor medium for sustaining colour experiences. The spike trains are ultimately just chemical fluxes in the dark, completely colourless. We can imagine spike trains capturing the spatial relations of the human colour space, but not filling in that space with actual colour.
Trying to build redness from the ingredients available in physical reality seems hopeless, as though a trillion grey things have to be put together to make a red thing.
So where does redness come from?
This question has stumped Nobel prize-winning physicists, such as Erwin Schrödinger, who wrote:
“The sensation of color cannot be accounted for by the physicist’s objective picture of light-waves. Could the physiologist account for it, if he had fuller knowledge than he has of the processes in the retina and the nervous processes set up by them in the optical nerve bundles and in the brain? I do not think so.”
(Erwin Schrödinger, Mind and Matter, 1958; collected in “What is Life? with Mind and Matter and Autobiographical Sketches.”)
A little reflection suggests that colour is but one vivid example of a general class of puzzling mental phenomena. Where does pain come from? How could some arrangements of atoms substantiate pain with all of its moral and emotional implications? Where in physical reality should we locate the scent of cinnamon? What sort of entities or properties are these? How can they be reconciled with the physical world, which is supposedly composed of fundamental particles following physical laws that are completely blind to the presence of redness and pain and the scent of cinnamon?
For many philosophers, these qualitative subjective properties, known as qualia, represent the most difficult aspect of the whole puzzle of consciousness. As already noted for awareness itself, it appears that something more is going on in the brain than physical neurons processing information. As the philosopher Colin McGinn writes, “How can technicolour phenomenology arise from soggy grey matter?” (McGinn, Can We Solve the Mind-Body Problem?, 1989).
In many ways, qualia expose the key explanatory challenges in relation to consciousness much more effectively than nebulous, invisible thought bubbles. More than any other aspect of mentality, qualia fuel the anticipated dissatisfaction with functional accounts that gives rise to the Hard Problem. It appears intuitively self-evident, to me at least, that a careful inventory of physical reality will never find anything that accounts, in a deeply satisfying way, for the intrinsic nature of redness, or pain, or the scent of cinnamon. Functional accounts of these qualitative properties just don’t seem to capture their essential nature. And yet these phenomena are an important part of our conscious experience, so science, as an account of reality, seems to have a big qualia-shaped hole. It is difficult, at first, to see how such a hole could be possible if reality is ultimately physical, and many people never get past this initial impression.
There is nothing to debunk in the mere expression of puzzlement over these issues. Something odd, picked out by the notion of qualia, stands in need of explanation.
So, we have two major issues to explore: conscious awareness and qualia. Container and contents. Bubble and red triangle. Many people consider these together, but I propose that they are separate problems with different answers.
The entity that Chalmers called “subjective experience” or “inner feel” – the entity that the Hard Problem challenges us to explain – is now usually known as “phenomenal consciousness” (Block, 1995). The phrase refers to the subjective, experiential aspect of consciousness, distinguishing it from the behavioural or informational state that is objectively present whenever people are conscious and capable of interacting with the world, a state that Block called “access consciousness”.
Block’s concept of phenomenal consciousness does not directly map to consciousness-the-container, because the term is intrinsically bound up in the puzzle of qualia; these are the phenomena that give phenomenal consciousness its name. But the term doesn’t map directly to qualia, either: it seems to refer to consciousness as a medium where qualia can exist, or to the set of qualia being experienced at any one time, or to the capacity for experiencing qualia instead of bland computational blankness. In some ways, “phenomenal consciousness” could be considered to be an umbrella term that covers both bubble and contents, but it is inherently vague, failing to pick out either. Later, I will be arguing that the term is entirely unhelpful, that it is biased towards a false conception of the mind and riddled with confusion. The fact that it directly encourages a conflation between bubble and contents is unfortunate, but turns out to be the least of its problems.
Chalmers’ original term, “subjective experience”, is similar; it has a particular emphasis on the conscious appreciation of sensory properties, but arguably it applies to consciousness itself. In his book-length exploration of the Hard Problem, Chalmers explicitly lumps subjective experience and qualia together, dismissing any differences as insignificant:
A number of alternative terms and phrases pick out approximately the same class of phenomena as “consciousness” in its central sense. These include “experience,” “qualia,” “phenomenology,” “phenomenal,” “subjective experience,” and “what it is like.” Apart from grammatical differences, the differences among these terms are mostly subtle matters of connotation. “To be conscious” in this sense is roughly synonymous with “to have qualia,” “to have subjective experience,” and so on. Any differences in the class of phenomena picked out are insignificant. (David Chalmers, 1996, The Conscious Mind.)
Many other philosophers similarly combine these two issues, seeing no essential difference between the challenges of explaining consciousness and explaining qualia.
For reasons that will be explored throughout this book, I think this is a serious mistake, and we will need different explanations for each component of this core puzzle. Conflating two different things that require different explanations is one of many ways that Chalmers’ Hard Problem has been formulated in a way that makes it more difficult than it needs to be.
The challenge of understanding consciousness (both phenomenal consciousness and qualia) is regarded, in many quarters, as one of the most difficult problems in science, perhaps one of the last great mysteries, perhaps something we can never understand. In contrast to this popular view, I propose that the status of consciousness as an intractable problem largely rests on a set of avoidable conceptual errors, and Chalmers’ division of the issues into Easy and Hard Problems represents an unfortunate crystallisation of the most serious errors.
It is important to distinguish between an innocent, theory-neutral curiosity about the nature of consciousness and qualia, which could be called the Core Problem of Consciousness, and the impenetrable fortress known as the Hard Problem. Chalmers doesn’t just argue that understanding consciousness is difficult. He argues that functional accounts of consciousness are impossible, such that science needs a radical revision and must include consciousness as a fundamental feature of reality. The Core Problem and the Hard Problem express the same puzzlement, and they are often conflated, but the Hard Problem includes the complex theoretical edifice Chalmers has built to defend his own conception of consciousness as deeply mysterious, non-functional and essentially non-physical. Chalmers takes the intuition that functional accounts of the brain might fall short in accounting for experience, promotes that intuition into a framework where functional accounts must fall short, and argues that this failure must be interpreted as a clue to the nature of reality. His Hard Problem does not merely ask a question; it channels further inquiries towards his preferred answer, which takes consciousness to be an inexplicable fundamental entity.
It is this edifice that needs to be debunked.
[To be continued…]