Philosophy
I love reading philosophy. Recently I have been especially interested in machine consciousness, and in the disagreement between property dualism and functionalism. Property dualism treats conscious experience as involving real mental properties that are not reducible to physical or functional organization. Functionalism, by contrast, identifies a mental state by the role it plays in a larger system: what causes it, how it interacts with other states, and what behavior it helps produce.
I personally lean toward functionalism, especially in the spirit of Gilbert Ryle’s The Concept of Mind. Ryle’s idea of a category mistake is useful for thinking about consciousness because it warns against looking for the wrong kind of thing. In his Oxford example, a visitor is shown the colleges, libraries, offices, and lecture halls, but then asks where the university itself is, as if the university were another building hidden somewhere else. In the team spirit example, someone watches the players, tactics, passes, and coordination, but then asks where team spirit is, as if it were an additional object beside all of those activities.
I find these examples helpful because they suggest a way of resisting the temptation to treat consciousness as a mysterious extra ingredient. Consciousness may not be a separate substance or detachable property sitting behind cognition. It may instead be a way that certain cognitive capacities are organized and exercised. Perception, memory, attention, self-monitoring, planning, affect, and reportability may together form the kind of system in which conscious states can exist.
That is why I think machine consciousness is achievable through computation and AI systems, at least in principle. If the relevant functional organization can be implemented computationally, then a machine could instantiate the patterns that matter for consciousness. This does not mean that every large model is conscious, or that scaling alone settles the question. It means that I do not see biology as a necessary boundary. The important question is whether an artificial system has the right structure, dynamics, and integrated capacities.
Although I side with functionalism, I still enjoy reading David Chalmers’ philosophy. His work on the hard problem, philosophical zombies, and the explanatory gap is valuable because it keeps the functionalist view honest. Even if I think computation and AI systems can in principle support consciousness, Chalmers’ arguments are a useful reminder that explaining behavior and information processing may not automatically explain subjective experience.