Rethinking Integrated Information Theory and the Metaphysics of Φ
Integrated Information Theory (IIT) is a theory of consciousness that attributes the immediate, unified phenomenon of subjective experience to a certain type of information, one that can be measured quantitatively and expressed as a value, Φ. Φ represents the total quantity of consciousness within a system: if a system’s Φ were 0, that system would not experience consciousness, and if one system had 100 Φ while a second had 500 Φ, the second system would be more conscious than the first. Φ is thus a way to standardize how we conceptualize degrees of consciousness, by associating consciousness with the level of information that is integrated in a system. The details of these assertions now span nearly twenty years and multiple iterations and mathematical modifications, but the root of the theory is that we can empirically study a physical substrate of consciousness by measuring the amount of integrated information in a system. IIT has shown success in predicting conscious states by virtue of the “Zap Zip” method which, in short, perturbs the brain, reads out its electrical activity, and compresses that activity to output a number that correlates with how integrated the system’s information is. This measure, properly called the perturbational complexity index (PCI), is mostly consistent with empirical data from patients who have varying levels of reportable consciousness. There are difficulties and debates regarding the difference between a clinical definition of consciousness and a neuroscientific one, and papers which reference the PCI do not tend to use it as a proof of IIT; still, the relevance of the two to each other, as well as their scientific adjacency (researchers on PCI tend to be IIT researchers as well), implies their affiliation.
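The compression step behind the “Zap Zip” measure can be illustrated with a toy sketch. The real PCI pipeline binarizes TMS-evoked EEG activity and applies a normalized Lempel–Ziv compression; the function below is only a minimal, simplified Lempel–Ziv-style phrase count on a binary string, meant to show the intuition that repetitive (less differentiated) activity compresses well while varied activity does not. The example strings are hypothetical, not real recordings.

```python
# Toy sketch of the compression idea behind a PCI-style measure.
# Illustration only: the actual PCI binarizes TMS-evoked EEG source
# activity and normalizes the complexity value differently.

def lempel_ziv_complexity(s: str) -> int:
    """Count the phrases in a simple greedy Lempel-Ziv parsing of s."""
    phrases = set()
    count = 0
    i = 0
    while i < len(s):
        j = i + 1
        # extend the candidate phrase until it is new (or the string ends)
        while j < len(s) and s[i:j] in phrases:
            j += 1
        phrases.add(s[i:j])
        count += 1
        i = j
    return count

# A repetitive ("less differentiated") signal parses into few phrases;
# a varied signal parses into more, yielding a higher complexity score.
print(lempel_ziv_complexity("0101010101010101"))
print(lempel_ziv_complexity("0110100110010110"))
```

The intuition carried over to PCI is that a highly integrated yet differentiated system produces perturbation responses that resist compression, and that resistance is what the single output number tracks.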
That is all to say that IIT has proved a potent and successful theory of consciousness, and it follows suit with newer developments in the science of consciousness that associate systemic features of the brain with consciousness, as opposed to some regional identifier (in other words, most new theories of consciousness tend to be about what the brain does, not where the brain does it). However, IIT is not widely accepted, and there are numerous obstacles it has yet to overcome. One major claim which IIT makes is that Φ measurements are a proper quantitative correlate of consciousness. I’d like to make the case that though IIT’s Φ is indeed heavily correlated with what we describe as consciousness, Φ actually measures an alternative state of our neural system which is not identical to consciousness, and that our assertion that consciousness is necessarily related to this correlate is false. I will present the Illusionist Integrated Information Theory of consciousness as the groundwork for this claim. Illusionist Integrated Information Theory was first introduced by my professor Kelvin McQueen, who has served during my undergraduate studies as my primary mentor in the field of consciousness science. It is important to note that though I adopt the same name as his theory, it is possible, and in fact highly probable, that my conclusions vary slightly from his, though I think the major arguments coalesce. That is to say, most of my presentation of Illusionist Integrated Information Theory will be a retelling of how McQueen argues it and lectured about it to me; however, I will attempt to articulate a way of conceptualizing his theory that may not align with how iIIT is represented in McQueen’s papers.
To properly articulate Illusionist Integrated Information Theory of Consciousness (iIIT), I will first present Illusionism as a theory of consciousness, and then summarize the case made by Kelvin McQueen that an illusionist framework, by altering IIT’s assertions, makes its claims more coherent and less metaphysically demanding. I will then provide a positive theoretical claim which describes Φ as an indication of a system’s capacity to experience consciousness, and not as consciousness itself.
Illusionism posits that phenomenal consciousness, as we conceive of it, does not exist. Most prominent theories of consciousness are “realist” and thus imagine consciousness as something which exists, and their task is to explain “how it comes to exist” (Frankish). Illusionism instead promotes the idea that consciousness is, yes, something which we experience, but that based on rational scientific exploration and analysis, we should not interpret it as real at face value. Keith Frankish likens this argument to how science would interpret something like psychokinesis. There are a few ways a scientist could approach a phenomenon that seems to have no explanation under our current scientific understanding. One avenue is to take the experience of psychokinesis at face value and try to use our current scientific knowledge to explain it, i.e., to conclude that there must be some paradigmatic shift in our understanding of how physics and mental manipulation of the world coexist and interact. Another avenue is to skeptically analyze the phenomenon and attempt to make sense of how the person with “psychokinetic powers” is really performing the illusion.
Frankish argues that the former approach can be compared to realist theories of consciousness. The realist approach is to declare a phenomenon real, and then seek to explain the physical, provable mechanisms behind it. The illusionist approach, by contrast, concedes that the experience of the psychokinetic performance is true and did occur, but is skeptical about our evaluation of what is real and what is illusory; we should try to explain the phenomenon while remaining skeptical about our experience of it and the metaphysical assumptions which come with it. The illusionist strategy, then, is to declare consciousness an illusory phenomenon, one which tricks us into believing that it is real due to our beliefs about self, continuity, and so on. Illusionism also posits that explaining how the illusion arises is far more achievable than previous theories’ task of explaining the Hard Problem.
My position is that some of the criticisms of IIT can be avoided if IIT embraces an illusionist framework. One major criticism of IIT concerns logic gates and their associated Φ. It is consistent with IIT that a computational structure called a “logic gate” is theoretically capable of producing high amounts of Φ, despite our intuition that logic gates do not experience consciousness.
A logic gate is a device that implements a Boolean function: it takes binary input and produces binary output determined by its computational structure. It is theoretically and computationally possible to wire logic gates with feedback, that is, with the recurrent connectivity that IIT requires for non-zero Φ, and such a circuit would thus have some non-zero value of Φ according to IIT. If this is the case, then it seems conceivable to create a large, integrated grid of logic gates, and then extend it into a third dimension to produce a massive logic-gate system which has integrated information. By all metrics used by IIT, this system would have a significant amount of Φ, perhaps even more than a human.
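A minimal sketch may make the construction concrete. The code below does not compute Φ (that requires IIT’s full formalism, implemented for small discrete systems in, e.g., the PyPhi library); it only shows the kind of recurrent gate network the objection envisions, a ring of XOR gates in which each gate’s next state depends on the current states of its neighbors, so that information feeds back through the system. The ring topology and XOR choice are illustrative assumptions, not part of the objection itself.

```python
# A hypothetical recurrent logic-gate network: gates arranged in a ring,
# each updating to the XOR of its two neighbors. Feedback like this is
# what gives such circuits non-zero integrated information under IIT.
# NOTE: this sketch does not compute Phi; it only simulates the dynamics.

def step(state):
    """Advance the ring one time step: gate i outputs neighbor XOR."""
    n = len(state)
    return tuple(state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n))

state = (1, 0, 0, 1, 0)
for _ in range(4):
    state = step(state)
    print(state)
```

Nothing stops us from making `state` arbitrarily long, or stacking many such rings into a grid; the construction scales without limit, which is precisely why the objection claims such a system could exceed a human’s Φ.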
All of that is to say that it is not Φ as a quantity alone which generates consciousness. If it were, the logic-gate machine would be more conscious than us, even if its binary output were completely nonsensical in its environment. Illusionism has a very simple solution to this problem: consciousness does not exist, so the Φ reading of the logic-gate system can remain true, and IIT remains unchallenged. The implication, however, is that Φ no longer necessarily correlates with consciousness, but instead correlates with something else. It necessarily follows that current IIT defines consciousness incorrectly. I will articulate what IIT’s definition of consciousness is, then propose a new understanding of consciousness, largely influenced by illusionist theory, and finally attempt to articulate what Φ represents in this new illusionist-IIT framework, if not consciousness. It is important to note that a similar argument can be made that the logic gates which possess more Φ than us are, in fact, more conscious. That argument, however, similarly requires diluting what consciousness means to us as humans. If we ascribe consciousness to complex, intelligent humans as well as to complex logic-gate systems, consciousness means less; it is stripped of its anthropomorphic necessities. Both of these arguments, that of consciousness nihilism and that of consciousness universalism, are in essence the same. The situation can be analogized to the traditional mereological debate over how objects exist ontologically. Describing objects implies that we must have a proper way to include and exclude what we mean to describe, and the difficulty is that there are always exceptions. A table can never be properly defined because at some point the table, as it is worn away, becomes not a table.
But the point at which it loses its “tableness” is not clear, so we can argue that a “table” never existed; rather, we pragmatically, and inconsistently, determine that when certain things are arranged “table-like” we call them a table. This pragmatism openly admits to not being a metaphysical assertion, and instead relies on one of two possible explanations: that all objects are objects, or that no objects are objects. There is no universal qualifier for something to exist as a table, so each table exists as its own unique object; or, alternatively, there is no universal qualifier for something to exist as a table, so no table exists as its own unique object. Applied to the landscape of integrated information, we can argue that the amount of Φ in a system arranged “human-like” tends to amount to consciousness as we define it. But each person’s conscious experience is slightly different, and thus runs into the same problem as the table. As our consciousness drifts away from the mold (becomes less familiar, less anthropocentric), it becomes harder and harder to call it consciousness, but the point at which it stops being consciousness is impossible to consistently determine. Thus, as with the table, we can either conclude that all integrated information systems are conscious, or that no integrated information systems are conscious. Either way, we are saying the same thing: there is something which makes up or is associated with subjective experience, but our recognition of that subjective experience is tied to what we, somewhat arbitrarily, determine to be familiar. The logic gate either has consciousness or it doesn’t, but whatever we determine for the logic gate must be consistent with what we determine for the human brain as well.
IIT has not always had a consistent definition of consciousness. In Tononi’s 2004 paper, he describes it as “...what abandons us every night when we fall into dreamless sleep and returns the next morning when we wake up” (Tononi 2004). This statement resembles our colloquial definition of consciousness. More recent publications by Tononi state that consciousness “fades” during certain phases of sleep, or that there is a “marked reduction of consciousness” from dreaming to dreamless sleep. Tononi’s recent statements are more precise, and seem to suggest that there is still some consciousness in dreamless sleep. This is consistent with IIT’s readings of Φ during dreamless sleep (which are non-zero). There seems to be an inconsistency between IIT’s determinations of consciousness in sleep and what many would likely consider a useful or correct definition of consciousness.
It is largely recognized that consciousness is and should be synonymous with subjective experience. Thomas Nagel’s definition is, I believe, the most useful for distinguishing consciousness from other cognitive functions. Nagel posits that something is conscious if “there is something it is like to be that [thing]”. The important point here is that consciousness must be synonymous with phenomenal experience, and this makes IIT’s assertions about dreamless sleep difficult to satisfy.
I argue that there is no phenomenal or direct experience of dreamless sleep, yet IIT’s axioms make it necessary to conclude that there is consciousness in dreamless sleep, however diminished. The phenomenal experience of consciousness seems to be closely related to certain beliefs regarding our capacity for memory and our perception of a unified self. These beliefs seem necessary for an anthropocentric (pragmatic) definition of consciousness, but are neither necessary nor sufficient for a unifying theory of consciousness.
When we lose the ability to remember, or to refer to our self as unified, it seems that we lose consciousness. If I were to describe a state in which I did not have the capacity to store any memory, and was not engaging any neurological correlates of “self-hood”, I would tend to describe that state as unconscious, but by doing so I am faltering on a unifying theory of consciousness. If I wanted to truly describe the change that involves my loss of memory, loss of self, loss of chronology, and so on, I would need to properly articulate all the systems that I believe to be necessary for a human to be conscious; but doing so would not make sense for a theory which grants consciousness to all integrated information. To be consistent with a theory of consciousness based on integrated information, it becomes necessary to reject the useful, pragmatic definition of consciousness and “bite the bullet” by declaring that we can experience consciousness, as defined by IIT, while dead asleep and having no dreams or memories. This is difficult to defend because the statement seems completely useless for the questions we set out to answer in the neuroscience of consciousness. In other words, a properly unified theory of consciousness like IIT has nothing useful to say about human consciousness specifically; instead, it identifies an aspect of information systems which may, in the case of humans, be necessary for us to experience and reflect on life. Integrated Information Theory, when combined with materialist explanations of how our integrated information systems are uniquely arranged, can account for the more pragmatic question of “what is consciousness (as I experience it)?” IIT provides the necessary atoms upon which the universe is built, but to answer what most people are asking, we must investigate how these atoms make up the Earth.
And in doing so, if we encounter a supernova, we should not reject the notion that the supernova is made of atoms, but instead recognize that despite the substance being the same, it is the unique configuration of a system which gives it its relevance.
Φ, to me, is the ground on which consciousness is experienced. That is, Φ measures the integration of a system’s informational structure: the more integrated a system’s structure, the more likely or capable it is of creating complex systems of memory, conceptions of self, and so on (the things which contribute to our belief in consciousness as related to our unique human experience). It is not enough for a system to have high Φ to be conscious (in the sense we are interested in); a system must also be arranged qualitatively in such a way that the behaviors which produce a belief in consciousness may arise. Φ exists as the informational structure which allows conscious experience to manifest, but is more accurately understood as denoting an attribute with a much broader and more unified scope.
A metaphor which may be helpful is that of a software program, which is built from binary as well as the elements which create the experience of the program. There are things which contribute to this program’s “program-ness” and are necessary to its existence as a program: all programs need to produce an output, have executable functions, the ability to make decisions (if, then), and so on. If a program didn’t have these features, it would no longer have program-ness in any way that is meaningful to us. The underlying informational structure of programs is binary, and the more complex the binary system, the more capable a computer is of creating programs which have program-ness. This is to say that a computer can have highly complex binary systems which do not produce program-ness, simply because complex binary systems do not necessarily entail things like executable functions and if-then decisions. However, the more complex a computer’s binary system is, the more capable it is of creating the unique computations required for decision-making, executable functions, output production, and the rest. It takes both the complex binary system and the things which contribute to program-ness to account for everything which makes up the experience of a program. Other binary systems can exist and have highly complex information structures, but these structures (like a logic gate) have no relevance to what we want to do with our program; despite having the same substance, they lack the relevant qualitative makeup to be considered meaningful by us.
This analogy applies to Φ in that, just like complex binary systems, high-Φ systems have the capability to produce the complex cognitive behaviors which are uniquely mammalian or human, and which contribute to the formation of our illusion of consciousness, and thus our experience of consciousness. This argument suggests that Φ is something more fundamental and foundational in the generation of consciousness, and is not the sole contributor to the emergence of conscious experience (as we usefully define it). In defining Φ this way, we are able to avoid certain obstacles which currently inhibit the success of IIT as a theory, such as the logic-gate objection. Instead, we produce a more tangible theory with a more achievable pair of questions: How does Φ in a system relate and contribute to the production of the higher-order cognitive behaviors which we determine to be meaningful in our unique subjective experience? And how do these higher-order cognitive functions produce the belief in consciousness? By reframing Φ in this way, it loses some potency, as it is no longer the singular indication of consciousness, but it becomes capable of being explored with greater perspective.