Introduction

The issue of machine consciousness (MC) could be considered the culmination of all the philosophical problems that have troubled the field of Artificial Intelligence (AI) since its inception. It forces us to confront the hard problem and its (for some, putative) implications, and to stretch the ideas of functionalism to their theoretical and practical limits.

Thanks to the renewed interest in consciousness studies over the past twenty years, we may finally have the concepts and frameworks needed to tackle long-standing, elusive questions. The sub-field of machine consciousness is playing a valuable role in this effort, continuously challenging intuitions and forcing us to put theories to empirical test.

Interestingly, according to current consensus, the short answer to the question "Could machines ever be conscious?" turns out to be a resounding "Yes." Even John Searle, well known for his critiques of strong AI, famously said "[…] of course some machines can think and be conscious. Your brain and mine, for example." (Searle, 1997, p. 202).

However, the problems for machine consciousness lie in the wildly different accounts theorists give of how and why a machine could ever be conscious.

In this essay, we survey some of the most popular and theoretically relevant approaches to machine consciousness, with a critical eye toward the problems that burden the field and the ethical issues inevitably associated with it.

Against Machine Consciousness

Before delving into our discussion on the possibilities of MC – and the wide range of debates internal to the field regarding how this possibility could be achieved – it is worth considering some arguments against MC in toto. Some of these positions have already been dismissed and criticised in the past, as early as Turing's seminal analysis of AI (Turing, 1950).

First, we have old-fashioned dualism, the notion that consciousness is somehow a peculiar property of a nonphysical mind. Especially when religiously motivated, this position is a priori excluded from scientific discourse of any kind, yet it unsurprisingly arises frequently in discussions of consciousness. Viewed by some as a desire to protect the mind from science (Dennett et al., 1994), claims of this kind can be rejected by asking why, of the many complex physical objects in the universe, the brain should be the only one capable of interfacing with another realm of being. All the usual problems of dualism then apply.

Second, we have arguments about the importance of biological, organic brains in supporting consciousness. While it is possible that the computational efficiency achieved by biochemical processes is unreproducible in other physical systems, there is no reason to defend this claim in principle. If it is merely a matter of efficiency, it can conceivably be overcome by technological progress or the adoption of neuromorphic engineering. If it is instead a matter of the supposed primacy of some organisations of atoms over others, then it is simply a dogmatic claim (Blackmore & Troscianko, 2018, p. 322; Dennett et al., 1994).

Third, there is the more popular notion that some processes are simply too complex to be implemented in machines. Even if "scientifically boring" (Dennett et al., 1994), this may well be a real possibility. Nonetheless, there is no in-principle reason to back it, and most of the tasks once thought impossible for machines to carry out have been solved in the past twenty years.

Overall, even beyond those surveyed here, no argument convincingly proves the impossibility of building conscious machines. This may be the reason underlying the excitement and diversity of work in the field.

On strong and weak MC

A separate theoretical challenge for MC, superficially far more serious than claims of its total impossibility, stems from Searle's distinction between strong and weak AI. Here we will not tackle Searle's original arguments, least of all his infamous Chinese room (Searle, 1980). Not only is a discussion of the Chinese room far beyond the scope of this essay, but it is also not a very pertinent argument against MC. Although it is often used in this context, the argument was originally designed to deal with intentionality (here in the philosophical sense of aboutness), not consciousness. Before redeploying the argument in this context, the relationship between the two must be made explicit, and that is a delicate matter in its own right.

Leaving aside Chinese rooms and their controversies, we adopt the strong/weak distinction only to illustrate two kinds of approaches in MC (Seth, 2009). Weak MC – like weak AI – aims only to model some of the putative mechanisms underlying consciousness (derived from theories) in order to reveal explanatory links and advance our understanding of the phenomenon. The models so built are not claimed to be conscious, much as simulated rainstorms are not claimed to be wet. The explicit aim of strong MC, by contrast, is to create phenomenally conscious systems. This pursuit is far more problematic, but it is also the one we are more interested in here.

However, it must be noted that weak, not strong, MC is perhaps better placed to advance our scientific understanding of consciousness and its possible reproducibility in other media (Seth, 2009). The reason is an inherent circularity in most strong MC proposals: researchers set out to create an instantiation of consciousness that would reveal general principles, but the principles that would validate the interpretation of such models are either absent to begin with or built into the model from the start. This is a complicated and pivotal issue, to which we shall return later.

As a final note on this matter, it has been argued that we may one day arrive at strong AI through weak MC (Gamez, 2008): on what grounds would we draw the distinction at that point? This is where theory-driven approaches to MC come in.

Top-down approaches to MC

Most proponents of strong MC ground their search in solid theory. They either seek the key "ingredient" of consciousness in order to reproduce it in machines, or seek to implement some putative neural architecture somehow associated with consciousness.