By Jason M. Pittman, Sc.D.
Previously, I argued that the three types of consciousness were indistinguishable. I argued that this is because we cannot reliably detect any consciousness other than our own. I reasoned that an undetectable consciousness is an indistinguishable consciousness. Based on that, I made the claim that an undetectable, indistinguishable consciousness would be bad.
I left important details open, however. Bad for whom? Bad, in what way? Let’s find out.
First, I ought to clarify what I mean by bad. Bad is not a synthetic consciousness immediately seeking our destruction upon waking. You might suspect that is the worst case. However, while such an event would be existentially bad, I think there are other possibilities that are far worse. For example, bad is not knowing whether a synthetic intelligence is conscious or whether it is being deceptive in a manner intended to make us believe it is conscious. As well, bad is thinking that a synthetic intelligence is conscious when it is not.
Finally, the worst kind of bad is a conscious synthetic intelligence that is indifferent to other types of consciousness.
With the definition of bad behind us, I can explain why not being able to distinguish natural from artificial from synthetic consciousness is in fact bad.
Deception is a greased slide towards pathology in the best of cases or malevolence in the worst. I think these are remote, edge possibilities, though. My rationale is that such badness is the exception in natural consciousness. Yet the net effect is an important factor in risk calculations, and the effect of a pathological or malevolent consciousness here ought to be self-evident as a prelude to the existential risk pondered by ethicists.
More likely, I see the strong possibility of natural consciousness inappropriately projecting a state of consciousness onto a synthetic intelligence when the latter is not conscious. The result will not be as catastrophic to us as a deceptive, pathological consciousness, but it will certainly lead us down unproductive paths. Maybe we just waste time; maybe we place trust in a false entity that is not trustworthy. What we don't get is a productive partnership that moves our species forward.
Lastly, there is the potential for a synthetic intelligence to become truly conscious but adopt a position of indifference towards other consciousnesses. I would argue that this is the absolute worst-case scenario. An entity as capable as, or more capable than, us standing by while we descend towards oblivion is a nightmare made real. Such a synthetic consciousness, beyond being indistinguishable, would essentially be alien.
Accordingly, I have begun to wonder what, if any, sociological and psychological constructs a synthetic entity might manifest. Will it manifest constructs like those of natural consciousness? If so, perhaps the absence of any distinguishable difference means we can be conceptual equals. However, if synthetic consciousness manifests a construct set alien to us, we may need a mechanism to shield other consciousnesses from the associated risks.