Synthetic intelligence and its type of mind

November 15, 2018
jason_pittman

By Jason M. Pittman, D.Sc.

Recently, I introduced the idea of synthetic intelligence containment using time. While we can't assume that synthetic intelligence would possess the attribute of consciousness, it feels prudent to have a means of rendering synthetic intelligence safe. This raises the question, though: safe from what?

While I understand that any non-human, non-natural intelligence may present existential risk, I am not certain about how a synthetic intelligence might view me (or you or the world) in such a way that carrying out an existentially risky behavior would be optimal. Perhaps understanding how synthetic intelligence may think can help us develop more certainty.

Perhaps the most prominent idea is that minds are computational. At its base, computation is algorithmic: input is transformed into output by way of a function. Such functions take myriad forms. A simple example is addition -- feed two and two into the addition function and out comes four. As far as we know, consciousness (and thus thinking) is not as simple as elementary arithmetic. The essential shape remains, however, no matter how complex the algorithm grows.
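The input-to-output shape described above can be made concrete with a minimal sketch. The function names here are illustrative, not from the article; the point is only that both a trivial function and a composition of functions share the same algorithmic form.

```python
def add(a: int, b: int) -> int:
    """A trivial computational function: two inputs transformed into one output."""
    return a + b

def compose(f, g):
    """Chain two functions: the output of g becomes the input of f.

    More complex 'minds', on the computational view, would be towers of
    such compositions -- but the input -> function -> output shape persists.
    """
    return lambda x: f(g(x))

# A slightly more complex algorithm built from simple pieces.
double_then_add_one = compose(lambda x: x + 1, lambda x: x * 2)

print(add(2, 2))               # 4
print(double_then_add_one(3))  # 7
```

However elaborate the composition becomes, nothing new enters the picture: every layer is still a function mapping input to output.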

Unsurprisingly, the other idea is that minds are not computational. The debate between these two positions is long and unsettled. I want to set it aside for now and address a very important phrase missing from both ideas: human minds.

Some may object that we know only one type of mind for certain, and that is the human mind. Fair enough. I would suggest, however, that it is dangerous to assume there are no other types of minds. It is equally dangerous to assume that, if other types of minds exist, they must be like human minds. While we may not be able to know, we may come to know how to know. The difference is critical.

Here, we can look at our favorite examples: plants and colonizing insects. The Mimosa pudica plant reacts to touch: touch is an input to a function whose output is shriveling away from the input source. Is that not computational? Likewise, a bee colony guards against and reacts to intruders. If we take these behaviors as outputs of a computational processing of input, we have a foundation for claiming that plants and insects interact with the world computationally. Yet there is another assumption to untangle here: these examples rely on observed behavior, and on an observer capable of observing that behavior.
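The touch-response example above can be sketched as a pure function from stimulus to behavior. This is a hedged illustration under simplifying assumptions -- the names (`Stimulus`, `mimosa_response`) and the threshold value are hypothetical, not drawn from botany or from the article.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    kind: str        # e.g. "touch", "light" (hypothetical categories)
    intensity: float # 0.0 to 1.0 (assumed scale)

def mimosa_response(stimulus: Stimulus) -> str:
    """Input -> function -> output: touch above an (assumed) threshold
    yields the observable behavior of folding leaves."""
    if stimulus.kind == "touch" and stimulus.intensity > 0.1:
        return "fold leaves"        # observable output (behavior)
    return "no visible change"      # nothing for an observer to record

print(mimosa_response(Stimulus("touch", 0.5)))  # fold leaves
print(mimosa_response(Stimulus("light", 0.9)))  # no visible change
```

Note what the sketch quietly assumes: the only way we detect the computation is through the string it returns. A function that computed internally but returned nothing observable would be invisible to this kind of analysis, which is exactly the difficulty the next paragraph raises.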

A plant recoiling from touch is knowable, familiar to the human mind. But what about a consciousness that, while computational, produces no output (i.e., behavior) that we can observe? I think the lack of clarity in this area is why containment is necessary -- not because an existential threat is necessarily likely, but because containment may provide the means to answer this question.

"I am fascinated by all things human and tech," Dr. Pittman writes. "I see the stars as our inevitable destination and work to do my part in helping our species get there." Learn more about Dr. Pittman at http://www.jasonmpittman.com/