Dr. Jason M. Pittman, Sc.D.
Previously, we have discussed synthetic intelligence, agency in synthetic intelligence, and how we might measure both. Now, I would like us to consider a specific aspect of intelligence and agency: trust. Trust is an important notion to discuss because we experience it indirectly as a mediator of knowledge. Imagine that we have created a synthetic intelligence that, through our tests, demonstrably (a) is intelligent and (b) has agency. Can we trust this synthetic intelligence? I suggest that the answer is yes. Let's explore why!
What is trust?
Foremost, we need to outline what we mean by trust. I'm not certain that we can directly identify trust; trust seems to be an attribute of information. Okay, then we can state that knowing something (information) relies on trust. Thus, trust encapsulates sub-attributes such as belief, reliance, and confidence. The implication, of course, is that trust is an attribute of information being shared between two things. In the context of our discussion, those things are, at a minimum, a human and a synthetic intelligence. These things, we can say, are the objects of trust. Further, trust is an attribute of the information or knowledge passing between them.
Now, trust requires three components as far as I can tell. First, there must be a prover. That is, one of the things must be capable of demonstrating that its knowledge is reliable or otherwise believable. Second, the prover is nothing without a verifier. The verifier substantiates that the prover possesses some knowledge through belief, reliance, and confidence. Third, there must be a mediator that brokers or shares information between the prover and the verifier.
This is all well and good for abstract discussion, I suppose. However, I'm interested in the practical, applied trust between us and synthetic intelligence. So: let's look at an example to illustrate how this trust triangle functions.
Consider this essay as a knowledge object. As the author, I am the prover. You (the reader) are the verifier. I think those roles are self-evident. What, then, is the mediator? Well, broadly speaking, the technology you are using to read this essay mediates how you verify what I am proving. You don't trust the technology directly, mind you. Instead, you trust the information; the technology serves the critical function of mediating how you come to trust it. However, there is an implicitness to the technology: it sits in the background, while the information, you, and I exist as foreground.
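To make the prover-verifier-mediator triangle a little more concrete, here is a minimal sketch in Python. Everything in it is my own illustrative assumption (the shared key, the function names, and the use of an HMAC tag as the prover's "demonstration"); it is not a claim about how any real synthetic intelligence would work, only a toy model of the three roles.

```python
import hmac
import hashlib

# Hypothetical shared context between prover and verifier. In the essay's
# terms, this stands in for the prior relationship that makes trust possible.
SHARED_KEY = b"prior-relationship"

def prove(information: str) -> tuple[str, str]:
    """Prover: attach a demonstration (an HMAC tag) to the information."""
    tag = hmac.new(SHARED_KEY, information.encode(), hashlib.sha256).hexdigest()
    return (information, tag)

def mediate(message: tuple[str, str]) -> tuple[str, str]:
    """Mediator: brokers the message between prover and verifier.
    A faithful mediator passes it through unchanged; trust in the
    information depends on this background role behaving well."""
    return message

def verify(message: tuple[str, str]) -> bool:
    """Verifier: substantiate that the information is what the prover sent."""
    information, tag = message
    expected = hmac.new(SHARED_KEY, information.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

claim = "this essay's argument"
print(verify(mediate(prove(claim))))               # faithful mediation: True
tampered = (claim + "!", prove(claim)[1])
print(verify(mediate(tampered)))                   # altered in transit: False
```

Note that the verifier never trusts the mediator directly; it checks the information itself, which mirrors the point above that the technology is background while the information is foreground.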
By now, we ought to agree on what trust is and how trust functions. Even with the simplified view I've provided, we should have enough understanding to examine our trust relationship with synthetic intelligence and information more closely.
Why do I think we can trust synthetic intelligence?
Trust, particularly trust mediated by technology, is essential to what it means to be human. Thus, I think the form of trust we've discussed will naturally extend to synthetic intelligence. The roles of prover and verifier will likely be innate. Further, we will undoubtedly use technology to mediate the space between synthetic intelligence and us. As long as the synthetic intelligence exhibits agency, we can believe (trust) the information coming from it with reliable confidence. Thus, the assurance of trust will rest in the construction and monitoring of the mediator.