It is often said that Artificial Intelligence (AI) is positively transforming almost every sector, from medicine to urban planning, but it also brings questionable or even dangerous implications with it, such as fake news and fake videos, a surveillance state, and the loss of privacy. Technical issues like a lack of predictability and explainability, hype and misinterpretation in the media, and disagreement over definitions and applications all lead to skepticism and distrust. In the context of AI, there is a critical underlying assumption: no trust, no use.
Since AI holds great promise, experts and AI enthusiasts are especially concerned with how to build trust in AI to foster adoption and usage.
But what is trust anyway? Trust is not a mystical concept; there is actually a large corpus of empirical research underneath this term. We all use trust, or at least we can feel it, but this also means that we each have a different understanding of what it means to trust a person or anything else, whether a rocket or a car. Trust can be seen as a psychological mechanism to cope with uncertainty, located somewhere between the known and the unknown (Botsman, 2017). On the individual level, trust is deeply ingrained in our personality: we are essentially born with a tendency to trust or distrust people (or animals, or any other things). Humans (and arguably animals as well) can tell in a snap whether they trust someone or not. We look at the facial expression, the body posture, or the context (background, surroundings, etc.).
When it comes to Artificial Intelligence, there are a lot of unknowns, accompanied by hype, greed, unevenly distributed power, and market pressure. Many highly unethical or simply bad AI cases have also become public or even gone viral, from Black people being labeled as monkeys to women being systematically discriminated against by HR recruiting tools. It is only natural that we become skeptical, isn't it?
In fact, we are in the midst of exploring what trust means in the age of AI, and of redefining our trust not only in technology but also in humans and society. At our company, scip AG (a cybersecurity and technology company), we felt the need to explore this topic further, for ourselves but also for our clients. We want to be prepared for what is coming and to provide these insights, along with our standards, to our clients. That is why we initiated the Titanium Trust Report 2019, in which we explore the role of trust in AI, with a specific focus on the trust-use relationship.
In an unpublished paper (work in progress), we asked 111 participants, mainly from Europe, to explain what they are most skeptical about when it comes to artificial intelligence. Here are the results:
Participants in the survey are very aware of the risks that accompany the use and deployment of artificial intelligence. It therefore seems legitimate to ask: is it worth the risk?
Only 10% state that the risk is too high and that efforts to further develop AI should be stopped; 69% are convinced that the advantages outweigh the risks.
Consequently, ways need to be established to develop artificial intelligence we can trust. But what are the most important variables for building, maintaining, or re-establishing trust? Three pillars have prevailed over the past 15 to 20 years: performance, process, and purpose. Performance relates to the what-questions, such as: Does it work well? The process dimension considers the how-questions, such as: Is it obvious how it works? These two dimensions form the harder, technical part and are indispensable trust factors; they address the cognitive influences on trust, such as robustness or explainability. However, according to Lee and See (2004), there is a tendency to overestimate the cognitive influences on trust and to underestimate the affective, emotional ones. What do we feel about trust in artificial intelligence? This is part of the purpose dimension, which relates to the why-questions, such as: Will it be good for our society? The why-question is the hardest to work on, as there are no rules or clear procedures for changing or establishing a company's image or culture. On the other hand, there are topics such as ethical guidelines that lie somewhere between purpose, process, and performance, or, taken even further, affect all three factors. We cannot tell yet, as further research is needed in this field. Consequently, the boundaries are not as clear as one might expect.
However, despite all the complexities and details one needs to consider, there is also a simple first step. At the very least, these three trust questions should be asked from the start and, equally important, continuously, when developing or integrating artificial intelligence into your business model:
Will it help?
Will it be easy?
Will it do good?
If the full potential of artificial intelligence is to be harnessed, it needs something more intangible than technology and privacy laws: it needs human trust, a psychological phenomenon that is not only real but even measurable. Trust makes interaction possible, whether between humans or with machines. Laws and technological security are necessary conditions for building trust in AI, but they are not enough. To create trust in AI (its providers, programmers, programs, and processes), we must engage at the human level: What does AI do to us as individuals? How does it affect our communication, our cooperation? There are no one-size-fits-all answers. These are the questions we now have to ask ourselves: How far can and should we trust artificial intelligence, and where do we have to set clear boundaries?
Marisa Tschopp is a speaker at the ‘Women in AI’ conference being organised by the Embassy of Switzerland in the cities of New Delhi and Kolkata on 7th and 8th November respectively. She will be speaking (video keynote) about the initiatives taken by the organisation ‘Women in AI’ to promote women’s representation in the AI workforce in Switzerland.
References
Botsman, R. (2017). Who Can You Trust? How Technology Brought Us Together – and Why It Could Drive Us Apart. London: Portfolio Penguin.
Lee, J. D., & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors, 46(1), 50–80.
Sanders, T. L., Volante, W., Stowers, K., Kessler, T., Gabracht, K., & Harpold, B. (2015). The Influence of Robot Form on Trust. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 59(1), 1510–1514.
Siau, K., & Wang, W. (2018). Building Trust in Artificial Intelligence, Machine Learning, and Robotics. Cutter Business Technology Journal, 31, 47–53.
Tschopp, M. (2019). Artificial Intelligence – Is It Worth the Risk? Retrieved from https://www.scip.ch/en/?labs.20190718
Zumstein, S., Gagliardi, R., Ruef, M., Friedli, S., Schneider, M., Altermatt, D., Hauser, A., Tschopp, M., & Hrnjadovic, A. (2019). Labs 10 (1st ed.). Zürich: scip press.