Michael "Miko" Cañares

data. design. development.


Public Trust and Artificial Intelligence: 5 Key Points for Reflection

(This thinkpiece was published as part of the book “Artificial intelligence in the city: Building civic engagement and public trust,” edited by Ana Brandusescu and Jess Reia. The book can be downloaded for free here – https://libraopen.lib.virginia.edu/downloads/6w924c005)

I always feel a certain level of discomfort every time people talk about public trust in relation to artificial intelligence (AI). In this essay, I will enumerate five reasons why.

First, to trust or not to trust is not a choice that everyone in the “public” can make (Lee and See, 2004). There are contexts where governments, for example, compel every citizen to unknowingly share all their data with the state, including what brand of toothpaste they use or where they went for a sundowner on a Friday night. Governments then use this data, along with other data, to make decisions about a person’s future (Andersen, 2020) – whether they are trustworthy enough for a housing loan, or a likely suspect in a crime involving moral turpitude. In other contexts, a Facebook feed is a person’s only experience of the internet (Massola, 2018), and whether they like it or not, they contribute and consume content, sometimes without any knowledge that their data is being used in many ways, including to poison their thinking about the world (Macaraeg, 2021) or their opinion of others outside their cultural circle. The view that putting trust in a piece of technology is a choice that everyone can freely make is deeply problematic.

Second, the literature on trust formation and technology assumes an informed individual (Ashoori and Weisz, 2019) who knows how a piece of technology (Lee and See, 2004), or a platform and its owners, makes use of data. Sometimes people trust not because they know that a technology or platform poses no risk. They trust because others do so, and the magnitude of the perceived “trusters” becomes a basis for decisions (Nowak et al., 2019). If others have trusted these technologies, then nothing can possibly go wrong. If something bad happens, maybe that is just an anomaly. Trusting requires one’s ability to know the other’s competence, integrity, consistency, credibility, and even benevolence (Marsh et al., 2020) – to know that such technology is actually concerned about one’s well-being. But for most people, at least in the context where I live, the daunting unknowability of technology, and for that matter AI (Cassauwers, 2020), coupled with the lack of formal digital education (UNICEF, 2021), significantly shapes how they view and behave toward technology. For many people, trusting technology is a risk they take without sufficient information – hoping that it will not do them actual harm.

Third, I admire those who advocate for AI regulation. Kerasidou et al. (2021, p. 1) argue, for example, that “Instead of agonising about how to facilitate trust, a type of relationship which can leave those trusting vulnerable and exposed, we argue that efforts should be focused on the difficult and dynamic process of ensuring reliance underwritten by strong legal and regulatory frameworks.” But isn’t it also true that states weaponize regulation to the disadvantage of the weak and the powerless (Najibi, 2020)? Or that they use regulation to silence dissenters (Guest, 2021) or to temper the power of technology to do good (Candelon et al., 2021)? If regulation is in the hands of the rich, the learned, and the powerful, how do we ensure that it protects people, especially the marginalized?

Fourth, and this I strongly believe: if we use regulation as a way to make technology trustworthy, then we need to make the policy-making process inclusive (Global Partners Digital, 2015), in such a way that we are not only building a regulation that helps us trust something, but also building a process in which everyone involved trusts the policy-making itself and the institution that wields it. For people to participate meaningfully in that process, it is paramount to build their capacities (Lister, 2007) so that they can ask the right questions, articulate their issues and concerns, and propose solutions. On this note, I remember the European Commission’s (2020) white paper on AI, which proposes an “ecosystem of trust.” But how would this policy-making process apply in contexts where governments themselves are the violators of the principles of this proposed ecosystem of trust (Roberts et al., 2021)? And how can we ensure that policy-making serves the interests of citizens, especially those who are habitually socially excluded?

Finally, whose responsibility is it to ensure that technology is trustworthy? Is it the people who share their data and use technology because, sometimes, they have no choice but to do so in order to get the products or services they need? If governments and businesses use AI, shouldn’t it be their responsibility to ensure transparency, explainability, accountability, and remedy (Access Now, 2018)? If we are to move forward in building trust, we should not go easy on the responsibility of governments, companies, and organizations to promote individual and community benefits in the use of AI, as well as their responsibility to do no harm.