In his recent article ‘Limits of trust in medical AI,’ Hatherley argues that, if we believe that the motivations usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence (AI), if one refrains from simply assuming that trust describes human–human interactions. To do so, we consider an account of trust that distinguishes trust from reliance in a way that is compatible with trusting non-human agents. In this account, to trust a medical AI is to rely on it with little monitoring and control of the elements that make it trustworthy. This attitude does not imply specific properties in the AI system that in fact only humans can have.