Discussion
In this study we examined sociodemographic differences in preference for healthcare AI using a large weighted Australian sample that was calibrated to the LIA probability sample using a range of behavioural and lifestyle questions, as well as major sociodemographic variables. Overall, 56.7% (95% CI 53.8%–59.0%) of participants were supportive of the development of AI, slightly lower than the 62.4% reported in another recent Australian study that also used an online panel.15 In a separate analysis of the same AVA-AI survey, which combined the LIA probability sample results with the online panel results,14 60.3% (95% CI 58.4%–62.0%) of Australians were found to be supportive of the development of AI. In the unweighted non-probability sample, 54.8% (95% CI 52.5%–57.0%) of participants supported the development of AI, suggesting that the extensive set of weighting variables led to some improvement, but that self-selection in online panels may not have been fully corrected even by this sophisticated weighting methodology.
Similar to Zhang and Dafoe’s10 study in the USA, we found that support for the development of AI was higher among those with computer science experience, higher levels of education and higher household incomes. It has been suggested that support for AI is lower among groups with less education and more social disadvantage, whose livelihoods may be more threatened by automation.10 12 The potential for AI to threaten people’s livelihoods by taking jobs appears to be a salient concern in Australia, where Selwyn et al15 found that the prospect of automation and job loss was the most commonly mentioned fear among their Australian sample. Results from our survey appear to support these findings, in that metrics of social advantage (ie, household income and education) were strongly associated with support for the development of AI.
The sociodemographic characteristics associated with support for HCAI were different from those associated with support for AI in general. The items assessing support for HCAI required participants to consider whether they supported the development of HCAI, on balance, when it involved a trade-off (lack of explainability, data sharing or physician deskilling). For each of the HCAI questions, household income and education were no longer predictors of support. For example, 66.3% of the weighted sample with incomes >$2000 per week supported the development of AI in general, yet only 30.5% supported the development of unexplainable HCAI. In contrast, 45.9% of those with incomes <$500 per week supported AI in general and 29.7% supported the development of unexplainable HCAI. This suggests that measures of socioeconomic advantage are linked to general support for the development of AI, but that when specific and potentially harmful applications of HCAI are assessed, support is low regardless of socioeconomic characteristics.
Qualitative research on HCAI with members of the public has found that attitudes towards HCAI are shaped by complex evaluations of the alignment of the technologies with the values of medicine.21 If this is the case, then support for HCAI may be driven less by economic values and more by values relating to healthcare.
The characteristics that we found to be consistent predictors of support for HCAI and their specified trade-offs were having computer science experience, being male and being aged 18–34. Similarly, Zhang and Dafoe10 found that younger people and those with computer science degrees expressed less concern about AI governance challenges than those who were older or did not have computer science qualifications.
Being male, having computer science experience and being in a younger age category were three characteristics that Selwyn et al15 found to be associated with higher levels of familiarity with AI. It is possible that subgroups more familiar with AI are more tolerant of its risks. However, Selwyn and colleagues’ study did not control for potential confounding relationships between age, gender and computer science experience, so it is unclear from this work whether age and gender were indeed associated with greater familiarity with AI, or whether a greater proportion of their younger male sample also had computer science experience, which may be more strongly associated with higher levels of familiarity with AI. The relationship between familiarity with AI and tolerance of its risks may warrant further investigation.
Our investigation into subgroup differences in the perceived importance of features of HCAI found that accuracy was regarded as particularly important by all subgroups. This differs from Ploug et al22 who found, in a choice experiment in Denmark, that factors like explainability, equity and physicians being responsible for decisions were regarded as more important than accuracy. The Danish experiment, however, offered the qualifier that the algorithm would at least be as accurate as a human doctor, whereas our questionnaire did not. Further research could test whether algorithmic performance is more important than other features in circumstances where there are no assurances that the algorithm is as accurate as a human doctor.
Health-related characteristics such as self-reported health and having a chronic health condition or disability had a strong effect on the perceived importance of traditionally human aspects of healthcare such as explainability, human oversight and accountability. This result echoes Richardson et al’s21 finding that people’s discussions about the value of HCAI were often framed by their previous experiences with the healthcare system. Participants with complex health needs may have been more inclined to reflect on whether automated systems could meet all aspects of those needs.
Subgroups that were more likely to be supportive of HCAI were not necessarily more likely to see the features of care that they were trading off as less important. While those who identified as male, those aged 18–34 and those with computer science or programming experience were more likely to support the development of unexplainable AI in healthcare, they were just as likely as others to perceive explainability (‘knowing why a decision is made’) as an important aspect of AI-integrated care. This hints at a complex relationship between people’s support for the development of HCAI and their willingness to make compromises to their healthcare.
Limitations
Given the quickly shifting landscape around AI, it is possible that public support for AI has changed in the 2 years since the questionnaire was administered. In addition, the AVA-AI survey includes an online panel obtained by non-probability sampling, which is subject to self-selection biases. The weighting methodology helps reduce these effects by accounting for more than basic demographic variables, including age by education, gender, household structure, language spoken at home, self-reported health, early adopter status and television streaming. Any selection effects due to the prediction variables included in the analysis are also accounted for. However, it is possible that support for HCAI is influenced by confounding factors not considered in the weighting methodology or included in the analysis.
One key population that was not represented in the study was people who identified with a gender outside of the male/female binary. Only one participant identified as a gender outside of the binary and was excluded from the analysis because there were insufficient participant numbers to form a third gender category. Given that support for AI is lower among certain marginalised groups, consulting gender diverse individuals about their support for AI is an important consideration for future research.
Finally, the present study is a cross-sectional analysis, which cannot establish causation between any of the predictor and outcome variables. While we found associations between certain sociodemographic characteristics, such as education, and outcomes such as level of support for AI, we cannot ascertain the reasons for these associations. These reasons are likely complex and multifaceted and should be explored in further research.