Original Research

Measures of socioeconomic advantage are not independent predictors of support for healthcare AI: subgroup analysis of a national Australian survey

Abstract

Objectives: Applications of artificial intelligence (AI) have the potential to improve aspects of healthcare. However, studies have shown that healthcare AI algorithms also have the potential to perpetuate existing inequities in healthcare, performing less effectively for marginalised populations. Studies on public attitudes towards AI outside of the healthcare field have tended to show higher levels of support for AI among socioeconomically advantaged groups that are less likely to suffer algorithmic harms. We aimed to examine the sociodemographic predictors of support for scenarios related to healthcare AI.

Methods: The Australian Values and Attitudes toward AI survey was conducted in March 2020 to assess Australians’ attitudes towards AI in healthcare. An innovative weighting methodology involved weighting a non-probability web-based panel against results from a shorter omnibus survey distributed to a representative sample of Australians. We used multiple logistic regression to examine the relationship between support for AI and a suite of sociodemographic variables in various healthcare scenarios.

Results: While support for AI in general was predicted by measures of socioeconomic advantage such as education, household income and Socio-Economic Indexes for Areas index, the same variables did not predict support for the healthcare AI scenarios presented. Variables associated with support for healthcare AI included being male, having computer science or programming experience and being aged between 18 and 34 years. Other Australian studies suggest that these groups may have a higher level of perceived familiarity with AI.

Conclusion: Our findings suggest that while support for AI in general is predicted by indicators of social advantage, these same indicators do not predict support for healthcare AI.

What is already known on this topic

  • Artificial intelligence (AI) has the potential to perpetuate existing biases in healthcare data sets, which may be more harmful for marginalised populations. Support for the development of AI tends to be higher among more socioeconomically privileged groups.

What this study adds

  • While general support for the development of AI was higher among socioeconomically privileged groups, support for the development of healthcare AI was not. Groups that were more likely to support healthcare AI were males, those with computer science experience and younger people.

How this study might affect research, practice or policy

  • Healthcare AI is becoming more relevant for the public as new applications are developed and implemented. Understanding how public attitudes differ among sociodemographic subgroups is important for future governance of healthcare AI.

Background

There are currently many applications of healthcare artificial intelligence (HCAI) in various stages of development and implementation.1 AI is defined as technology that allows computer programs to perform tasks and solve problems without explicit human guidance;2 HCAI-based systems employ algorithms to complete tasks typically performed by health professionals. Algorithms have been trained to read ECGs,3 detect skin cancer from smartphone images4 and predict people’s risk of disease using large-scale national data sets,5 with ostensibly comparable accuracy to current approaches.

While these technologies have the potential to improve aspects of healthcare, they also have the potential to cause harm to patients.6 Algorithmic harms are exacerbated in already marginalised populations,7 8 as the causes and effects of historical structural disadvantage are embedded in healthcare data sets, and training sets often exclude marginalised groups. Obermeyer et al9 audited an algorithm used in the USA for determining whether patients should be referred to high-risk care, and found that patients who identified as black were less likely to be flagged by the algorithm as needing high-risk care, despite having more comorbidities than non-black-identifying counterparts. Similarly, Seyyed-Kalantari et al,8 using data from the USA, found that women, people aged under 20, those with lower socioeconomic status and black or Hispanic-identifying people were less likely to be diagnosed correctly by a chest radiograph algorithm. Factors preventing marginalised groups from accessing care in the past exist implicitly in many healthcare data sets, and algorithms trained on these data sets perpetuate these inequities.9

Surveys examining public attitudes towards artificial intelligence (AI) have found that certain sociodemographic characteristics are associated with higher levels of support for AI. In a survey conducted in the USA, Zhang and Dafoe10 found that younger people, males, those with computer science experience and those with a high annual household income were more likely to be supportive of the development of AI. A survey in the Netherlands, using a representative panel of the Dutch population, studied trust in HCAI and found that higher levels of trust were associated with being male, having a higher level of education, being employed or a student, and not having stayed in hospital in the past 12 months.11 It has been suggested that those who are less likely to suffer the negative impacts of AI are more supportive of its implementation.10–12

We conducted a survey to examine whether Australians’ attitudes towards HCAI vary with different sociodemographic characteristics.

Method

Our aims for this study were threefold. We aimed to (1) examine the sociodemographic variables associated with support for AI in Australia, (2) examine the sociodemographic variables associated with support for HCAI and (3) determine whether sociodemographic characteristics were associated with different preferences in AI-integrated healthcare.

This paper reports results from an analysis of the Australian Values and Attitudes toward AI (AVA-AI) survey. The survey was conducted with the Social Research Centre’s Life in Australia (LIA) study, which regularly engages a representative panel of Australians in independent surveys.13 A shortened version of the AVA-AI questionnaire was included in the 36th wave of the LIA study, disseminated in March 2020. The full version of the questionnaire was disseminated to a non-probabilistically sampled online panel. We used the shortened version of the questionnaire as a reference survey to produce weights for the non-probability sample that account for characteristics that influence people’s propensity to participate in the online panel. A more detailed description of the data collection and weighting methodology is provided in Isbanner et al’s study.14 For this analysis, we report on results from the weighted non-probability sample using data obtained from the full questionnaire.
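Purely for intuition, the sketch below illustrates one common propensity-based approach to deriving pseudo-weights for a non-probability panel from a probability reference sample: panel membership is modelled on covariates shared between the two samples, and panel cases are weighted by the inverse odds of membership. All data and variable names here are toy assumptions; this is not the AVA-AI procedure itself, which is detailed in Isbanner et al.14

    # Illustrative propensity-weighting sketch, NOT the AVA-AI methodology.
    # Toy data stand in for the probability reference sample and the panel.
    set.seed(1)
    reference_sample <- data.frame(age = rnorm(200, 48, 16),
                                   female = rbinom(200, 1, 0.51))
    web_panel        <- data.frame(age = rnorm(300, 38, 13),
                                   female = rbinom(300, 1, 0.45))

    # Stack both samples and flag non-probability panel membership
    combined <- rbind(transform(reference_sample, in_panel = 0),
                      transform(web_panel,        in_panel = 1))

    # Model the propensity to appear in the panel on shared covariates
    prop_model <- glm(in_panel ~ age + female,
                      data = combined, family = binomial())

    # Weight panel cases by the inverse odds of panel membership
    p <- predict(prop_model, newdata = web_panel, type = "response")
    web_panel$weight <- (1 - p) / p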

Predictor variables

We selected predictor variables analogous to two other surveys on public attitudes towards AI: Zhang and Dafoe’s study in the USA10 and Selwyn and colleagues’ study in Australia.15 Variables used in the analysis included age group, gender, self-identification as having a chronic health condition or disability, living in a capital city, highest level of educational attainment, area-level socioeconomic advantage (henceforth referred to as Socio-Economic Indexes for Areas (SEIFA)), household income, computer science or programming experience and speaking a language other than English at home. This study used the Australian Bureau of Statistics’ SEIFA to measure the relative advantage and disadvantage of areas.16 Participants were classified into quintiles based on the SEIFA of their area (ie, postcode) of residence: those in quintiles 4 and 5 were coded as ‘least socioeconomic disadvantage’, those in quintiles 2 and 3 as ‘moderate disadvantage’ and those in quintile 1 as ‘most socioeconomic disadvantage’. Additionally, we included self-reported health status as a predictor variable because evidence elsewhere indicated that health-related metrics were associated with attitudes towards HCAI.11 A copy of the questionnaire is provided in online supplemental file 1.
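As a minimal sketch of the SEIFA recoding just described, using hypothetical data frame and column names rather than the study’s actual code:

    library(dplyr)

    # Toy data; `seifa_quintile` is a hypothetical column name
    avaai <- data.frame(id = 1:6, seifa_quintile = c(1, 2, 3, 4, 5, 2))

    avaai <- avaai %>%
      mutate(
        seifa_group = case_when(
          seifa_quintile %in% c(4, 5) ~ "least socioeconomic disadvantage",
          seifa_quintile %in% c(2, 3) ~ "moderate disadvantage",
          seifa_quintile == 1         ~ "most socioeconomic disadvantage"
        ),
        # One plausible ordering, with greatest disadvantage as reference level
        seifa_group = factor(seifa_group,
                             levels = c("most socioeconomic disadvantage",
                                        "moderate disadvantage",
                                        "least socioeconomic disadvantage"))
      )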

We removed any responses where the participant had not responded to all predictor and outcome variables (n=17). One participant identified with a gender outside of the male/female binary. This response was removed,17 and the limitations of this are discussed further below. In total, 1983 responses were analysed.

We calculated Spearman’s r coefficients to identify multicollinearity between predictor variables (table 1). Some pairs of variables were moderately correlated. Those with high self-reported health status were less likely to identify as having a disability, and those living in a capital city were more likely to live in postcodes with less socioeconomic disadvantage. We deemed these moderate correlations unlikely to have a detrimental effect on model fitting or interpretation.

Table 1 | Correlation matrix of predictor variables (Spearman’s r coefficients)
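For illustration, pairwise Spearman correlations of this kind can be computed in R as below; the column names and values are hypothetical stand-ins, not the study’s data.

    # Hypothetical sketch: Spearman correlation matrix across coded predictors
    predictors <- data.frame(
      self_rated_health = c(4, 5, 2, 3, 5, 1),   # 1 = poor ... 5 = excellent
      has_disability    = c(0, 0, 1, 1, 0, 1),
      capital_city      = c(1, 1, 0, 1, 0, 0),
      seifa_quintile    = c(5, 4, 2, 3, 1, 2)
    )

    # method = "spearman" gives rank-based correlations, suitable for
    # ordinal and binary codings; round for a readable matrix
    round(cor(predictors, method = "spearman"), 2)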

Outcome variables

Eleven outcome variables were selected for the three aims of the study (table 2). Item 1 replicated a question from Zhang and Dafoe’s study,10 asking participants to indicate their level of support for the development of AI on a 5-point scale from strongly oppose to strongly support. Item 2 asked participants to consider their support for HCAI in a scenario where an unexplainable algorithm was being used to analyse patient health records and suggest treatments. Item 3 asked participants to consider their support for an algorithm that diagnosed diseases more accurately than physicians but required patients to share their health record. Item 4 asked participants to consider their support for HCAI in a scenario where its development leads to physicians becoming less skilled at tasks that were replaced by AI. Each of these questions asked participants to indicate their level of support on a 5-point scale.

Table 2 | Aims and outcome variables

Items 5–11 were preceded by a scenario asking participants to imagine a situation where an algorithm was reading a medical test, diagnosing them with a disease and recommending treatments. Participants were asked to consider the importance of (5) explainability, (6) speed, (7) accuracy, (8) human oversight, (9) accountability, (10) cost to the healthcare system and (11) equity. Participants responded on a 5-point scale from not at all important to extremely important. Each outcome variable was recoded to binary categories, where the two highest categories (ie, strongly support and somewhat support, or very important and extremely important) were coded to 1 and the remaining categories were coded to 0.
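A minimal sketch of this binary recoding, assuming integer codings 1–5 (5 = strongly support or extremely important), which are not stated in the questionnaire itself:

    # Hypothetical recoding sketch: two highest categories -> 1, others -> 0
    support_5pt <- c(5, 4, 2, 3, 1)
    support_bin <- as.integer(support_5pt >= 4)        # strongly/somewhat support

    importance_5pt <- c(3, 5, 4, 1, 2)
    importance_bin <- as.integer(importance_5pt >= 4)  # very/extremely important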

Statistical analysis

We generated frequency tables that incorporated the survey weights using the questionr package.18 We fitted a separate multiple logistic regression model for each outcome variable, using the same suite of sociodemographic variables as predictors in each. All analyses were conducted in R.19 The survey package20 was used to incorporate survey weights in the analysis and in the calculation of SEs. ORs are reported with accompanying p values and 95% CIs. We considered results significant where p<0.05 and commented on all results where p<0.10.
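As a minimal sketch of this modelling step using the survey package’s documented svydesign/svyglm interface (the data frame, weight column and predictors below are assumed names, and the formula is abbreviated relative to the full model):

    library(survey)

    # Toy data standing in for the weighted sample
    set.seed(1)
    avaai <- data.frame(
      support_ai    = rbinom(500, 1, 0.55),
      gender        = sample(c("female", "male"), 500, replace = TRUE),
      cs_experience = rbinom(500, 1, 0.25),
      weight        = runif(500, 0.3, 3)
    )

    # Survey design object carrying the weights
    design <- svydesign(ids = ~1, weights = ~weight, data = avaai)

    # quasibinomial() avoids non-integer-weight warnings in weighted logistic fits
    fit <- svyglm(support_ai ~ gender + cs_experience,
                  design = design, family = quasibinomial())

    # ORs with 95% CIs, as reported in the results
    exp(cbind(OR = coef(fit), confint(fit)))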

Results

In total, 1983 responses were analysed. Weighted and unweighted sample demographics are shown in table 3. Weighting primarily affected distributions of self-reported health, chronic health condition or disability status and speaking languages other than English at home.

Table 3 | Weighted and unweighted sample demographics

Support for development of AI

Logistic regression results are displayed in figure 1 with weighted proportions in online supplemental file 2. Overall, 56.7% of the weighted sample supported the development of AI. Support was significantly higher among those with computer science experience (weighted proportion supportive=72.1%; OR=1.89; p=0.001) compared with those without such experience; those with moderate (55.6%; OR=1.39; p=0.043) or high (66.3%; OR=1.90; p=0.002) household incomes compared with those with low income; and those with trade certificates/diplomas (57.4%; OR=1.37; p=0.028), bachelor’s degrees (65.6%; OR=1.61; p=0.008) and postgraduate degrees (69.0%; OR=1.75; p=0.022) compared with those with only high school-level education.

Figure 1

OR plot of weighted logistic regression results. Error bars indicate 95% CIs. Index categories are displayed with OR=1. Plots indicate (1) participants’ level of support for artificial intelligence (AI), (2) participants’ level of support for unexplainable AI in healthcare, (3) participants’ support for AI in healthcare that necessitates sharing data and (4) participants’ support for healthcare artificial intelligence (HCAI) that leads to physician deskilling. pw, per week; SEIFA, Socio-Economic Indexes for Areas.

Support for the development of HCAI and trade-offs

Participants were asked to consider whether they supported the development of HCAI in three scenarios. Across the weighted sample, only 27.0% were supportive of HCAI that led to physician deskilling, 28.7% were supportive of unexplainable HCAI and 41.9% were supportive of HCAI that necessitated sharing personal data. Logistic regression results are displayed in figure 1.

Support for unexplainable HCAI was significantly higher among those with computer science experience (43.4%; OR=1.82; p=0.001) and males (32.5%; OR=1.44; p=0.007). Support was significantly lower among those aged 35–54 (25.3%; OR=0.63; p=0.005) and those aged 55+ (25.0%; OR=0.65; p=0.018) compared with those aged 18–34 (36.4%).

Support for AI that necessitates data sharing was significantly higher among males (46.0%; OR=1.37; p=0.011). Participants aged 35–54 (38.4%; OR=0.71; p=0.025) were less likely than those aged 18–34 (48.4%) to be supportive of HCAI that necessitates data sharing.

Support for HCAI that leads to physician deskilling was significantly higher among those with computer science experience (40.0%; OR=1.49; p=0.025) and males (31.6%; OR=1.60; p=0.001).

The analysis showed no association between support for the HCAI trade-offs and household income, living in areas with less socioeconomic disadvantage, living in a capital city, speaking languages other than English at home or having a chronic health condition/disability.

Importance of different features in AI-integrated healthcare

Participants were asked to respond to a series of questions about the importance of various aspects of HCAI implementation. Logistic regression results can be found in figure 2 and weighted proportions for each subgroup can be found in online supplemental file 3. Across all sociodemographic groups, accuracy was the feature most often regarded as important; reducing costs to the healthcare system was least likely to be regarded as important, followed by speed.

Figure 2

OR plots of weighted logistic regression results. Error bars indicate 95% CIs. Index categories are displayed with OR=1. Plots indicate the level of importance attributed to each aspect of artificial intelligence (AI)-enabled care. pw, per week; SEIFA, Socio-Economic Indexes for Areas.

Socioeconomic characteristics

Socioeconomic factors had little effect on the perceived importance of the features. Having a high household income (>$2000 per week) had a weak positive effect on the perceived importance of reducing costs to the healthcare system (64.5%; OR=1.44; p=0.073). SEIFA was not associated with the perceived importance of any of the features.

Demographic characteristics

Demographic characteristics had some associations with the perceived importance of the features. Those who spoke languages other than English at home were significantly less likely to regard explainability (68.0%; OR=0.66; p=0.035) and equity (65.1%; OR=0.66; p=0.035) as very/extremely important. They were also perhaps less likely to perceive accuracy (77.7%; OR=0.65; p=0.056) and accountability (70.6%; OR=0.70; p=0.074) as very/extremely important. Those aged over 55 were more likely than those aged 18–34 to perceive all features as very/extremely important, particularly human oversight (85.0%; OR=1.92; p=0.001); however, this effect was not significant for equity and explainability. Gender and living in a capital city had no significant association with any of the features.

Educational characteristics

Those with postgraduate degrees were less likely than those with a high school-level education to see accuracy (73.9%; OR=0.55; p=0.027), equity (64.0%; OR=0.56; p=0.014), speed (61.1%; OR=0.57; p=0.015) and accountability (61.1%; OR=0.57; p=0.018) as very/extremely important. Those with computer science or programming experience were slightly more likely to see equity (76.0%; OR=1.51; p=0.052) as very/extremely important.

Health-related characteristics

Those with higher self-reported health were significantly more likely to perceive all features as important, except for equity (at p=0.056), speed and accuracy. Those who identified as having a chronic health condition were significantly more likely than those who did not to perceive explainability (81.1%; OR=1.69; p=0.001) and human oversight (83.2%; OR=1.50; p=0.020) as very/extremely important.

Discussion

In this study we examined sociodemographic differences in preferences for healthcare AI using a large weighted Australian sample that was calibrated to the LIA probability sample using a range of behavioural and lifestyle questions, as well as major sociodemographic variables. Overall, 56.7% (95% CI 53.8%–59.0%) of participants were supportive of the development of AI, slightly lower than results from another recent Australian study that also used an online panel, which found 62.4% were supportive.15 A separate analysis of the same AVA-AI survey, combining the LIA probability sample results with the online panel results,14 found that 60.3% (95% CI 58.4%–62.0%) of Australians were supportive of the development of AI. In the unweighted non-probability sample, 54.8% (95% CI 52.5%–57.0%) of participants supported the development of AI, suggesting that the use of an extensive set of variables in the weighting led to some improvement, but that self-selection in online panels may not have been fully corrected by the weighting methodology.

Similar to Zhang and Dafoe’s10 study in the USA, we found that support for the development of AI was higher among those with computer science experience, higher levels of education and higher household incomes. It has been suggested that support for AI is lower among groups with less education and more social disadvantage, whose livelihoods may be more threatened by automation.10 12 The potential for AI to threaten people’s livelihoods by taking jobs appears to be a salient concern in Australia, where Selwyn et al15 found that the prospect of automation and job loss was the most commonly mentioned fear among their Australian sample. Results from our survey appear to support these findings, with measures of social advantage (ie, household income and education) strongly associated with support for the development of AI.

The sociodemographic characteristics associated with support for HCAI were different from those associated with support for AI in general. The items assessing support for HCAI required participants to consider whether they supported the development of HCAI, on balance, when it involved a trade-off (lack of explainability, data sharing or physician deskilling). For each of the HCAI questions, household income and education were no longer predictors of support. For example, 66.3% of the weighted sample with incomes >$2000 per week supported the development of AI in general, but only 30.5% supported the development of unexplainable HCAI. In comparison, 45.9% of those with incomes <$500 per week supported AI in general and 29.7% supported the development of unexplainable HCAI. This suggests that measures of socioeconomic advantage are linked to general support for the development of AI, but that support for specific and potentially harmful applications of HCAI is low regardless of socioeconomic characteristics.

Qualitative research on HCAI with members of the public has found that attitudes towards HCAI are shaped by complex evaluations of the alignment of the technologies with the values of medicine.21 If this is the case, then support for HCAI may be driven less by economic values and more by values relating to healthcare.

The characteristics that we found to be consistent predictors of support for HCAI and their specified trade-offs were having computer science experience, being male and being aged 18–34. Similarly, Zhang and Dafoe10 found that younger people and those with computer science degrees expressed less concern about AI governance challenges than those who were older or did not have computer science qualifications.

Being male, having computer science experience and being in a younger age category were three characteristics that Selwyn et al15 found were associated with higher levels of familiarity with AI. It is possible that subgroups more familiar with AI are more tolerant of its risks. However, Selwyn and colleagues’ study did not control for potential confounding between age, gender and computer science experience, so it is unclear whether age and gender were independently associated with greater familiarity with AI, or whether a greater proportion of their younger male sample also had computer science experience, which itself may be associated with higher familiarity with AI. The relationship between familiarity with AI and tolerance of its risks may warrant further investigation.

Our investigation into subgroup differences in the perceived importance of features of HCAI found that accuracy was regarded as particularly important by all subgroups. This differs from Ploug et al,22 who found, in a choice experiment in Denmark, that factors like explainability, equity and physicians being responsible for decisions were regarded as more important than accuracy. The Danish experiment, however, offered the qualifier that the algorithm would be at least as accurate as a human doctor, whereas our questionnaire did not. Further research could test whether algorithmic performance is more important than other features in circumstances where there is no assurance that the algorithm is as accurate as a human doctor.

Health-related characteristics such as self-reported health and having a chronic health condition or disability had a strong effect on perceived importance attributed to traditionally human aspects of healthcare like explainability, human oversight and accountability. This result is echoed by Richardson et al’s21 finding that people’s discussions about the value of HCAI were often framed by their previous experiences with the healthcare system. Participants with complex health needs may have been more inclined to reflect on whether automated systems could meet all aspects of those needs.

Subgroups that were more likely to be supportive of HCAI were not necessarily more likely to see the features of care that they were trading off as less important. While those who identified as male, those aged 18–34 and those with computer science or programming experience were more likely to support the development of unexplainable AI in healthcare, they were just as likely as others to perceive explainability (‘knowing why a decision is made’) as an important aspect of AI-integrated care. This hints at a complex relationship between people’s support for the development of HCAI and their willingness to make compromises to their healthcare.

Limitations

Given the rapidly shifting landscape around AI, it is possible that public support for AI has changed in the 2 years since the questionnaire was administered. In addition, the AVA-AI survey includes an online panel obtained by non-probability sampling, which is subject to self-selection biases. The weighting methodology helps reduce these effects by accounting for more than basic demographic variables, including age by education, gender, household structure, language spoken at home, self-reported health, early-adopter status and television streaming. Any selection effects due to the predictor variables included in the analysis are also accounted for. However, it is possible that support for HCAI is mediated by confounding factors neither considered in the weighting methodology nor included in the analysis.

One key population that was not represented in the study was those who identified with a gender outside of the male/female binary. Only one participant identified with a gender outside the binary, and this response was excluded from the analysis because there were insufficient participant numbers to form a third gender category. Given that support for AI is lower among certain marginalised groups, consulting gender-diverse individuals about their support for AI is an important consideration for future research.

Finally, the present study is a cross-sectional analysis, from which causation between predictor and outcome variables cannot be inferred. While we found associations between certain sociodemographic characteristics, such as education, and outcomes such as level of support for AI, we cannot ascertain the reasons for these associations. These reasons are likely complex and multifaceted and should be explored in further research.

Conclusion

Respondents who reported having greater ill health or disability were more likely to consider human aspects of healthcare, such as explainability, human oversight and accountability, as important. While factors indicating socioeconomic advantage (higher income, higher education) were associated with general support for AI, these factors were not necessarily related to support for HCAI scenarios. Instead, support for HCAI scenarios was higher among males, younger people and those with computer science or programming experience. Based on other research, these groups may have a higher level of familiarity with AI. Further research should examine the relationship between familiarity with AI and support for the development of AI.