Original research
Conditionally positive: a qualitative study of public perceptions about using health data for artificial intelligence research
Melissa D McCradden,1 Tasmie Sarker,2 P Alison Paprica2,3

  1. Department of Bioethics, Hospital for Sick Children, Toronto, Ontario, Canada
  2. Health Team, Vector Institute, Toronto, Ontario, Canada
  3. Institute for Health Policy, Management and Evaluation, University of Toronto, Toronto, Ontario, Canada

Correspondence to Dr P Alison Paprica; alison.paprica@utoronto.ca

Abstract

Objectives Given widespread interest in applying artificial intelligence (AI) to health data to improve patient care and health system efficiency, there is a need to understand the perspectives of the general public regarding the use of health data in AI research.

Design A qualitative study involving six focus groups with members of the public. Participants discussed their views about AI in general, then were asked to share their thoughts about three realistic health AI research scenarios. Data were analysed thematically using a qualitative description approach.

Settings Two cities in Ontario, Canada: Sudbury (400 km north of Toronto) and Mississauga (part of the Greater Toronto Area).

Participants Forty-one purposively sampled members of the public (21M:20F, 25–65 years, median age 40).

Results Participants had low levels of prior knowledge of AI and mixed, mostly negative, perceptions of AI in general. Most endorsed using data for health AI research when there is strong potential for public benefit, provided that concerns about privacy, commercial motives and other risks are addressed. Inductive thematic analysis identified AI-specific hopes (eg, potential for faster and more accurate analyses, ability to use more data), fears (eg, loss of human touch, skill depreciation from over-reliance on machines) and conditions (eg, human verification of computer-aided decisions, transparency). There were mixed views about whether data subject consent is required for health AI research, with most participants wanting to know if, how and by whom their data were used. Though it was not an objective of the study, the realistic health AI scenarios were found to have an educational effect.

Conclusions Notwithstanding concerns and limited knowledge about AI in general, most members of the general public in six focus groups in Ontario, Canada, perceived benefits from health AI and conditionally supported the use of health data for AI research.

  • qualitative research
  • health informatics
  • ethics (see medical ethics)

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • A strength of this study is the analysis of how diverse members of the general public perceive three realistic scenarios in which health data are used for artificial intelligence (AI) research.

  • The detailed health AI scenarios incorporate points that previous qualitative research has indicated are likely to elicit discussion.

  • Notwithstanding the diverse ethnic and educational backgrounds of participants, the sample overall represents the general (mainstream) population of Ontario, and results cannot be interpreted as representing the views of specific subpopulations.

  • Given the low level of knowledge about AI in general, it is possible that the views of participants would change substantially if they learnt and understood more about AI.

Introduction

Modern artificial intelligence (AI) and its subfield machine learning (ML) offer much promise for deriving impactful knowledge from health data. Several recent articles summarise health AI and ML achievements, and what the future could look like as more health data become available and are used in AI research and development.1–5 Given that AI and ML require large amounts of data,6 public trust in, and support for, using health data for AI/ML will be essential. Many institutions are exploring models for using large representative datasets of health information to create learning healthcare systems.7 8 Public trust and social licence for such work are essential8 because, in contrast with clinical studies that have consent-based participation from data subjects, ‘big data’ research is often performed without expressed consent from the data subjects.9 Previous studies exploring public attitudes toward data-intensive health research in general, that is, without an AI/ML focus, found that most members of the mainstream public are supportive provided there are appropriate controls.10–13 While underscoring the need to address the public’s concerns, studies in Canada, the UK, the USA and other jurisdictions suggest that members of the mainstream public view health data as an asset that should be used as long as their concerns related to privacy, commercial motives and other risks are addressed.10–13

However, we cannot assume that this general but conditional public support for data-intensive health research extends to AI/ML, for several reasons. Foremost, research has shown that members of the general public have a low understanding of AI in general, alongside AI-specific hopes and fears including loss of control over AI, ethical concerns and the potential negative impact of AI on work.14–18 Second, while there is a general trend toward support for health AI,19 there is also recent negative press about large technology companies using health data for AI, including patients suing Google and the University of Chicago Medical Center20 and the view of the National Data Guardian at the UK’s Department of Health that the sharing of patient data between the Royal Free Hospital of London and Google DeepMind was legally inappropriate.21 Third, there is decreasing confidence that accepted approaches to de-identification are sufficient to ensure privacy in the face of AI’s capabilities.22

To date, there has been limited scholarly research on public perceptions of health AI. Most published studies have focused on the views of patients, who may not be representative because they stand to benefit from AI applications.16 Further, most published studies have focused on computer vision health AI applications in radiology and dermatology, which represent only a small fraction of the potential applications of AI in health.23–25 Additionally, there is a need to understand public perspectives, as distinct from patient perspectives, because health AI research may rely on large datasets that include information about people who do not have health conditions and/or do not stand to benefit directly from the research. Accordingly, the objective of this study was to learn more about how members of the general public perceive health data being used for AI research.

Methods

Study design

Focus groups were conducted using semi-structured discussion guides designed to prompt dialogue among participants (see online supplemental file 1). Each 2-hour focus group had four parts: (i) warm-up exercise and participant views about AI in general, (ii) brief introduction of the Vector Institute for AI (Vector) and plain language examples of AI/ML supplied by Vector, (iii) discussion of participant views on realistic but fictional health AI research scenarios (see online supplemental file 2) and (iv) time for questions with a Vector representative (PAP). The three AI research scenarios, which were presented in varying order across the groups at each site, comprised an AI-based cancer genetics test, an AI-based app to help older adults ageing at home and an accessible health dataset of lab test results for AI. Before each group discussion began, participants were asked to make an independent written decision about the acceptability of each health AI research scenario, to increase the likelihood that they would state their own initial views rather than echo the views of others.

Setting

The sessions took place in October 2019 in facilities designed for focus groups with audio-recording capabilities and space for observation (PAP, MDM, TS) behind a one-way mirror. This allowed the research team to take notes and discuss emerging findings in real time without distracting participants. Three focus groups were conducted in northern Ontario (Sudbury, 400 km north of Toronto) and three in the Greater Toronto Area (Mississauga).

Participants

A total of 41 participants took part in the research (tables 1 and 2): 20 participants in Sudbury and 21 in Mississauga. Participants were contacted by the Canadian subsidiary of Edelman (a communications company that conducts market research), drawing from a database, established by Canada Market Research (a company that provides market research services and field service support), of individuals who had signed up to participate in research studies. Purposive sampling was used to identify eight invitees for each focus group who collectively had variation in age, gender, income, education, ethnicity and household size.26 Of the 48 people approached, one person arrived unwell and was compensated but sent home, and six chose not to attend (reasons not captured). To create an environment in which participants were likely to be comfortable sharing their views, in each city there was an afternoon focus group with individuals aged 25–34 with mixed incomes, followed by a 17:00 focus group with people aged 35–65 with lower incomes and a 19:30 focus group with people aged 35–65 with higher incomes. Participants learnt the first name and city or town of residence of the other people in their focus group, plus whatever additional information participants chose to share about their work, family, education and so on.

Table 1

Characteristics of participants (N=41)

Table 2

Characteristics of participants by focus group

For practical reasons, recruitment for all focus groups occurred at one time. As part of the recruitment process, participants were notified of the purpose of the focus groups, that is, to learn more about how members of the public perceive the use of health data for AI research. Participants were also informed of the purpose of each focus group, in writing, as part of the process to obtain their written informed consent. At the end of each session, participants were provided with a cheque for $C100 as compensation for their time.

Patient and public involvement

The central research question—how do members of the general public perceive the use of health data for AI research?—was directly informed by the results of previous qualitative studies with 60+ members of the public.10 11 Before the research started, the draft scenarios were reviewed and refined based on feedback from the Manager of Public Engagement at ICES (an independent not-for-profit data and research institute in Ontario, Canada, formerly referred to as the Institute for Clinical Evaluative Sciences) and multiple members of the public, including students at the University of Toronto and friends and family members of Vector staff. The corresponding author, PAP, is a coauthor of the Consensus Statement on Public Involvement and Engagement with Data-Intensive Health Research27 and the Lead for the Public Engagement Working Group of Health Data Research Network Canada. Through those and other initiatives, PAP has connections to multiple patient and public advisors, from whom the research team will seek advice when disseminating study findings, including through non-academic channels such as ‘The Conversation’ and social media.

Data collection

Focus groups were moderated by an experienced male moderator employed by Edelman (10 years of professional experience) who had no prior relationship with the participants, no prior knowledge about AI/ML and no vested interest in the outcome of this project. This information was disclosed to participants at the beginning of each session. Having an external moderator enabled the research team to benefit from the experience of a skilled professional, provided an environment in which participants would be more likely to feel free to express negative opinions about AI and the Vector Institute than if a member of the Vector Institute staff were facilitating, and allowed the research team to focus on observing the participant discussion and taking field notes. The discussions followed a semi-structured discussion guide (see online supplemental file 1) which allowed for free-flowing conversation as well as facilitated discussion of written scenarios, with prompts on certain questions. All members of the research team (MDM, TS, PAP) observed every focus group from behind a one-way mirror and took independent field notes during the sessions. Focus group participants were informed that researchers were in attendance behind the one-way mirror and that sessions were audio-recorded. Audio-recordings were transcribed verbatim by Edelman and participant names were replaced with a code (eg, M01 for male 1) before the transcripts were provided to the research team for analysis.

Data analysis

Data were analysed by MDM, TS and PAP using a qualitative descriptive approach, a naturalistic form of inquiry that aims to remain ‘data-near’ while inductively interpreting and thematically grouping and detailing respondent experiences, beliefs and expectations.28 29 MDM, TS and PAP worked together to develop the descriptive coding framework based on the verbatim transcripts and the field notes taken during the focus group sessions. The transcripts were read and re-read as coding was performed independently by MDM and TS using a combination of Microsoft Word and Microsoft Excel; no dedicated qualitative analysis software was used to supplement human coding. MDM, TS and PAP used an inductive analytic approach to derive themes based on the data, and refined the themes through group discussion. Differences in opinion between MDM, TS and PAP were resolved through iterative discussions. Review and coding of transcripts stopped when inductive thematic saturation was achieved, that is, when MDM, TS and PAP agreed that additional coding and thematic analysis would not result in any new codes or themes. Though the sample was not designed or intended to provide information about variation in perspectives based on gender, location or age, the research team analysed the theme-coded statements for each of those characteristics and did not find any consistent patterns. The research team was open to recruiting participants for additional focus groups if there were insufficient data to identify themes; however, because themes were strong and consistent across the focus groups, no additional participants were recruited. No formal participant feedback was sought, although the moderator continually reflected focus group participants’ views back to them to confirm that their views were being captured adequately.

Results

The analysis identified mixed, mostly negative views about AI in general (theme 1). Three major themes emerged from participants’ discussion of the health AI research scenarios: (i) participants perceived benefits when data are used in health AI research (theme 2), (ii) they identified concerns and fears about the use of data in health AI research and about potential negative impacts of health AI applications (theme 3) and (iii) they described the conditions under which the use of health data for AI research and AI applications would be more acceptable (theme 4). Finally, though it was not an objective of the study, the realistic health AI scenarios were found to have an educational effect (theme 5).

Theme 1: mixed, mostly negative views about AI in general

Participants had mixed views about AI, but mostly unfavourable perceptions (box 1). Negative comments referred to the potential for job loss, lack of human touch and humans losing control over AI, with multiple references to malicious robots (eg, Terminator, HAL 9000). Several participants shared stories of advertisements being presented to them on their mobile phones after they had spoken about a topic, which they interpreted as proof of AI surveillance of their behaviour. Some participants expressed hope for AI in terms of autonomous vehicles, AI’s perceived ability to increase convenience, and the ways that AI could be useful in dangerous environments not suitable for humans. However, most of the participants who expressed positive statements about AI also noted concerns based on uncertainty about how AI will affect society.

Box 1

Mixed, mostly negative views about artificial intelligence (AI) in general

  • I feel like it’s one of those things that we’d all be diving headfirst towards, but may be something that could have long-term implications for us as a society down the road that maybe we didn’t fully understand when we dove into it at first. (M015-Mississauga2)

  • So, when I think of AI, I have mixed feelings about it because I think about, ‘Will my job exist in the future, or will most jobs exist in the future?’ …. I think very few of us actually know what AI could be in the next year, ten years, 50 years from now. (F017-Mississauga2)

  • Are we phasing ourselves out? (M008-Sudbury3)

  • I think it’s impersonal. Not like that human touch. Where there’s substance and feelings or emotions. (F002-Sudbury1)

  • It’s portrayed as friendly and helpful, but it’s always watching and listening … So I’m excited about the possibilities, but concerned about the implications and reaching into personal privacy (M007-Sudbury2)

  • You talk to somebody about something and then an ad will pop up on your phone for it. It’s almost like you’re being listened to (F008-Sudbury3)

  • Scary. Out of control … are they [AI] going to take over. It’s going to be jobless. (F004-Sudbury1)

Theme 2: hopes and perceived benefits of health AI research scenarios

Participants perceived benefits from the uses of health data in each of the three realistic health AI research scenarios (box 2). Perceived benefits were both epistemic (eg, the perception that health data combined with AI research could generate knowledge that would otherwise be inaccessible to humans) and practical (eg, the ability of AI to sift through large amounts of data, perform real-time analyses and provide recommendations to healthcare providers and directly to patients). Of the three AI research scenarios presented (table 3), participants saw the greatest benefit in the AI-based cancer genetics test, where it was perceived that AI research could ultimately save lives. Participants also commented favourably on the benefits of research to develop an AI-based app for older adults in terms of helping people maintain independence, and on the potential for a large laboratory test results dataset to support health AI training, education and discovery research (table 3).

Box 2

Hopes and perceived benefits of health artificial intelligence (AI) research scenarios

  • It could be a help worldwide to see similar symptoms … it will be quicker because using AI in a computer, you’ll be able to get that data and those analytics quicker. (F003-Sudbury1)

  • I think it’s fantastic. The more data they collect, the more they’ll be able to identify the patterns of these cancers and where they originate from. I think it’s just great. (F009-Sudbury3)

  • When you can reach out and have a sample size of a group of ten million people and to be able to extract data from that, you can’t do that with the human brain. A group, a team of researchers can’t do that. You need AI. (M018-Mississauga3)

  • You put everything into a data[set], somebody’s going to learn something on that. (M002-Sudbury1)

  • There’s just so much potential value … this can potentially save lives. (M017-Mississauga2)

  • If I could do that as an elderly person and keep my integrity and pride and myself, like staying home instead of having to be placed in a long-term care facility. And this little [AI-based] app can help me to stay home and not have a nurse come in my house two, three times a day. (F002-Sudbury1)

  • A lot of times doctors are very busy … So if they have a database or something where they could put in a particular disease or something they’re suspecting, and then this database just brings up - narrows down what the possibilities are. That might be better. (F013-Mississauga1)

Table 3

Summary of main participant views about three health AI research scenarios

Theme 3: fears and perceived drawbacks of health AI research scenarios

Participants were primarily concerned that health data provided for one health AI purpose might be sold or used for other purposes that they did not agree with (box 3). They also expressed concern that AI research could lead to AI applications with negative impacts, including lack of human touch when machines are deeply integrated into care, job losses and the potential for AI to erode human skills over time if people become ‘lazy’ and overly reliant on computers. Some additional fears and concerns specific to the individual scenarios were noted, including inability to guarantee privacy when genetic information is used for AI, concern about companies misusing or selling data, and scepticism that older adults would be able to use an AI-based app.

Box 3

Fears and perceived drawbacks of health artificial intelligence (AI) research scenarios

  • There’s no guarantee that they [the people developing AI] are going to have any kind of integrity or confidentiality or anything like that. (F003-Sudbury1)

  • Are they going to take my information, are they going to sell it? So, it kind of makes you scared when other companies are buying it. (F016-Mississauga2)

  • For me the big question is ownership of that data. (M018-Mississauga3)

  • I don’t find it very appropriate. First of all, it’s going to take jobs away from health professionals. If the app has to tell them, suggest things or whatever, there’s no communication there, like face-to-face. (F010-Sudbury3)

  • But it also misses out on that human component where the (personal support worker) comes in and talks to you and things like that. (M007-Sudbury2)

  • The concern is always that you lose some of those soft skills. And how many times in the medical field have you heard that a nurse practitioner or a doctor went on a hunch and found out what the problem was. So that’s a concern, that you lose some of those soft skills and that relies on intuition when you rely solely on AI, on computers and programs and algorithms. (M010-Sudbury3)

Theme 4: conditions under which health AI research scenarios are more acceptable

Many participants suggested specific conditions that would make the health AI research scenarios more acceptable to them (box 4). These included assurance that privacy will be protected and transparency about how data are used in health AI, often expressed in terms of a preference that data subjects be fully informed about how data will be used and given the option of providing informed consent or opting out. In addition, participants repeatedly stated that AI research should focus on the development of AI applications that help humans make decisions rather than on autonomous decision-making systems.

Box 4

Conditions under which health artificial intelligence (AI) research scenarios are more acceptable

  • I think if you can eliminate people’s fear or risk about their information like the names and identity being removed so the fear of the data being hacked. (M016-Mississauga2)

  • I find de-identified is very loose terminology when you’re talking about DNA and medical records. (M020-Mississauga3)

  • The data may be used for research, but they may not be fully aware of it. They may have clicked ‘I accept’ and that part was like—I was like, ‘That’s kind of tricky, kind of’. (F002-Sudbury1)

  • That’s the thing that threw me off … it was the fact that you didn’t get to choose that your information gets used in this process … ‘Give me a choice’. (M012-Mississauga1)

  • Transparency … Why are they even taking the data in the first place? How would it help people in the future? Just understanding the purpose behind all of this. (M017-Mississauga2)

  • As long as it’s a tool, like the doctor uses the tool and the doctor makes the call. As long as the doctor is making the call, and it’s not a computer telling the doctor what to do. (M001-Sudbury1)

  • But I think that it should be stressed for the people that are going to be using it, that it should not be their primary source of health information. They shouldn’t skip going to the doctors. This is to be used in conjunction with that. (F007-Sudbury2)

Theme 5: educational effect of realistic health AI research scenarios

There was a notable difference between the dystopian and/or utopian statements participants made about AI at the beginning of each focus group (box 1) and their comments about the health AI research scenarios (boxes 2–5 and table 3), which tended to be more grounded in reality. In some cases, participants stated directly that the health AI research scenarios had an educational effect for them (box 5).

Box 5

Educational effect of health artificial intelligence (AI) research scenarios

  • I think our discussion prior to any of these scenarios was more geared toward just generally based [AI], wasn’t more toward the health … I didn’t think it was so appropriate but then seeing the other two [health AI] scenarios with it [the third AI scenario], I think it could all go hand in hand in the healthcare system. I’m leaning more towards it than my opinion was before. (F006-Sudbury2)

  • I’m not usually that positive, but I’m pretty positive about all of it, everything that we read [the health AI scenarios] so far … I’m anti-computer … But everything I’ve seen so far … I think it’s all good information and it’s all good tools, but the keyword ‘tool’. It’s a tool. And I see this being an awesome tool as well. (M004-Sudbury1)

  • [Before Scenarios] You can create a Terminator, literally, something that’s artificially intelligent, or the Matrix … it goes awry, it tries to take over the world and humans got to fight this. Or it can go in the absolute opposite where it helps … androids … implants … Like I said, it’s unlimited to go either way. [After reviewing health AI research scenario] I know what they’re trying to get done. I agree with all these things. I think they’re extremely beneficial for everyone … So now I can say, you know what, I’m confident that this is going in the direction of where I would like this to go because I can’t find a downside to an app like this. (M020-Mississauga3)

Discussion

After discussing the health AI research scenarios, participants demonstrated mixed, but generally positive, views about using health data in AI research, provided certain risks were mitigated and conditions were met. Consistent with the literature, this study found that members of the general public have little understanding of AI and ML in general. Given this low level of knowledge, the dystopian and utopian extremes presented in the media, and the uncertainty about the future of AI and ML that runs across society, the term ‘hopes and fears’ is likely a better fit than ‘benefits and risks’ to describe how members of society perceive AI.15 16

Overall, participants’ perceptions of the three realistic health AI research scenarios were more positive than their perceptions of AI in general. Many of the views expressed by participants were similar to the findings from a systematic review of public views of data-intensive health research,10 which found general support for using health data for research subject to certain conditions, concerns about privacy and data security, the requirement that there be a public benefit, more trust in public sector studies than in private sector studies, and varying views on the need for consent. This study adds participants’ positive views about the potential for health AI research to derive benefits from large amounts of data that might otherwise go unused, because AI can produce faster and more accurate analyses. As has been observed for data-intensive health research in general, participants were concerned about risks to privacy, and potential abuses and misuses of their health data, particularly when companies work with health data.10 11 13 High-profile news stories about data breaches, as well as coverage of lawsuits (eg, related to Google20 21), can heighten these concerns. Participants’ support for the scenarios was also conditional on transparency about how data are used for health AI. Some participants stated directly that consent should be obtained before data are used for health AI, while other participants noted that current consent processes (eg, long forms) are not the solution, and many emphasised the need for plain language explanations of how data are used for health AI, preferably delivered by a human. This finding is aligned with the American Academy of Dermatology Position Statement, which states that ‘there should be transparency and choice on how their medical information is gathered, used and stored and when, what and how augmented intelligence technologies are used in their care process’.30 In this regard, the views of focus group participants were similar to the general public’s views on data-intensive health research in general10 11; that is, they had mixed views on consent, with most people primarily wanting to know if, how and when their data were used for research.

Though care was taken to construct scenarios focused on using data in health AI research, participants’ support was often associated with the perceived benefits and risks of AI applications, even when scenarios highlighted the fact that there was no guarantee that the research would lead to the successful development of an AI application. Given the Gartner Hype Cycle,31 this may present a risk for AI/ML research: if members of the public assume that health AI research will always be successful, there is increased likelihood of disillusionment, potentially leading to an AI winter and a decrease in research funding for AI/ML. Consistent with previous studies of public perspectives about health AI,16 23–25 participants’ support for health AI research was highest when they believed that the AI research could bring an important new capability to a problem, beyond what humans could contribute. Each of the three health AI research scenarios was viewed as acceptable by most of the focus group participants (table 3). Of the three scenarios, the AI-based cancer genetics test was the most supported, with several participants linking their support to personal or family experiences with cancer. The next most supported scenario was the AI-based app to help older adults ageing at home. Participants were also generally supportive of the scenario focused on creating a large accessible dataset, but were direct in stating that the benefits from it were less clear to them.

Participants’ concerns focused more on health AI applications than on health AI research itself. As has been reported in the literature,15–19 23–25 the main concern, and condition for support of health AI research, was that the AI application being developed be a tool used by humans and not be used without humans ‘in the loop’. This condition is not surprising given the general fears associated with all AI, and it is also aligned with the American Academy of Dermatology Position Statement on Augmented Intelligence (their preferred term over AI), which refers to ‘symbiotic and synergistic roles of augmented intelligence and human judgement’.30

Taken as a whole, the findings of this study and other qualitative research should influence how data are used in health AI research and how health AI is applied outside of research settings. Given widespread uncertainty about exactly how AI will impact society, and the increasing use of public data (including unconsented data) for AI, we need to understand which uses of health data for AI research are supported by the public, and which are not. Transparency and plain language communication about health AI research are necessary but not sufficient.32 This is not simply a matter of informing members of the public about how health data are used in AI research. Consistent with the Montreal Declaration for Responsible Development of AI,33 the objective should be to take the science of health AI in directions that the public supports. By behaving in a trustworthy manner, respecting public concerns and involving members of the public in decisions related to health data use, we can align with the Consensus Statement on Public Involvement and Engagement with Data-Intensive Health Research27 to establish socially beneficial ways of using data in health AI research.

Limitations

This study has limitations. It is possible that participants from other settings, for example, rural Ontario, remote northern Ontario, specific subpopulations or other jurisdictions, would have different views. Given the low level of knowledge about AI in general, it is possible that the views of participants would change substantially if they learnt and understood more about AI. There are many uses of health data for AI that were not included in the scenarios in this study, and it is possible that participants would have different views if different or additional scenarios had been presented.

Acknowledgments

The authors thank Elham Dolatabadi, James Fernandez, Ian Gormely, Jenine Paul, Céline Moore, Andrea Smith and Linda Sundermann for their contributions to the health AI scenarios.

References

Supplementary materials

  • Supplementary Data


Footnotes

  • Contributors All authors contributed to the design of the study, attended all focus groups, developed and refined the themes, contributed text directly to the manuscript, approved the final version submitted for publication and agreed to be accountable for all aspects of the work. MDM and PAP led the literature review. PAP led the work to design the scenarios with contributions from TS, MDM and multiple other individuals who are acknowledged. TS and MDM both independently coded all transcripts. PAP reviewed all coding and performed analyses with MDM and TS to develop the descriptive coding framework and identify themes. PAP led the preparation of the manuscript.

  • Funding This research was funded by the Vector Institute.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Ethics approval The study was approved by the Research Ethics Board of the University of Toronto in Toronto, Ontario, Canada, protocol number 38084.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement All data relevant to the study are included in the article or uploaded as supplemental information. All authors had access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. No unpublished data are available outside of the study team.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.