Needs and expectations for artificial intelligence in emergency medicine according to Canadian physicians

Abstract

Background

Artificial Intelligence (AI) is recognized by emergency physicians (EPs) as an important technology that will affect clinical practice. Several AI-tools have already been developed to aid care delivery in emergency medicine (EM). However, many EM tools appear to have been developed without a cross-disciplinary needs assessment, making it difficult to understand their broader importance to general practice. Clinician surveys about AI-tools have been conducted within other medical specialties to help guide future design. This study aims to understand the needs of Canadian EPs for the appropriate use of AI-based tools.

Methods

A national, cross-sectional, two-stage, mixed-method electronic survey of Canadian EPs was conducted from January to May 2022. The survey collected demographic and physician practice-pattern data, clinicians’ current use and perceptions of AI, and individual rankings of which EM work-activities would most benefit from AI.

Results

The primary outcome is a ranked list of high-priority AI-tools for EM that physicians want translated into general use within the next 10 years. When ranking specific AI examples, ‘automated charting/report generation’, ‘clinical prediction rules’ and ‘monitoring vitals with early-warning detection’ were the top items. When ranking by physician work-activities, ‘AI-tools for documentation’, ‘AI-tools for computer use’ and ‘AI-tools for triaging patients’ were the top items. For secondary outcomes, EPs indicated AI was ‘likely’ (43.1%) or ‘extremely likely’ (43.7%) to be able to complete the task of ‘documentation’ and indicated either ‘a great deal’ (32.8%) or ‘quite a bit’ (39.7%) of potential for AI in EM. Further, EPs were either ‘strongly’ (48.5%) or ‘somewhat’ (39.8%) interested in AI for EM.

Conclusions

Physician input on the design of AI is essential to ensure the uptake of this technology. Translation of AI-tools to facilitate documentation is considered a high priority, and respondents had high confidence that AI could facilitate this task. This study will guide future directions regarding the use of AI for EM and help direct efforts to address prevailing technology-translation barriers, such as access to high-quality application-specific data and the development of reporting guidelines for specific AI applications. With a prioritized list of high-need AI applications, decision-makers can develop focused strategies to address these larger obstacles.

Background

Artificial Intelligence (AI) is the science and engineering of enabling computers to solve problems traditionally requiring human decision-making [1]. Within Emergency Medicine (EM), physicians recognize that AI will have an immense impact on patient care [1, 2]. EM is one of a few specialties that manages both acute and sub-acute, undifferentiated patients of all ages. With such heterogeneity, there are many different potential ways in which AI can augment care in the Emergency Department (ED). In ‘generalist’ specialties such as EM, it is not always apparent which uses of AI will be most high-yield in the near future.

Over the last decade, an increasing number of original research studies and scoping reviews have been published that outline AI-tools for the ED. These articles describe multiple motivations for ED AI applications, for example, to improve patient safety through AI-enabled patient monitoring; to increase the speed and accuracy of triage, or the diagnosis and prognosis of a range of diseases or clinical-syndromes; to aid in targeted medication delivery; to augment imaging interpretation; and many others [3,4,5,6].

Despite the growth of clinical AI-tools, there are many obstacles to the implementation of AI technology in medicine. These include concerns about responsibility for AI-related medical error, public perception, legal regulation, and the “black-box” phenomenon, or lack of ‘explainability’ of how an AI system reaches its conclusions [1]. Moreover, from an adoption perspective, there is a lack of input from medical professionals in the needs assessment and later design of such applications [1, 6, 7]. To address this limitation, qualitative surveys have been conducted in specialties outside of EM to assess the needs of medical professionals that would benefit from new AI-tools [8,9,10,11,12,13,14]. Specifically, these studies explore physicians’ understanding and concerns about the technology, quantify their expectations, and identify needs that could guide the development of AI-tools.

A similar needs-analysis of AI use in EM does not exist. Several literature surveys summarize the current developments and applications of AI in EM, while commenting on its potential future benefits [1, 15,16,17]. Yet, few insights exist about how EPs currently use AI, their understanding of this technology, and importantly, how they want AI to be used in the clinical workflow or where they believe AI design efforts should be focused.

The primary aim of this study is to determine which EM work-activities are the highest priority for new AI-tool development in the near future. Secondary aims include identifying Canadian EPs’ understanding of AI, gauging how AI is used in their practice, and quantifying their beliefs about the impact of AI on EM. Answering these questions will help address the need for additional user input in the development of AI for ED applications.

Methods

This study is a cross-sectional mixed-method electronic survey of Canadian physicians practicing EM, conducted in the spring of 2022. The original survey is included in Appendix A, and was implemented electronically using Opinio (ObjectPlanet Inc., Norway), a secure online platform.

Participants were contacted using the Canadian Association of Emergency Physicians (CAEP) research-survey distribution service and the Society of Rural Physicians of Canada (SRPC) listserv. Residents, fellows and staff physicians practicing EM in Canada were surveyed. The study aimed to recruit 365 respondents (5% margin of error at a 95% confidence level) from an assumed total population of 3431 physicians, calculated from the Royal College (RC) medical workforce database [18]. Assuming a 20% response rate, a minimum of 1820 survey invitations was targeted. CAEP sent a total of three email blasts, spaced one month apart, to its 1494 subscribed members, and SRPC sent one email blast to its 350 members. The enrollment period was four months. The results are anonymous and unlinked to respondents’ identifying information. Participants could optionally enroll in a prize draw for five gift cards of fifteen Canadian dollars each. The study was approved by the Research Ethics Board of Dalhousie University (File No. 1026940).
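
To reproduce the sample-size arithmetic, the sketch below applies the standard Cochran formula with a finite-population correction. This is an illustration only: the paper does not state which calculator was used, so the reported targets (365 and 1820) may reflect slightly different assumptions or rounding.

```python
import math

def cochran_with_fpc(population: int, moe: float = 0.05, z: float = 1.96, p: float = 0.5) -> int:
    """Cochran sample-size estimate with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / (moe ** 2)  # infinite-population estimate (~384 for these inputs)
    n = n0 / (1 + (n0 - 1) / population)      # finite-population correction
    return math.ceil(n)

target = cochran_with_fpc(3431)     # assumed population from the RC workforce database [18]
invites = math.ceil(target / 0.20)  # assuming a 20% response rate, as in the paper
print(target, invites)
```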

A survey draft was developed from similar studies in other medical specialties [8,9,10,11,12]. In addition, questions that aim to measure ‘technophilia’ (one’s enthusiasm for technology) were included from the TechPH scale, a validated instrument [19]. The development of additional original questions is described below.

First, a list of EM ‘work-activities’ performed by senior doctors was identified. This list was adapted from a systematic review by Abdulwahid et al. that proposes a classification system for EP work-activities [20]. Second, a list of existing AI-tools for EM was generated from scoping reviews; the final list was determined by consensus from the authors [21,22,23,24,25]. These two lists were used in two separate sections of the survey as outlined below. First, respondents were asked about their awareness and prior use of the AI-tools from these lists; second, respondents were asked to rank the priority of AI-tools on these lists.

Following iterative revisions, the survey was pilot tested on ten local EPs for written feedback (two FRCPC-EM [Fellow of the Royal College of Physicians of Canada, Emergency Medicine] residents, two Pediatric EM staff, two CCFP-EM [Certification in the College of Family Physicians, Emergency Medicine] staff physicians, and four FRCPC-EM staff physicians); the group was balanced with respect to biological sex and to senior versus junior staff.

The final survey is divided into four sections: (Section-I) Demographics; (Section-II) Secondary Outcomes: Knowledge and Comfort with Technology; (Section-III) Primary Outcome: Opportunities for AI-Tools in Patient Care in EM; and (Section-IV) Secondary Outcomes: Beliefs and Opinions about AI Impact and Significance.

The de-identified data were analyzed to produce summary statistics, and descriptive statistics were used to outline physicians’ rankings. The TechPH index, a composite score describing ‘technophilia’, was calculated from the TechPH scale included in Section-II; see Appendix B for details.
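
The exact index calculation is documented in Appendix B; purely as an illustration, the sketch below assumes the TechPH index is the mean of the techEnthusiasm items and the reverse-coded techAnxiety items on a one-to-five Likert scale. The item values and the reverse-coding scheme are assumptions, not the instrument’s published scoring.

```python
import statistics

# Hypothetical answers (1-5 Likert) from one respondent; item wording omitted.
enthusiasm = [4, 3, 4]  # techEnthusiasm items: higher = more enthusiastic
anxiety = [2, 3, 2]     # techAnxiety items: higher = more anxious

# Assumption: reverse-code anxiety (so 5 = low anxiety), then average all six items.
reversed_anxiety = [6 - a for a in anxiety]
techph_index = statistics.mean(enthusiasm + reversed_anxiety)
print(f"TechPH index: {techph_index:.2f}")  # 1 = high technology-anxiety, 5 = high enthusiasm
```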

Section-II includes both the EM ‘work-activities’ list and the list of existing AI-tools. Here, respondents were asked to indicate their awareness and prior use of these examples.

Section-III, measuring the primary outcome, asked participants to rank their top three choices from the same two lists of EM ‘work-activities’ and existing AI-tool items; an item’s total rank was calculated by weighted sum (see Appendix B for details).
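
Appendix B defines the actual weights; as a minimal sketch, assume a Borda-style scheme in which a first-place vote earns three points, a second-place vote two, and a third-place vote one, summed across respondents. The ballots, item names, and weights below are hypothetical.

```python
from collections import Counter

# Each respondent submits an ordered top-3 ballot (hypothetical data).
ballots = [
    ["documentation", "computer use", "triage"],
    ["documentation", "triage", "imaging"],
    ["computer use", "documentation", "triage"],
]

WEIGHTS = (3, 2, 1)  # assumed points for first, second, and third place
scores = Counter()
for ballot in ballots:
    for weight, item in zip(WEIGHTS, ballot):
        scores[item] += weight

for item, score in scores.most_common():
    print(item, score)  # documentation 8, computer use 5, triage 4, imaging 1
```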

Analyses to assess rank-order preference, along with covariate analyses, were completed using the methodology outlined by Lee et al. [26]. The following variables were selected a priori to assess whether they significantly impact rank-order preference: province of practice, hospital setting, prior educational focus (engineering or computer science versus other), TechPH index, prior clinical or research experience with AI, and years in practice.
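
The formal rank-order and covariate analyses follow Lee et al. [26] and were run in R; the sketch below is not that method, only a simplified stand-in for intuition. It bootstraps hypothetical ballots to estimate how often the observed top item keeps first place under resampling of respondents.

```python
import random
from collections import Counter

def weighted_scores(ballots, weights=(3, 2, 1)):
    """Weighted-sum (Borda-style) scores for ordered top-3 ballots."""
    scores = Counter()
    for ballot in ballots:
        for w, item in zip(weights, ballot):
            scores[item] += w
    return scores

def top_rank_stability(ballots, n_boot=10_000, seed=0):
    """Fraction of bootstrap resamples in which the observed top item stays first."""
    rng = random.Random(seed)
    observed_top = weighted_scores(ballots).most_common(1)[0][0]
    wins = 0
    for _ in range(n_boot):
        resample = [rng.choice(ballots) for _ in ballots]
        if weighted_scores(resample).most_common(1)[0][0] == observed_top:
            wins += 1
    return wins / n_boot

ballots = [  # hypothetical respondent ballots, first to third choice
    ["documentation", "computer use", "triage"],
    ["documentation", "triage", "imaging"],
    ["computer use", "documentation", "triage"],
    ["documentation", "computer use", "imaging"],
]
print(top_rank_stability(ballots))  # values near 1.0 suggest a robust first rank
```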

Responses to open-ended questions were grouped and summarized in Appendix C. All statistical analysis was completed using R statistical software (R Foundation for Statistical Computing, Austria).

Results

The four-month enrollment period ended before the target of 365 responses was reached. In total, 1844 physicians were invited to participate, 230 enrolled in the survey, and 171 completed all questions.

Table 1 summarizes the demographic, training, and clinical characteristics of respondents. Appendix D, Table 2 shows details on employment status and time-in-practice of respondents. Approximately half (53.6%) of respondents were within their first 10 years of practice.

Table 1 Physician Demographic Data

The primary outcomes of this study are the top priorities for new ED AI-tool development in the next 10 years. Figure 1 demonstrates the priority ranking for new AI-tools categorized by work-activities, based on physician opinion. The top three items are ‘AI-tools for documentation’, ‘AI-tools for computer-use’ and ‘AI-tools for triaging of patients.’ These ranks were statistically significant, and there was no variation in rank-order with the sub-group analysis. The ‘AI-tools for computer-use’ work-type focuses on ease-of-use of computer systems employed in the ED; open-ended comments from respondents include “optimizing [and] simplifying EMR workflows to limit human cognitive demand”, “reduce [the] time used in an EMR, fewer clicks”, and “current test entering [is] awkward and time consuming”.

Fig. 1 Canadian Emergency Physicians’ rankings of AI-tool examples by work-type. Survey question: which of the following are the highest priority for developing a fully translated AI-tool for patient care in the next 10 years? (n = 186)

Figure 2 depicts the top priorities when respondents were asked to rank example AI-tools instead of general work-activities. ‘Automated charting or report generation’ was the first priority and was statistically significant. Other highly ranked items included ‘AI-powered clinical prediction rules’ (clinical decision rules, CDR), ‘monitoring of vitals with early warning detection’, ‘predicting department demand and workload’, ‘imaging interpretation’ and ‘predicting diagnoses’. The differences between these rankings were not statistically significant, and there were no changes in rank-order in the sub-group analysis. See Appendix D, Table 3 for details.

Fig. 2 Canadian Emergency Physicians’ rankings of existing published AI-tool examples. Survey question: which of the following are the highest priority for developing a fully translated AI-tool for patient care in the next 10 years? (n = 177)

Participants’ comfort with technology was low to moderate, with 33.0% identifying as technology-enthusiasts and 20.3% as technology-hobbyists. Only 7.5% and 5.7% of respondents had previously studied computer technology or information technology, respectively, while 4.4% had studied engineering and 2.2% computer science. See Appendix D, Table 4 for details.

Participants’ mean ‘technophilia’ rating based on the TechPH index was moderate at 3.30 (SD 0.65) on a scale from one (high technology-anxiety) to five (high technology-enthusiasm).

Examples of participants’ definitions of AI are summarized in Appendix C. In the authors’ opinion, 23.6% of respondents answered correctly and 45.8% had partially correct definitions.

Respondents’ experience with AI was low to moderate; the results in Appendix D, Table 5 indicate 38.2% have ‘read journal articles about AI in general’ and 45.2% have ‘read journal articles about AI in medicine’. Most respondents indicated “very little” to “some” experience with AI in their personal lives, clinical work in general, work in EM, and research.

Appendix D, Table 6 includes the same items from Figs. 1 and 2; however, this question asked respondents to report their past awareness and use of these items. The most common ED physician work-activities in which participants have used AI include ‘AI-tools for computer use’ (29.1%), ‘AI-tools for documentation’ (20.1%) and ‘AI-tools for administration/education/research’ (16.9%). The most common work-activities where EPs have heard of AI-tools, but not necessarily used them, include ‘AI-tools for making the diagnosis/selecting investigations/risk-stratification’ (42.9%), ‘triaging of patients’ (42.9%), and ‘AI-tools for computer use’ (40.2%).

Framing this question in another way, the most common examples of published AI-tools that EPs have used in practice include ‘AI-powered clinical prediction rules’ (51.8%), ‘AI-powered monitoring of patient vitals and early warning-systems’ (29.9%), ‘AI-powered PoCUS’ (25.4%), and ‘AI-powered recommendations of patient handouts/resources’ (20.8%). Interestingly, the most common examples of published AI-tools that EPs have heard of, but not necessarily used themselves, are ‘AI-powered X-ray’ (64.0%), ‘CT’ (60.9%), ‘MRI’ (55.8%) and ‘US interpretation’ (54.3%).

Regarding EPs’ opinions about AI’s impact on physicians’ jobs over the next 10 years, 60.6% of respondents believe that, because of AI, “jobs will change slightly”, while 35.9% believe “jobs will change substantially”.

Further, responding EPs indicated a high potential for AI in EM (32.8% ‘a great deal of potential’, 39.7% ‘quite a bit of potential’, 24.7% ‘some potential’) and indicated high personal interest in AI for ED patient care (48.5% ‘strongly agree’, 39.8% ‘somewhat agree’).

In terms of how the job may change, respondents felt AI is most likely to be able to complete the following tasks: ‘provide documentation’ (43.7% extremely likely, 43.1% likely) and ‘formulate personalized medication/therapy plans’ (13.8% extremely likely, 50.0% likely). Respondents were neutral regarding AI’s ability to ‘analyze patient information to reach a diagnosis’, ‘reach a prognosis’, ‘formulate personalized treatment plans’ or ‘evaluate when to refer to a specialist’. Physicians indicated it was ‘extremely unlikely’ (45.4%) or ‘unlikely’ (36.2%) for AI to be able to provide empathetic care. See Appendix D, Table 7 for details.

Discussion

This study outlines the EP work-activities that are the highest priority for new AI-tool development. Survey participants were asked to consider the development of a fully translated AI-tool for patient care that would be available at most EDs in Canada in the next 10 years. To triangulate responses, participants were asked to rank a list of common ED work-activities that may benefit from AI, and a list of existing AI-tool examples.

The survey sampled 5.65% of Canadian physicians practicing emergency medicine, not including residents, based on 2019 data from the RC [18]. This estimate does not account for physicians practicing EM with other licence types, for example, CCFP physicians without additional EM designations. Additionally, 6.07% of trainees were surveyed (33 of an estimated 543 active residents, based on 2019 residency quotas [18]). In general, the breakdown of survey respondents fits national trends. For example, responses by licence type are similar to the RC-reported proportions; however, this survey had slightly higher PEM representation. The age distribution of survey respondents is slightly older than the RC-reported proportions, with 57.4% of the general population less than 44 years old versus 40.9% of survey respondents less than 41 years old.

Considering geography, there was a disproportionately high response from the Maritimes (29.2%); the authors’ practice location is Halifax. However, the remaining geographic distribution is consistent with the RC database; the other highest response rates come from Ontario (29.2%), Alberta (11.4%) and Quebec (10.0%), which contain approximately 38%, 17.5% and 16.3% of the target population, respectively. The high response from these regions may also relate to each having a large AI institute (Vector, Amii and Mila, respectively) with provincial strategies for AI adoption. The results are also biased towards urban practitioners, with only 11.4% practicing in rural or regional centers; important input from rural physicians may have been missed.

Concerning familiarity with technology in general, respondents were neutral; approximately half neither “dislike” nor “like” technology, and 9.0% indicated “no interest in technology.” The average TechPH index agrees with the finding that most respondents were neutral regarding technology interest [19]. A baseline measure of Canadian EPs’ ‘technophilia’ for comparison is unknown. Overall, these outcomes are reassuring that the respondents include general EPs who are neither biased towards hyperspecialization in technology development nor actively opposed to the integration of new technology. Compared to the average Canadian, our study population may be more cautious regarding the use of AI for healthcare. A 2018 survey from the Canadian Medical Association (CMA) of 2000 adult Canadians found that 69% believe AI could be the solution to the challenges facing our healthcare system, 70% thought that using more technology for personal healthcare would prevent disease, and 50% indicated they would seek out doctors who use AI in their practice [27].

Respondents have low overall experience with AI in their personal lives, clinical roles or work as EPs. We speculate that the ‘low’ personal experience with AI may relate to misconceptions about the technology, as we assume that most Canadians are daily consumers of AI-enabled apps and productivity tools (weather, navigation, search-engines, voice-to-text). This response may also stem from the framing and interpretation of the question.

When asked about their understanding of AI-technology, 87.2% of respondents “agree” or “strongly-agree” they “understand what is meant by AI.” However, only 23.6% of respondents had a completely correct definition of AI (see Appendix C). These results suggest that more education around the concept and purpose of AI may be needed.

Few respondents have conducted any research in AI (4.5%). This result agrees with follow-up questions, where most respondents indicated “no experience at all” (71.4%) or “very little experience” (11.1%) with AI-research. Again, these findings corroborate the neutral TechPH index. However, almost all respondents either “somewhat agree” (39.8%) or “strongly agree” (48.5%) that they are interested in AI for EM.

Overall, EPs agree that AI has potential use for EM; however, most physicians feel their job will change only “slightly” (60.6%), with a minority expecting “substantial” change (35.9%). This result suggests physicians believe AI will enhance current roles but not disrupt the specialty over the next 10 years. This opinion is consistent with findings from surveys of psychiatrists and family-physicians [9].

Considering EPs’ impressions about AI’s capabilities, most thought AI was “likely” or “extremely likely” to be able to provide documentation. Interestingly, much of the current focus of EM AI development aims at tasks such as reaching a prognosis, formulating a treatment plan, formulating personalized medications and evaluating when to refer to a specialist, despite these being rated only “neutral” or “likely.” Providing empathetic care was rated “extremely unlikely” or “unlikely.” These findings also match the opinions of psychiatrists and family-physicians surveyed with the same instrument [9, 12].

Respondents also indicated examples where they “have used” or “heard of” AI being used in EM (Table 6). They were provided specific examples of AI-tools and, for triangulation, general ‘work-activities’ where AI is used. Of note, the “have used” ranks for ED work-activities do not map onto the AI-tool example ranks; the first-choice AI example, ‘clinical prediction rules’, corresponds to the fourth-choice work-type, ‘AI-tools for making the diagnosis, selecting investigations, etc.’. One explanation is that physicians may not agree on how to classify different types of AI-tools; alternatively, there may be other, more important AI-tools within the work-type categories that were not listed among the examples.

As well, interpreting Table 6 in the context of Table 7, many of the “have used” items fit into the “prognosis/diagnosis” and “formulating a treatment plan” categories, all areas about which physicians have guarded opinions. Interestingly, ED documentation-tools have been used by only 13.2% of respondents, and only 41.1% had heard of the AI-tool examples for documentation. Yet, this was the task perceived as most achievable by AI. Additionally, all categories of AI imaging-interpretation were ranked low in terms of past use by EPs, a surprising result given the large body of AI research for radiology. Perhaps the use of this technology by radiologists is not immediately obvious to the EP receiving the reports.

For the study’s primary outcome, there is clear consensus for translation of AI-tools to facilitate documentation, and as mentioned, respondents had the most confidence that AI could facilitate this task. Although many new ED information systems (EDIS) have some AI integration, few respondents “have used” or “heard of” ‘automated charting or report generation’, and only 37% have “heard of” and 20.1% “have used” AI-tools for documentation. Based on the responses, we suggest that tools for documentation be prioritized to meet both the expectations and needs of EM.

The emphasis on ‘AI-documentation’ is not unique to EM; recent surveys of primary-care providers, psychiatrists, and a heterogeneous population of US clinicians also strongly indicate that AI could aid clinical documentation [9, 12, 13]. There is clear evidence that clinical documentation is both time-consuming and a source of burnout for all physicians [28, 29]. However, current summaries of AI applications for EM do not clearly emphasize ED documentation as a large category of active AI development [1, 15,16,17]. Although clinical documentation may be considered general to all specialties, the ED environment will generate unique user-requirements, and therefore ED documentation should be included as an EM-specific application for AI development initiatives.

The ‘documentation’ category is broad, including electronic charting with voice-to-text, active listening with AI-powered scribes, and AI-powered summaries that consolidate patient records into succinct and accessible formats. Future work should clarify these needs in detail, perhaps using focused interviews.

Separate from these specific recommendations, this survey provides insights into a potential strategy for implementing AI-tools in an ED setting. As such, we recommend that (i) ED physicians be engaged in the specification, design, evaluation, and implementation of future AI-driven tools; (ii) priority be placed on developing proof-of-concept AI solutions for the high-yield problems identified by ED physicians; (iii) solutions embed AI-tools within the ED’s existing digital infrastructure and clinical workflow; and (iv) developers identify measurable and impactful outcomes for AI use, and use standardized metrics to assess these outcomes.

In conclusion, AI in an ED setting can be seen as an innovation agent: the analysis of ED data can generate new insights about the effectiveness of certain procedures and policies and lead to the optimization of ED resources. AI is not here to change ED practices; rather, it offers solutions to optimize a number of practice challenges. The survey responses clearly point to perceived value for AI in the ED; however, certain activities are more amenable to AI-driven support, for instance, automated charting (particularly using speech recognition and transcription), rapid interpretation of real-time ED data for clinical decision support, patient risk-stratification, and forecasting for staffing. The opportunity to benefit from AI-based applications relies heavily on their integration within the current clinical workflows and the data sources used by ED physicians. This would ensure that ED physicians do not need to change their practice to make use of AI-tools; rather, AI-driven support would be seamlessly available at the point of care. Overall, AI in medicine continues to grow, and it is fair to conclude that its use in the ED is near at hand.

Limitations

Study limitations are as follows. First, this survey reflects the Canadian perspective and may not be generalizable internationally. Further, there is a sampling bias towards physicians subscribed to CAEP and a selection bias towards physicians interested in AI and those practicing in the Maritimes. Further, the sample size is less than the a priori target of 365 respondents for representing the Canadian emergency physician population; the four-month deadline and the maximum of three allowable survey email blasts were reached before complete enrollment. The study is also limited by the confounding effects of variables not measured. Additionally, the survey’s questionnaire, although carefully designed, was not previously validated. There is no standardized classification system for AI-tools in emergency medicine; as such, some of the AI examples or physician work-types may be interpreted differently by respondents. For example, “clinical prediction rules” are synonymous with clinical decision rules (CDR), which are tools used to identify patients at higher risk of disease-specific clinical conditions or to prevent the overuse of specific diagnostic testing [30]. While this is commonly understood in the Canadian emergency context, the phrasing could be misinterpreted to mean prediction in general. As well, the study does not consider other health professionals working in EM.

Future directions

There are many limitations in applying survey research methodologies. In addition to the known limitations of electronic surveys, and specific to this study, there was confusion about the meaning of AI in general and no opportunity for participants to clarify certain applications and items in the questionnaire. In the future, alternate methodologies, including focused interviews and focus groups, should be employed to further explore the themes identified in this study.

Conclusions

User-centered design is essential to technology translation. A lack of physician input into AI development is a major translation barrier for these practice-changing AI-tools. A survey of Canadian EPs identified ‘automated charting or report generation’, ‘clinical prediction rules’ and ‘monitoring of vitals with early warning detection’ as high-priority areas for new development. This prioritization can aid policymakers in decision-making for AI data sharing, developing reporting guidelines, and facilitating external validation studies for high-demand AI-tools.

Availability of data and materials

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

AI:

Artificial Intelligence

CAEP:

Canadian Association of Emergency Physicians

CCFP-EM:

Certification in the College of Family Physicians, Emergency Medicine

EDIS:

Emergency Department Information System

EM:

Emergency Medicine

EP:

Emergency Physician / Emergency Physicians

FRCPC-EM:

Fellow of the Royal College of Physicians of Canada, Emergency Medicine

RC:

Royal College

SRPC:

Society of Rural Physicians of Canada

References

  1. Grant K, McParland A, Mehta S, Ackery AD. Artificial intelligence in emergency medicine: surmountable barriers with revolutionary potential. Ann Emerg Med. 2020;75(6):721–6. https://doi.org/10.1016/j.annemergmed.2019.12.024.

  2. Taylor RA, Haimovich AD. Machine Learning in Emergency Medicine: Keys to Future Success. Acad Emerg Med. 2020;28(2):263–7. https://doi.org/10.1111/acem.14189.

  3. Moulik SK, Kotter N, Fishman EK. Applications of artificial intelligence in the emergency department. Emerg Radiol. 2020;27(4):355–8.

  4. Rasheed J, Jamil A, Hameed AA, Aftab U, Aftab J, Shah SA, et al. A survey on artificial intelligence approaches in supporting frontline workers and decision makers for the COVID-19 pandemic. Chaos Solitons Fractals. 2020;141:110337. https://doi.org/10.1016/j.chaos.2020.110337.

  5. Tupe CL, Pham TV. Anorectal Complaints in the Emergency Department. In: Abdominal and Gastrointestinal Emergencies, An Issue of Emergency Medicine Clinics of North America. 2016. p. 251.

  6. Angehrn Z, Haldna L, Zandvliet AS, Gil Berglund E, Zeeuw J, Amzal B, et al. Artificial intelligence and machine learning applied at the point of care. Front Pharmacol. 2020;11(June):1–12.

  7. Drysdale E, Dolatabadi E, Chivers C, Liu V, Saria S, Sendak M, et al. Implementing AI in healthcare. Toronto: Vector-SickKids Health AI Deployment Symposium; 2019.

  8. Coppola F, Faggioni L, Regge D, Giovagnoni A, Golfieri R, Bibbolino C, et al. Artificial intelligence: radiologists’ expectations and opinions gleaned from a nationwide online survey. Radiologia Medica. 2020;0123456789:63–71.

  9. Doraiswamy PM, Blease C, Bodner K. Artificial intelligence and the future of psychiatry: insights from a global physician survey. Artif Intell Med. 2020;102(July 2019):101753. https://doi.org/10.1016/j.artmed.2019.101753.

  10. Blease C, Locher C, Leon-Carlyle M, Doraiswamy M. Artificial intelligence and the future of psychiatry: qualitative findings from a global physician survey. Digit Health. 2020;6:1–18.

  11. Layard Horsfall H, Palmisciano P, Khan DZ, Muirhead W, Koh CH, Stoyanov D, et al. Attitudes of the Surgical Team Toward Artificial Intelligence in Neurosurgery: International 2-Stage Cross-Sectional Survey. World Neurosurg. 2020; Available from: https://doi.org/10.1016/j.wneu.2020.10.171

  12. Blease C, Bernstein MH, Gaab J, Kaptchuk TJ, Kossowsky J, Mandl KD, et al. Computerization and the future of primary care: a survey of general practitioners in the UK. PLoS One. 2018;13(12):1–13. https://doi.org/10.1371/journal.pone.0207418.

  13. Choudhury A, Asan O. Impact of accountability, training, and human factors on the use of artificial intelligence in healthcare: Exploring the perceptions of healthcare practitioners in the US. Hum Fact Healthcare. 2022;2(January):100021.

  14. van der Meijden SL, de Hond AAH, Thoral PJ, Steyerberg EW, Kant IMJ, Cinà G, et al. Intensive care unit physicians’ perspectives on artificial intelligence-based clinical decision support tools: preimplementation survey study. JMIR Hum Factors. 2023;10(20):1–13.

  15. Kirubarajan A, Taher A, Khan S, Masood S. Artificial intelligence in emergency medicine: a scoping review. J Am Coll Emerg Physicians Open. 2020;1(6):1691–702.

  16. Kirubarajan A, Masood S. Artificial Intelligence in Emergency Medicine: Beyond the Hype. Canadiem. 2021.  https://canadiem.org/artificial-intelligence-inemergency-medicine-beyond-the-hype/.

  17. Liu G, Li N, Chen L, Yang Y, Zhang Y. Registered trials on artificial intelligence conducted in emergency department and intensive care unit: a cross-sectional study on ClinicalTrials.gov. Front Med (Lausanne). 2021;8(March):1–9.

  18. Royal College Medical Workforce Knowledgebase. 2019 [cited 2 Dec 2021]. Available from: https://www.royalcollege.ca/rcsite/health-policy/medical-workforce-knowledgebase-e

  19. Anderberg P, Eivazzadeh S, Berglund JS. A novel instrument for measuring older people’s attitudes toward technology (TechPH): Development and validation. J Med Internet Res. 2019;21(5):e13951.

  20. Abdulwahid MA, Booth A, Turner J, Mason SM. Understanding better how emergency doctors work. Analysis of distribution of time and activities of emergency doctors: a systematic review and critical appraisal of time and motion studies. Emerg Med J. 2018;35(11):692–700.

  21. Miles J, Turner J, Jacques R, Williams J, Mason S. Using machine-learning risk prediction models to triage the acuity of undifferentiated patients entering the emergency care system: a systematic review. Diagn Progn Res. 2020;4(1):1–12.

  22. Liu N, Zhang Z, Wah Ho AF, Ong MEH. Artificial intelligence in emergency medicine. J Emerg Crit Care Med. 2018;2(4):82–82.

  23. Varner C. Can artificial intelligence predict emergency department crowding? Healthy Debate. 2020. https://healthydebate.ca/2020/03/topic/predictingemergency-department-crowding-artificial-intelligence-mar2020/

  24. Berlyand Y, Raja AS, Dorner SC, Prabhakar AM, Sonis JD, Gottumukkala RV, et al. How artificial intelligence could transform emergency department operations. Am J Emerg Med. 2018;36(8):1515–7. https://doi.org/10.1016/j.ajem.2018.01.017.

  25. Zhu X, Zhang G, Sun B. A comprehensive literature review of the demand forecasting methods of emergency resources from the perspective of artificial intelligence. Nat Hazards. 2019;97(1):65–82.

  26. Lee PH, Yu PLH. An R package for analyzing and modeling ranking data. BMC Med Res Methodol. 2013;13(1):65.

  27. Canadian Medical Association. Shaping the future of health and medicine. 2018. Available from: https://www.cma.ca/sites/default/files/pdf/Activities/ShapingtheFutureofHealthandMedicine.pdf

  28. Leventhal EL, Schreyer KE. Information management in the emergency department. Emerg Med Clin North Am. 2020;38(3):681–91. https://doi.org/10.1016/j.emc.2020.03.004.

  29. Lino L, Martins H. Medical history taking using electronic medical records: a systematic review. Int J Digital Health. 2021;1(1):1–11.

  30. Stiell I, Wells G. Methodologic standards for the development of clinical decision rules in emergency medicine. Ann Emerg Med. 1999;33(4):437–47.

Acknowledgements

We would like to express our gratitude to all the physicians who participated in the survey and shared their insights. The authors would also like to thank our colleagues who contributed to the pilot study and iteration of the survey instrument. As well, we would like to acknowledge Dr. Charlotte Blease, who graciously shared components of her survey, which were incorporated into this study. In addition, we thank Dr. Peter Anderberg for sharing the TechPH scale for incorporation into this study. This work was funded by the Nova Scotia Health Research Fund.

Funding

Financial support was provided by the Nova Scotia Health Research Fund. The funders had no role in the design of the study, and the research was already underway when the funding was secured. The amount includes $4150 CAD to pay for independent statistician consultation fees, survey distribution service fees, fees associated with software access, and future costs for developing knowledge translation materials (i.e., infographics).

Author information

Authors and Affiliations

Authors

Contributions

Study concept and design: KWE, OL. Survey Development & Data acquisition: All Authors. Analysis and interpretation of data: KWE, RM, OL. Drafting of the manuscript: KWE, RM. Critical revision of the manuscript: All Authors. Statistical expertise: PA, KWE. Acquisition of funding: KWE, OL.

Corresponding author

Correspondence to Kyle W. Eastwood.

Ethics declarations

Ethics approval and consent to participate

Ethics Review Committee of Dalhousie University (Protocol: 1026940, approved on June 7th, 2021). Participants completed written consent to participate in the survey; see Appendix A, which includes the consent form and survey.

Consent for publication

Not applicable.

Competing interests

Authors KWE, RM, PA, SA, SSRA and OL report no conflicts.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Appendix A: Survey.

Additional file 2.

Appendix B: Calculations.

Additional file 3.

Appendix C: Open-Ended Responses.

Additional file 4.

Appendix D: Supplemental Results.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Eastwood, K.W., May, R., Andreou, P. et al. Needs and expectations for artificial intelligence in emergency medicine according to Canadian physicians. BMC Health Serv Res 23, 798 (2023). https://doi.org/10.1186/s12913-023-09740-w
