
Legally Blind

As artificial intelligence technologies expand at an unprecedented rate, charting the unexplored frontiers of health-care AI has never been more urgent. In this three-part series, we explore the nascent legal landscape of health-care AI, appraise the value of patient data and question the appropriate use of AI.

Rosemarie Lall was on the verge of quitting her practice. For more than 20 years, the Ontario family physician had been faithfully battling mountains of paperwork – countless unpaid overtime hours of scribing, charting and billing euphemized as “pyjama time” – on top of her active roster of 1,000+ patients.

“I was exhausted,” she says. “The burnout was so bad. The alternative was quitting, and I can’t do that to my patients.”

That’s when Lall discovered Ambient Scribe, WellHealth’s automatic medical scribing software powered by artificial intelligence (AI). The software, which transcribes and charts clinical notes simply by listening in on a patient encounter, proved life-altering for Lall.

“It’s returned to me the joy in practicing family medicine,” says Lall, who now has higher-quality face-to-face interactions with her patients, sees more patients each day and finally has time to spend with her family, greatly improving her quality of life.

Lall is not alone. She is one of a small but burgeoning group of “scribe explorers,” pioneering physicians who are early adopters of the recent burst of AI scribe technology. Since discovering Ambient Scribe, she has tried Scriberry, Heidi and AutoScribe. Many of her peers have also started experimenting, relying primarily on word-of-mouth to navigate the Wild West gold rush of health-tech AI.

And gold rush it is. Worldwide, the AI health-care market is in feverish demand. Microsoft is aiming to dominate health-care administration with Nuance, while Amazon takes aim at diagnosis with Anthropic’s Claude 3. Google’s Med-PaLM 2 large language model (LLM) hopes to court patients with easy-to-understand health-care explanations.

As of April 2024, there were 469 AI health-care startups registered in Canada, and the Canadian government has announced a $2.4 billion investment to build AI capacity. Given Canada’s estimated shortage of more than 30,000 family physicians by 2028, the potential of health-care AI tools to bolster efficiency and ease administrative burnout should not be underestimated – a McKinsey analysis of Canadian health-care costs estimates net savings of a whopping $14-$26 billion per year.

But while the myriad benefits are dizzying, there are frighteningly few infrastructural supports at present to guide physicians through what is uncharted territory. Lall recalls wading through pages of extensive legal and technical jargon. “[The contract] was 12-20 pages … I tried reading it several times.” In the absence of official services, grassroots word-of-mouth forms the bulk of AI scribe reviews – the onus is on individual physicians to vet the privacy and security credentials of each vendor, raising questions as to where liability falls in the case of an adverse event.

For now, it seems that physicians remain the most – if not the only – liable party. Existing legal principles and precedents focus on suing human actors and have not yet accounted for increasing algorithmic independence. Autonomous cars provide a useful parallel. Despite at least eight serious accidents involving Teslas operating in self-driving mode, the courts have consistently ruled that the human drivers are culpable. The rationale provided is that human oversight is still necessary, even with self-driving cars.

Just as Tesla’s Terms of Use require drivers to hold the wheel at all times even while in self-driving mode (despite what is depicted in its many marketing campaigns), physicians using AI scribes and other medical devices may find that they are required to review and take responsibility for any chart entry. Given that AI devices have been observed “hallucinating” information, misinterpreting and incorrectly recording data and falling prey to biases, physician oversight remains crucial.

One time-tested approach still rings true. Treat your AI like a medical student. “AI is an aide, meant to supplement professional work but not replace it,” the Canadian Medical Protective Association (CMPA) advised at the Navigating AI in Healthcare physician webinar. “Patient care should still reflect your clinical judgment.”


When asked at the webinar about the unfortunate few who may not measure up, the CMPA indicated that eligibility for AI-related legal assistance is discretionary and decided on a case-by-case basis.

Strides are being made to make this environment easier for physicians to navigate. The CMPA recommends vetting AI devices with the same approach as choosing Electronic Medical Records (EMR) software. “The protection of patient data is [the physician’s] due diligence,” a representative clarified at the CMPA-run webinar. “Ask vendors and read the terms and conditions. Look at where the data goes, if it is compliant, if [the vendor] has privacy and security certifications.” The CMPA also describes due-diligence actions such as obtaining patient consent, informing patients of the role their de-identified data may play in improving AI algorithms, and verifying that an AI device meets the applicable privacy requirements in the physician’s jurisdiction.

Meanwhile, in Ontario, 150 physicians have signed up for an AI scribe pilot supported by Ontario Health and the Ontario Medical Association (OMA), in which AI models from multiple vendors are being evaluated for efficiency and accuracy of documentation.

“We are collectively looking at what process to create, to make sure vendors meet certain criteria in privacy, security and usability,” states Mohamed Alarakhia, family physician and Chief Executive Officer of the eHealth Centre of Excellence, a nonprofit assisting clinicians with AI adoption. “For example, we still need to make sure data is housed on Canadian servers.”

For many physicians who lack a background in law or computer science, bearing the sole onus of vetting a buffet of heterogeneous AI vendors may be a tall order. Alarakhia confirms there is much to do: “This is an area where we need to catch up on, in terms of providing this guidance for clinicians.”

Legislation is also playing catch-up. AI devices are regulated by Health Canada as Software as a Medical Device (SaMD) under the Food and Drugs Act, 1985 (FDA) and the Medical Devices Regulations. However, these regulatory frameworks have not been adapted to address the unique aspects of health-care AI models – namely, their dependence on big data and their ever-changing, self-learning algorithms.

Action is being taken to fill the legislative gap; Bill C-27, which contains the Artificial Intelligence and Data Act (AIDA), passed second reading in the federal House of Commons last April. Surprisingly, medical-sector devices have largely been excluded from AIDA’s specific regulations due to “the robust regulatory requirements they are already subject to under the Food and Drugs Act (FDA).”

A closer look at the FDA reveals no current mention of AI-relevant considerations such as data governance, physician liability or measures accommodating the ongoing algorithmic changes that distinguish AI software from fixed-code software. However, a preliminary amendment in the 2024-2026 FDA Forward Regulatory Plan would allow Health Canada to monitor the safety and efficacy of AI medical devices even after commercial release, and would streamline product recalls. For AI devices that may evolve beyond their initial launch model as they continue to learn from their datasets, such post-market surveillance will be crucial.

Physicians advocating for future AI device regulations would do well to draw upon a series of guidelines jointly developed by Canada, the U.S. and the United Kingdom that are informing the development of Good Machine Learning Practice. The aim is to tailor good AI practices from other sectors to suit the health-care sector. The medical field may also play an active role in shaping legislation at this critical juncture, and in ensuring AI developers align with evidence-based medicine, patient safety and health-care delivery.

Given all the risks and responsibilities, many physicians may understandably throw up their hands and eschew AI altogether. As tempting as that is, it may no longer be an option as technology advances – AI may raise the standard of care to the point where assistive technology becomes mandatory. Luddite holdouts may eventually even be held liable for providing an insufficient quality of care without AI supplementation. This pattern has recurred throughout the history of medical progress, from the debut of the X-ray – failing to order one eventually became medically irresponsible – to the ubiquitous adoption of EMR systems.

And as with any progress, there are growing pains. However, when asked about the balance of risks and benefits, Lall remains optimistic.

“All the physicians will want to adopt AI. If we’re happier, we’re going to be more helpful for our patients and colleagues,” she shrugs. “Medicine will be changed in five years.”


Authors

Angela Dong

Contributor

Angela (Hong Tian) Dong is an Internal Medicine resident at the University of Toronto. She sits on the CMA Ethics Committee and the PARO Leadership Program, and has completed a diploma through the Global Health Education Initiative (GHEI) at the University of Toronto. Angela has a passion for bridging medicine with policy and innovation. She has led multiple health advocacy Days of Action with the CFMS, founded the MP-MD Apprenticeship to teach medical students hands-on health policy, and is an active member of the health-care AI and synthetic biology communities.

X: @AngelaHDong and Medium: @angela.h.dong
