AI outperforms sonographers in initial assessment of cardiac function by echocardiography

08 September 2022



Ouyang D, et al. Hot Line 3. Presented at: European Society of Cardiology Congress; Aug. 26-29, 2022; Barcelona, Spain (hybrid meeting).

Disclosure: Ouyang reports receiving royalties or consulting fees related to EchoIQ, InVision and Ultromics.


The researchers reported that in adults whose cardiac function was evaluated by echocardiography, initial assessment of left ventricular ejection fraction by AI was better than initial assessment by a sonographer.

In addition, after blinded review of the LVEF assessment, cardiologists were less likely to substantially change their final report from the initial assessment if the initial assessment had been made by AI rather than by a sonographer, David Ouyang, MD, a cardiologist in the department of cardiology at the Smidt Heart Institute at Cedars-Sinai, said during a press conference at the European Society of Cardiology Congress.


The researchers conducted the investigator-initiated EchoNet-RCT trial, which Ouyang said was the first randomized, blinded trial of an artificial intelligence system in cardiology, to determine whether an AI system could interpret echocardiographic results better than sonographers.

“Echocardiography is the most common form of cardiac imaging,” Ouyang said during the press conference. “It dictates a lot of cardiovascular care, whether it is heart failure treatment, treatment for valve disease or many other indications. The challenge is that for echocardiography, there is variance in interpretation.” He said that in previous studies, the variance was in the range of 7% to 10%, which is enough to change an assessment from normal to abnormal or vice versa.

The trial included 3,495 echocardiographic scans of adults undergoing LVEF evaluation (median age, 66 years; 57% men).

After a run-in period during which the sonographers demonstrated their ability to annotate the scans, the scans were randomly assigned to initial evaluation by the AI or by a sonographer. Cardiologists, who were blinded to the initial evaluation method, then performed their own evaluation of LVEF.

The primary outcome was the frequency and degree of change from the initial evaluation to the cardiologist’s assessment of LVEF. Substantial change was defined as a change of more than 5%.

Secondary outcomes included sonographer time, cardiologist time and change from the cardiologist’s historical assessment.

There was a substantial change between the initial evaluation and the cardiologist’s evaluation 16.8% of the time for AI-evaluated scans compared with 27.2% of the time for sonographer-evaluated scans (difference, −10.5 percentage points; 95% CI, −13.2 to −7.7; P for noninferiority and superiority < .001; mean absolute difference, −0.97; 95% CI, −1.31 to −0.61; P < .001), Ouyang said during the press conference.

“Cardiologists were much less likely to change the initial assessment” when the initial assessment was made by AI, he said.

The change from the historical assessment, which compared a cardiologist’s assessment from the current study with a cardiologist’s assessment of the same examination in clinical practice, was 50.1% for examinations assessed by AI and 54.5% for examinations assessed by a sonographer (difference, −4.5 percentage points; 95% CI, −7.8 to −1.2; P = .008; mean absolute difference, −0.94; 95% CI, −1.34 to −0.54; P < .001), according to the researchers.

This finding also shows “that there is more consistency with cardiologists when they use AI assistance,” Ouyang said.

Sonographer time and cardiologist time were both significantly lower for AI-evaluated scans than for sonographer-evaluated scans (P < .001 for both), Ouyang said during the press conference.
