There’s a lot of talk about artificial intelligence (AI) for mammography, and it certainly cannot be ignored. Radiologists ask us many questions about it. We developed “The AI Elephant in the Reading Room” as a learning lab for the SBI/ACR Breast Imaging Symposium, and now we want to share that same information with all of you.

Learn about the burning topics in this field; this is the kind of information you won’t necessarily find in scientific presentations.

Below are the topics we highlighted and the questions we answered during the webinar. We invite you to read on for more information and to view this free learning lab in full online.

The difference between AI and CAD

AI should not be confused with CAD. CAD, an acronym for computer-aided detection, was once a standard of care. It does detect malignant lesions, but it also flags a large number of false positives. Companies are trying to develop better CAD with lower false-positive rates, but is that good enough? What does AI offer that a pure detection tool lacks?

Radiologists’ expectations about AI

We asked radiologists with different experience levels (from 2 to 37 years in breast imaging), from both private and academic practices, what they thought the benefits of AI were. Their responses included:

“In some cases, it (AI) makes it more efficient to read screening mammograms because radiologists can pass or … dismiss negative mammograms … more easily.”

“For private practices with high volume, it could be helpful to detect things missed on an initial view.”

“(AI) helps find that little set of calcifications that you blew past just because the last patient worried you so much.”

“When I started to get fatigued reading, it (AI) made it a bit quicker to come to a decision when my brain was a little bit muddled.”

How do you evaluate AI for breast cancer screening?

Evaluating AI is still a new endeavor, and while there isn’t much widespread information on the topic yet, it’s important to learn how to assess AI for breast cancer screening.

During our learning lab, we discussed which metrics best measure AI’s performance. View our discussion of the following:

  • Is accuracy a valid metric? We say forget about accuracy: at screening prevalence, it is too easy to produce a good-looking number (the sketch after this list illustrates why)
  • What is the main condition for ROC AUC to be the right indicator for comparing AI performance?
  • Why shouldn’t you trust AUC as a single, simple figure of merit?
  • We reviewed the reader studies of three FDA-cleared AI breast cancer screening products to illustrate these points.
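
To make the accuracy and AUC points concrete, here is a minimal sketch in Python using synthetic data and scikit-learn. The prevalence figure, the toy “AI” suspicion scores, and the chosen operating point are illustrative assumptions, not MammoScreen’s evaluation code or data:

```python
# Sketch: why accuracy is misleading at screening prevalence, and why a
# single AUC number hides the operating point that matters clinically.
# All numbers below are synthetic assumptions for illustration.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 10_000
y_true = np.zeros(n, dtype=int)
y_true[:50] = 1  # ~0.5% prevalence: 50 cancers in 10,000 screening exams

# Degenerate "model" that calls every exam negative.
always_negative = np.zeros(n)
print(accuracy_score(y_true, always_negative))  # 0.995 -- looks great
print(roc_auc_score(y_true, always_negative))   # 0.5   -- pure chance

# Hypothetical AI: suspicion scores that tend to be higher for cancers.
scores = rng.normal(loc=1.5 * y_true, scale=1.0)
print(roc_auc_score(y_true, scores))            # ~0.86 -- real signal

# AUC summarizes the whole ROC curve, but radiologists work at a single
# operating point; sensitivity there can look very different.
fpr, tpr, _ = roc_curve(y_true, scores)
i = np.argmin(np.abs(fpr - 0.10))  # point closest to 90% specificity
print(f"Sensitivity at ~90% specificity: {tpr[i]:.2f}")
```

The headline lesson: a model that finds nothing still posts 99.5% accuracy at screening prevalence, and two products with similar AUCs can behave quite differently at the specificity a practice actually operates at, which is why the reader studies deserve a close look.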

In this portion of the learning lab, we also discussed AI’s strengths with respect to cancer detection rates. At the end, we offered practical tips for testing AI.

Case studies

Case studies are an important part of any evaluation, and during this learning lab we covered a few that demonstrate MammoScreen®’s performance.

AI + radiologist = better performance

“The media loves to present stories about AI being better than a radiologist. But when you … look at the bottom line of the studies … they almost universally show that … the performance and accuracy of mammographic interpretation are better when you put them (AI and a radiologist) together,” said Dr. J. Lopez of Hoag Hospital.

AI in mammography is like an airplane’s autopilot: you don’t leave it on for the entire flight; it’s simply a tool the pilot engages when needed. The same holds true for AI: it is there to augment radiologists, not to replace them.

Watch the replay of The AI Elephant in the Reading Room Learning Lab by following this link: https://www.mammoscreen.com/product/resources#webinar