Maya Khalifé – Precision Oncology: Empowering Radiologists and Oncologists With AI-driven Workflow for Better Clinical Decisions in Cancer Patients (AI PHI Affinity Group)
January 13, 2023 @ 9:00 am - 10:00 am · Free
Thank you for coming!
The recording for this meeting is available below:
Precision Oncology: Empowering Radiologists and Oncologists With AI-driven Workflow for Better Clinical Decisions in Cancer Patients
Precision Oncology: Empowering Radiologists With AI-driven Workflow for Better Clinical Decisions in Cancer Patients will uncover the capabilities of Arterys’ cloud-based intelligent diagnostic platform, which combines clinical AI and workflow optimization to empower physicians to make more accurate, data-driven clinical decisions.
Our presentation will showcase Arterys’ AI-based oncology platform and demonstrate how it provides comprehensive cancer patient reporting and follow-up through an innovative and efficient workflow. Arterys enables physicians to make more informed treatment decisions and helps patients live longer and healthier lives.
Presented by Maya Khalifé
Maya Khalifé, PhD, Director of Product Management, is a medical imaging enthusiast with 13+ years of experience combining medical imaging and machine learning to create healthcare solutions that impact patients’ lives. Her research work focused on image processing and machine learning applied to radiology. Her mission at Arterys is conceiving and building the radiology platform of the future to deliver tangible clinical value.
Artificial Intelligence in Cancer Research – AI PHI Affinity Group
(First Friday of each month)
This group was formed to discuss the current trends and applications of artificial intelligence in cancer research and clinical practice. The group brings together AI researchers from a variety of fields (computer science, engineering, nutrition, epidemiology, radiology, etc.) with clinicians and advocates. Students, trainees, and faculty with any or no background in AI are encouraged to attend. The goal is to foster collaborative interactions to solve problems in cancer that were thought to be unsolvable a decade ago, before the broad use of deep learning and AI in medicine.
Let’s go ahead and get started. We have a really exciting speaker today from Arterys. Maya Khalife is the Director of Product Management at Arterys; her mission at Arterys is conceiving and building the radiology platform of the future to deliver tangible clinical value. Her presentation today will showcase Arterys’ AI-based oncology platform and demonstrate how the platform can bring a holistic view of patient data to reduce the data silos between radiologists and oncologists and allow oncologists to have a more informed, data-driven diagnostic process.
The AI PHI Affinity Group meets on the first Friday of every month. Except for this month, where we were on break on the first Friday, so we’re meeting today. What we do here is hear from leaders in the field. We discuss trends in AI for health and health care, both in research and in industry, and how the two intersect. Our overall goal is to foster collaborative interactions that solve problems in cancer that were thought to be unsolvable a decade ago but now, with the advent of deep learning, are very solvable. The format of these talks tends to be a roughly 30- to 40-minute presentation, followed by an interactive Q&A.
We have many exciting upcoming speakers, so be sure to check the link here and sign up to get notifications about future speakers.
The University of Hawaii is an excellent place for cancer research. We have the UH Cancer Center in the Kakaako campus, which is a beautiful campus and hosts many excellent cancer researchers.
This talk is hosted by the AI Precision Health Institute within the Shepherd Research Lab, which has the goal of advancing the assessment of human health through technology.
And finally, University of Hawaii has a Hawaii Data Science Institute, which is a system wide effort to support data science education, collaborative research, and partnerships with industry.
We often have a paper of the month, but when I was scrolling through the Arterys website, it looks like they have several papers that showcase the value of Arterys. So I recommend that you look at the website and check out these cool research papers.
So without further ado, Maya, take it away.
Thank you. Thank you for this intro. So I’ll be sharing my screen… All right. Can you see my screen?
Okay. So. Hi, everyone. I’m Maya Khalife. I’m director of product management at Arterys. The topic of my presentation today is about precision oncology and how we can empower today’s physicians, including radiologists and oncologists with an AI driven workflow for better clinical decisions in cancer patients.
I’ll start by sharing a bit more about our vision. So Arterys is part of Tempus today. It started as a company focused on AI for radiology 12 years ago, and now we’re part of Tempus, which shares the same vision as us. We are both focused on providing a world where clinical decisions are driven by data: empowering physicians with AI-driven, intelligent tools and improving patient outcomes.
And today this vision is more important than ever, specifically with all the infrastructure, the data, and the access to technology that we have. So we feel that we have to take this data, make good use of it, and improve patient outcomes.
A few numbers: today we have eight products that are FDA cleared and CE marked, available for clinical use, and being used clinically on a daily basis to help physicians do their work better and improve patient care. We’re present across the U.S. and Europe, including in some of the big research centers and research hospitals, as you see here.
Our main focus has really been to deliver clinical value, efficiency, and performance through the power of deep learning and the cloud. MICA, our Medical Imaging Cloud AI platform, spans all modalities and pathologies. It’s a cloud-based platform, and we leverage more than 20 AI applications today in different parts of the radiology workflow. That’s just to introduce that…
But now I’ll move on to talk about a less happy topic, which is cancer. So every year, more than 18 million patients are diagnosed worldwide with cancer and almost 9.5 million of them don’t survive.
In the United States specifically, lung cancer is the second most diagnosed cancer in both men and women, and it has the highest death rate in both populations, accounting for about 21% of cancer deaths. That is well ahead of the next most common causes of cancer death, prostate and breast cancer.
However, when we look at the data over the past century, we actually observe a decline in lung cancer mortality rates starting in the 1990s. That is largely due to reductions in smoking, but also to the establishment of screening programs and guidelines, with the pace accelerating in recent years due to major advances in the treatment of non-small cell lung cancer.
So the takeaway from this is that early detection of lung cancer and better, more personalized treatments have been shown to reduce lung cancer mortality.
Nonetheless, despite all these advances in survival rates, we have failed to solve the complexity and fragmentation of the current oncology patient journey. As you see, most of the steps that patients have to go through are performed in silos today, and we have limited access to relevant data.
So throughout the patient journey, you have multiple physicians and health care providers, including specialists across multiple disciplines, dealing with separate systems and separate tools that don’t communicate with each other. The data that should be combined for a better and more comprehensive understanding of the disease ends up sitting in different systems, never realizing its potential to generate a personalized treatment path.
So if we take a step back and summarize the cancer patient journey today, we can name the following unsolved challenges.
First, there is no one-size-fits-all way to treat cancer patients. We need access to all the different data in the patient’s medical record: family history, environmental, genetic, and metabolic data, comorbidities, and all the radiological findings and prior scans. All of these impact the way a patient will respond to a certain treatment.
Second, there is an absence of a holistic view of the patient profile. As I mentioned earlier, the health care system is very fragmented, which makes it difficult to share data and recommendations across physicians.
And the third challenge is the limited accessibility of standardized reporting and data-driven tools. A lot of how patients are treated, and how treatments are recommended, is subjective and not reproducible across different patients, and that leads to unequal access to accurate diagnosis and treatment decisions for all patients.
With that, I’ll focus a bit more on the shortcomings of the radiology workflow itself. So if we take just one part of that patient journey.
So when it comes to reporting lung nodules, for instance, radiologists face the need to use multiple scales and scoring systems depending on the use case. A radiologist would use the ACR’s (American College of Radiology’s) Lung-RADS when reading screening images, the Fleischner Society guidelines for incidental findings, and the RECIST criteria for cancer follow-up and management. They also need to associate and combine prior images when reading current images, so that they can track lesions over time across multiple scans — so they need tools that enable them to link studies across time.
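To make “scoring systems” concrete, here is a minimal sketch of what one such lookup encodes: the 2017 Fleischner Society follow-up recommendation for a single solid nodule in a low-risk patient. The function name is illustrative, and this is a simplified teaching sketch of the published thresholds, not a clinical tool and not the Arterys implementation.

```python
# Simplified 2017 Fleischner Society recommendation lookup for a single
# solid pulmonary nodule in a LOW-RISK patient (illustrative sketch only;
# the real guideline also covers high-risk patients, multiple nodules,
# and subsolid nodules).

def fleischner_followup_solid_single_low_risk(diameter_mm: float) -> str:
    if diameter_mm < 6:
        return "No routine follow-up"
    if diameter_mm <= 8:
        return "CT at 6-12 months; consider CT at 18-24 months"
    return "Consider CT at 3 months, PET/CT, or tissue sampling"
```

Automating this kind of lookup per nodule is what removes the click-and-choose burden the talk describes.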
They also need structured reporting tools that enable them to share data with other physicians in a consistent way. And to top it all off, with current radiology workloads we have a shortage of radiologists and increased volumes of images, especially if we count all the screening and follow-up images that need to be read. All of this leads to a lot of radiologist frustration and burnout.
So this is the problem that we’re trying to solve at Arterys: address all these challenges that radiologists face and come up with solutions based on AI-driven workflows.
So we’re looking here at understanding the complete workflow of a radiologist when they have the task of reading lung cancer images.
So first, we want to reduce the number of clicks they have to make. Because when you’re sitting with tens of CT scans to read on a daily basis and you’re doing all these repetitive tasks… the first thing we want to do is automate those tasks with AI algorithms and make the workflow as efficient as possible for them.
So we focus on adding AI algorithms to detect and segment lung nodules, to allow radiologists to track them over time when reading a CT scan, and also to provide all the automated measurements that need to be reported for each lung nodule.
Specifically, we can automate how each nodule is scored against the different scoring systems I mentioned earlier. So we can have automated Lung-RADS scoring, so radiologists don’t have to click and choose all the different criteria for each nodule — automated detection, but also automated scoring.
What we can do next — and this is very important — is pull any prior studies available for the same patient. Instead of a radiologist or the care team manually looking for whether the patient had previous scans, we can automatically pull those scans, make them available, and co-register them within the Arterys viewer so that the radiologist can review them side by side. Nodules detected on the current scan can be linked to the ones detected on prior scans, and all these progress measurements are automatically added to the report. Because at the end of the day, when we’re reviewing a lung CT for a cancer patient, what we’re interested in reporting is change: whether there are new nodules, whether old nodules have grown in size, or whether they have shrunk, because shrinkage is a good sign that the patient is responding to treatment.
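Since the workflow ultimately reports on change, a small worked example of the relevant standard may help: RECIST 1.1 classifies treatment response from sums of target-lesion diameters across time points. The sketch below is simplified (it ignores non-target lesions and new-lesion detection) and the function name is illustrative, not an Arterys API.

```python
# Simplified RECIST 1.1 target-lesion response classification.
# Inputs are sums of longest diameters (mm) at baseline, at the nadir
# (smallest sum so far), and at the current time point.

def recist_response(baseline_sum: float, nadir_sum: float, current_sum: float) -> str:
    if current_sum == 0:
        return "CR"  # complete response: all target lesions disappeared
    # Progressive disease: >=20% increase over nadir AND >=5 mm absolute growth
    if nadir_sum > 0 and (current_sum - nadir_sum) / nadir_sum >= 0.20 \
            and (current_sum - nadir_sum) >= 5:
        return "PD"
    # Partial response: >=30% decrease from baseline
    if (baseline_sum - current_sum) / baseline_sum >= 0.30:
        return "PR"
    return "SD"  # stable disease

print(recist_response(100, 80, 50))  # 50% shrinkage from baseline -> PR
print(recist_response(100, 40, 60))  # +50% over nadir, +20 mm -> PD
```

This is exactly why linking lesions across co-registered prior scans matters: without longitudinal measurements of the same lesions, none of these categories can be computed.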
So the comprehensive workflow is thought through and all the different tasks are enhanced with AI algorithms or intelligent workflows.
And last but not least is the reporting section. After reading a lung CT scan, the radiologist’s final result needs to be reported. And as I mentioned, it needs to be reported in a consistent and reproducible way so that the referring physicians — the oncologists and pulmonologists reading the report — can draw a consistent conclusion or summary from it.
So we automate the report creation as well. All the findings that were reported or validated by the radiologist are automatically added to the report section, with the scores assigned for each nodule and the progress of the nodule over time — the longitudinal tracking of the nodule. That report is automatically created and can be downloaded into the radiology system so that everything is located in the same patient record — again removing data silos and allowing everything to live in a single record.
And on top of that, we also add more automation and standardization for reporting cancer lesions beyond the lungs. When we’re looking at a RECIST workflow, we need to be able to detect and segment lesions outside of one specific body part — in all body parts. That is why we’re adding more automated tools to segment lesions in every body part, along with an automated and efficient RECIST workflow to report all these lesions in the different body parts. And as you see, it’s the same format — always a consistent and objective way of reporting lesions, which makes it even easier to share data across all the different physicians during the patient journey.
So to summarize, our smart oncology workflow enables the longitudinal analysis required for reporting on cancer patients. We provide automated segmentation of lesions, automated lesion scoring against all the different scoring systems, and automated report creation for consistent reporting — all through a unified platform that enables sharing data and connecting physicians.
Once we’ve solved the radiology workflow, we still haven’t solved the initial problem of data silos across the care continuum. With Tempus, we can imagine a future where, for every patient, we have a comprehensive patient profile created by combining all the different data needed for a customized diagnosis, and we can imagine building models that support making more informed decisions around diagnosis and treatment.
So combining all these data sources to create multimodal patient profiles with intelligent models that can help us recommend better patient treatment and more personalized patient treatment.
We can also imagine gathering enough data that we can build better insights and help with the drug development or drug discovery process. With more data, we can create more targeted drugs and help patients find clinical trials that better fit their cancer profile. That’s what we’re trying to achieve by gathering all this data together — all to reach better patient outcomes.
So with that, I’ll actually end my talk here. I wanted to leave more of an opportunity for questions and discussion. So, yeah, I don’t know if you have any questions — I’m here to answer them.
Great. Well, thank you so much, Maya. That was an excellent presentation. I want to give an opportunity for students to ask questions but while you’re formulating your questions, I can start with one.
So many of the attendees here are students; they’re learning about AI and will eventually move on to industry, where AI models will be used and integrated into pipelines and workflows in an industrial setting. So could you talk a little bit, especially in the field of medical AI, about the FDA approval process? And on a related note, how does government regulation affect the product management process?
Yeah, that’s a great question. So, the FDA approval process — I don’t know if I mentioned that we were the first company to receive FDA clearance on a deep learning algorithm for radiology. We’ve come a long way since, and we learned a lot through that. In the process of building a product, we should always focus first on what the actual question or problem is that we’re trying to solve, and how we’re trying to solve it with AI. It’s very important to have a clear value statement for the product, because that informs the whole clearance process.
Once we’ve identified the indication for use — what the AI algorithm is used for, and the scope of where the product can be used — we can define a specific clinical validation study. That clinical evidence study needs to account for diverse data, and the whole process is designed to show that the algorithm you’re trying to commercialize actually works in the specific use case you’re commercializing it for. So you need to design studies that create the evidence that what you’re trying to sell actually solves that problem.
Then you create a product development process that includes all these regulatory timelines. Sometimes you go through what we call a pre-submission — kind of like a preprint in research — where you submit your study plan ahead of time and get feedback from the FDA early on: whether it’s ready to go, or what you can improve and change. Then you run those studies and submit them to the FDA for final clearance.
Great. That was very enlightening. Does the FDA regulate the AI only or do they also regulate the workflow?
Yes, they regulate the product as a whole. Usually a product contains multiple AI algorithms, and you can bundle them all and submit them together. But if you add an AI algorithm as you go, you need to resubmit or revalidate that specific AI algorithm within the product. So an FDA clearance for a whole product doesn’t cover new additions to the product.
Basically, if you take the lung CT case: you have detection and segmentation, and you need to provide evidence for both separately.
Great. Thanks. Any questions from the audience?
I have a question. If I am recalling correctly, the compute in the workflow is done in the cloud — are there any special considerations for patient privacy?
Yeah, that’s a very good question. Yes — usually when you have a cloud product, you need to ensure that you’re HIPAA compliant, or GDPR compliant when you work in Europe. So we de-identify patient data before it reaches the cloud; the data is de-identified on premises. Patient information stays within the hospital network, while the pixel data goes to the cloud, gets processed, and is then returned to the radiologist’s side, where the reconciliation with patient information is done. So when radiologists are viewing the images, they have the patient data available to enable them to do their work correctly.
But yes, there is a de-identification step, and the patient information never leaves the hospital premises.
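The de-identify-then-reconcile flow described here can be sketched in a few lines. A plain dict stands in for a DICOM header, and all names are illustrative; the point is that the PHI-to-pseudonym mapping never leaves the premises, so results returned from the cloud can be re-linked to the patient locally.

```python
# Sketch of on-premises de-identification before cloud upload and local
# re-identification afterwards. A dict stands in for a DICOM header; the
# tag list is illustrative, not a full confidentiality profile.
import uuid

PHI_TAGS = ("PatientName", "PatientID", "PatientBirthDate", "ReferringPhysicianName")

def deidentify(header: dict, local_key_store: dict) -> dict:
    """Return a copy of `header` with PHI replaced by a pseudonymous ID."""
    pseudo_id = str(uuid.uuid4())
    # The PHI -> pseudonym mapping stays inside the hospital network only.
    local_key_store[pseudo_id] = {t: header.get(t) for t in PHI_TAGS}
    clean = {k: v for k, v in header.items() if k not in PHI_TAGS}
    clean["PatientID"] = pseudo_id  # only the pseudonym travels to the cloud
    return clean

def reidentify(header: dict, local_key_store: dict) -> dict:
    """Re-link cloud results to the patient, on premises only."""
    restored = dict(header)
    restored.update(local_key_store[header["PatientID"]])
    return restored
```

A real deployment would operate on actual DICOM metadata and follow a formal de-identification profile, but the round trip — strip, process remotely, reconcile locally — is the same.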
Any other questions? Just unmute and ask.
Hi, it’s Esther here. Thanks, Maya, for this great presentation. I had a question with regard to the platform, actually. What is not fully clear to me is this: you mentioned that the FDA clears AI algorithms, but can the entire platform also get cleared? Because I understood that the platform actually combines all the data and then provides you with the next step in your workflow.
So is that then something that has been approved? And if yes or if no, can everybody in each hospital apply it in their own workflow or do they need to do some extras?
Mm hmm. Yeah. So definitely the platform as a whole. What you are actually clearing is the medical device — whatever is considered a medical device. A platform to review radiology images is considered a medical device, so you need to get a 510(k) clearance for it. And again, it all boils down to what you put in your indication for use. It’s kind of like when you buy a medicine: what is it intended for? If you’re buying a medicine for a cough, you cannot use it for, say, a fever. So if you indicate in your indication for use that this software or platform is used to review radiology images and provide insights to the radiologist, then you get the clearance on that use case.
So yes, the platform — the viewer — should definitely be cleared for clinical use. And every time you add a new component that doesn’t fit in the initial indication for use, you need to clear it or update the indication for use to include that new component.
And once you get that clearance — once the platform is cleared for clinical use — then anybody can start using it. There’s no additional requirement for any user; because it’s cleared, it can be deployed directly into the clinical workflow.
Any other questions?
Hi. Thanks for the talk. I was just wondering if you could speak more on Arterys’ role in, or partnerships with, academia. For instance, we’re developing AI models — how would that play into getting them deployed on your private platform, or doing validations with data that you may or may not have? Can you speak more on that?
Sure. Yeah, I can definitely speak to that. So we are a platform that is used by radiologists for clinical use, but we do also have a department focused on research. So there are multiple use cases where we work with researchers.
So one use case is validating our own products. In order to validate the products and get them through FDA clearance, we need to work with researchers on creating those validations: external validation, independent research studies that provide evidence about our products. So that’s one use case.
The other use case is that we have opened our platform to other providers and developers. We also provide the ability for any researcher to deploy their algorithms on the platform, using it as a delivery mechanism. You can use Arterys to run your own research, because it makes deployment much easier — if you have a multi-centric study, it’s much easier to deploy through a platform that is easily deployable than to send your code to each site and run it locally. So we do support those initiatives, working with researchers to provide the delivery mechanism — the deployment mechanism — for their algorithms to run their research studies.
And the third area where we work with researchers is when we’ve identified a use case and are looking for AI algorithms that solve it. We will look to researchers to find those algorithms if they’re already available. So let’s say I’m looking for an algorithm that segments a specific lesion in the liver, and your team has built that AI algorithm — then we can partner to bring your algorithm to either a clinical or a life-science use case.
I was wondering if I could ask another question. Has Arterys done any studies about like acceptability and attitudes towards the software by radiologists?
Yeah, that’s a tricky question, because all the different studies that we run directly measure the performance, but also the efficiency, of the workflow with radiologists. So if you will, that’s an endpoint implicitly present in all the different studies: when you’re measuring the efficiency of a specific workflow, you’re actually measuring acceptability by the end user.
And we’re also always monitoring what we call “post-market surveillance.” We’re continuously in contact with all of the Arterys users — first to gather their feedback on how they like the product, but also to gather feedback on how to improve it. There’s a very tight loop where we take the feedback, improve the product continuously, and release new improvements all the time to make sure the users are satisfied.
But no, I don’t think we’ve ever designed a study just around acceptability. That would be an interesting one to do.
Okay well if we don’t have any further questions, let’s give another round of applause to Maya. It was an excellent talk. I’m sure we’ll be in touch in the future to discuss potential collaborations and how we could work together. But thanks so much for sharing your work.
Yeah, thank you for having all the great questions and feel free to reach out if you have further questions offline. Thank you.
Thanks so much. Have a nice weekend.
End of transcript.
- Event Category:
- Artificial Intelligence in Cancer Research – AI PHI Affinity Group
- Virtual (Zoom)
- Artificial Intelligence and Precision Health Institute (AI PHI)