Making AI in Healthcare Transparent: COMFORT Video Series
To mark World Cancer Research Day on 24 September, COMFORT is releasing a series of short video interviews with members of the consortium. The day highlights the importance of advancing cancer research worldwide, and COMFORT contributes by making the development of AI in oncology transparent and understandable. Through these videos, researchers and clinicians explain in accessible terms how the project’s AI models are built, tested, and applied, ensuring that innovation in cancer research goes hand in hand with openness and trust.
One of COMFORT’s objectives is to increase trust in AI systems deployed in healthcare settings. It aims to do so by creating AI support systems for clinicians that are transparent and fair. This is especially important because many AI systems already developed for clinical settings lack interpretability and transparency: without a clear explanation of how an algorithm reaches its conclusions, both doctors and patients rightly hesitate to trust its recommendations. COMFORT addresses this challenge by developing AI tools that are not only powerful but also explainable, reliable, accurate, and accessible. A central aim of the project is to build trustworthy AI by being open about how the system works, and the new video series supports that aim: in each episode, a member of the consortium explains a key aspect of AI development in accessible terms.
The Episodes
How does the COMFORT AI model work?
Alessa Hering, Assistant Professor, Radboud University Medical Center
Alessa describes how the model is trained with manually annotated data. It first learns to recognise organs such as the kidney, and by analysing thousands of healthy and cancerous images, it learns to outline tumours and classify them reliably.
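The episode keeps this at a high level; for readers curious about the mechanics, the following is a minimal sketch of supervised training on expert-annotated images. The tiny network, the three-class labelling (background, organ, tumour), and all tensor shapes are assumptions made for illustration, not details of the actual COMFORT model.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the COMFORT model: any network that maps an
# image to per-pixel class scores fits this training pattern.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),  # 3 assumed classes: background, organ, tumour
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on a dummy batch: images plus manually annotated masks.
images = torch.randn(4, 1, 128, 128)        # 4 greyscale slices (shapes assumed)
masks = torch.randint(0, 3, (4, 128, 128))  # an expert label for every pixel

optimizer.zero_grad()
logits = model(images)         # (4, 3, 128, 128) per-pixel class scores
loss = loss_fn(logits, masks)  # penalises pixels labelled differently from the experts
loss.backward()
optimizer.step()
```

Repeated over thousands of healthy and cancerous images, steps like this are what gradually teach a model to outline organs and tumours.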
What type of data do you use to train the COMFORT AI models?
Keno Bressem, Attending Radiologist, Technical University of Munich & Project Coordinator
Keno explains how COMFORT uses multimodal data — combining medical imaging with clinical information — to build more comprehensive models. He also highlights the importance of protecting patient privacy while ensuring data quality and representativeness.
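A common way to combine imaging with clinical information is late fusion, where each modality gets its own branch and the resulting features are joined before the final prediction. The sketch below illustrates that general pattern only; the branch sizes, the choice of clinical features, and the fusion design are assumptions, not confirmed details of the COMFORT architecture.

```python
import torch
import torch.nn as nn

class MultimodalModel(nn.Module):
    """Illustrative late fusion of an imaging branch and a clinical-data branch."""

    def __init__(self, num_clinical_features: int = 8):
        super().__init__()
        # Imaging branch: collapse an image into a small feature vector.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 8)
        )
        # Clinical branch: e.g. age, lab values, prior findings (assumed inputs).
        self.clinical_branch = nn.Sequential(
            nn.Linear(num_clinical_features, 8), nn.ReLU(),
        )
        # Joint head combines both views into one prediction.
        self.head = nn.Linear(16, 2)

    def forward(self, image, clinical):
        fused = torch.cat(
            [self.image_branch(image), self.clinical_branch(clinical)], dim=1
        )
        return self.head(fused)

model = MultimodalModel()
scores = model(torch.randn(4, 1, 64, 64), torch.randn(4, 8))
print(scores.shape)  # torch.Size([4, 2])
```

The appeal of this design is that each data type is processed on its own terms before the model reasons about them jointly.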
How will the AI model be tested for reliability?
Antonis Billis, Postdoctoral Research Fellow, Aristotle University of Thessaloniki
Antonis outlines a planned one-year prospective study. As new data, such as medical images and radiology reports, arrive, they will be processed by the trained AI models. Clinicians will then assess the outputs to determine whether they are meaningful in practice, ensuring the models are robust and reliable.
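One simple way such a study can quantify reliability is to track how often clinicians agree with the model as new cases come in. The sketch below shows that idea with invented data; the real study design, metrics, and success criteria are not described in this episode summary.

```python
# Illustrative reliability check: compare model outputs against clinician
# judgements on incoming cases. All case data here is invented.
cases = [
    {"model_flagged_tumour": True,  "clinician_confirmed": True},
    {"model_flagged_tumour": True,  "clinician_confirmed": False},
    {"model_flagged_tumour": False, "clinician_confirmed": False},
    {"model_flagged_tumour": True,  "clinician_confirmed": True},
]

agreements = sum(
    c["model_flagged_tumour"] == c["clinician_confirmed"] for c in cases
)
print(f"Model-clinician agreement: {agreements / len(cases):.0%}")  # 75%

# A real prospective study would run this over a full year of cases,
# with breakdowns by case type and proper statistical analysis.
```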
How will patients benefit from COMFORT in the future?
Iris Verhoeff, R&D Engineer, DeepHealth (formerly Quantib)
Iris explains how patients stand to benefit directly, as the COMFORT AI model will be made accessible to hospitals worldwide, for example via an AI marketplace. Integration into clinical guidelines could further ensure that patients everywhere gain from earlier diagnosis, more accurate results, and more personalised treatment options.
What does Responsible AI mean in the context of healthcare?
Lili Jiang, Associate Professor, Umeå University
Lili reflects on the importance of Responsible AI. She explains how biases can arise when training data does not fully represent all patient groups, for example across age, gender, or ethnicity. COMFORT tackles this by training on diverse datasets and testing rigorously to ensure fairness and reliability.
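A standard first step in checking for such biases is to evaluate the model separately on each patient subgroup rather than relying on a single overall score. The sketch below illustrates this with invented data; the actual subgroups and fairness metrics used by COMFORT may differ.

```python
from collections import defaultdict

# Illustrative fairness audit: per-subgroup accuracy can reveal biases
# that one aggregate number would hide. All results here are invented.
results = [
    {"group": "age<60",  "correct": True},
    {"group": "age<60",  "correct": True},
    {"group": "age>=60", "correct": True},
    {"group": "age>=60", "correct": False},
]

per_group = defaultdict(lambda: [0, 0])  # group -> [num correct, num total]
for r in results:
    per_group[r["group"]][0] += r["correct"]
    per_group[r["group"]][1] += 1

for group, (correct, total) in per_group.items():
    print(f"{group}: {correct / total:.0%} accuracy on {total} cases")

# A large gap between groups signals that the training data or the model
# needs rebalancing before clinical use.
```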
Building Trust Through Transparency
By presenting these topics in clear and accessible language, the series aims to make the development of the COMFORT AI system transparent to a wide public. This openness is central to the project’s mission: only when patients and healthcare professionals understand and trust the technology can its full potential be realised in improving cancer care.
🔗 Watch the full playlist here.