
Standing with Science: How the COMFORT Project Builds Public Trust in AI-Driven Cancer Research

Artificial intelligence is transforming cancer care, opening new possibilities for earlier diagnosis, more precise prognoses, and personalised treatment strategies. Alongside these advances comes a critical challenge: public trust must be earned, not assumed. This trust depends on whether patients and clinicians can truly understand how AI systems work and how their outputs should inform decisions.

The COMFORT project addresses this challenge directly. Focusing on prostate and kidney cancer, it develops multimodal AI models designed not only for high performance but also for trustworthiness. The project identifies five key attributes as core design principles that support trust: explainability, reliability, understandability, accuracy, and accessibility. Two central pillars complement these principles. The first is responsible risk communication, which aims to bridge the gap between complex algorithmic outputs and human interpretation. The second is meaningful participatory approaches, which ensure that patients actively contribute to the research process.

The importance of this approach becomes clear when considering how AI outputs are experienced in clinical settings. When a model assigns a probability, that number is not received as a neutral statistic. For patients, it becomes part of a narrative shaped by prior experiences, personal expectations, and emotions. Clinicians, too, rely on internal representations when interpreting probabilistic information. Mental model theory, rooted in cognitive psychology, helps explain this process: people make sense of unfamiliar information by fitting it into internal frameworks they use to understand the world. In AI-assisted oncology, the gap between how a model operates and how it is understood can be substantial. If these internal models are inaccurate, decisions may be compromised even when the underlying science is robust.

The COMFORT project responds by focusing not only on prediction but also on communication. It investigates how AI outputs can be presented so that they are genuinely understood by both patients and healthcare professionals. This work is carried out by researchers from UmeƄ University in collaboration with the patient organisations ELLOK and IKCC, reflecting the view that understanding is not simply the transfer of information but a dynamic process that requires careful guidance.
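To make this concrete, one well-studied technique from risk-communication research is to restate a probability as a natural frequency ("about 23 out of 100 patients") rather than a percentage, a format many people find easier to interpret. The minimal Python sketch below illustrates the idea only; it is not drawn from the COMFORT codebase, and the function name, example value, and wording are purely hypothetical.

# Illustrative sketch: restating a raw model probability as a natural-frequency
# sentence. Values and wording are hypothetical, not COMFORT outputs.

def as_natural_frequency(probability: float, reference_group: int = 100) -> str:
    """Express a probability as a count within a reference group of patients."""
    count = round(probability * reference_group)
    return (f"Out of {reference_group} patients with similar results, "
            f"about {count} would be expected to have this outcome.")

print(as_natural_frequency(0.23))
# Out of 100 patients with similar results, about 23 would be expected to have this outcome.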

To support this effort, participatory research is crucial. Accurate understanding of, and trust in, AI cannot be built through technical excellence alone; it also requires the perspectives, experiences, and expectations of those who will ultimately be affected by these systems. COMFORT puts this approach into practice by involving patients and patient representatives throughout the research process. Their contributions help identify what makes AI outputs meaningful, what creates confidence, and how communication can be tailored.

The COMFORT project provides a clear example of how trust can be embedded within scientific innovation. By integrating trust into technical design, involving patients as co-designers, and grounding communication in how people interpret information, it reframes trust as a challenge that can be addressed through thoughtful design.