Improving Urologic Cancer Care with Artificial Intelligence Solutions

Ethically Governing Artificial Intelligence: An Interview With Virginia Dignum

Recent advancements in artificial intelligence have made it clear that AI can be incredibly beneficial for society, but can also pose threats if unregulated. In 2023, the UN convened a High-Level Advisory Body on AI to support the international community's efforts to govern artificial intelligence. The final report of the Advisory Body was published in September 2024. We spoke to our very own Virginia Dignum, one of the 39 preeminent AI leaders from 33 countries who formed this High-Level Advisory Body.

Artificial intelligence has long fascinated humans and has been the topic of many dystopian films and books. Why do you think that is?

AI fascinates us because it gives the illusion that we can create a better version of ourselves: 'gods', entities that are more powerful than we are. AI reflects both our aspirations and our anxieties; it is a mirror of human ingenuity, and we are fascinated by its ability to mimic our intelligence. This makes for good films and sci-fi, but it is not correct: what we realise again and again is the irreplaceable essence of human capabilities. AI challenges us to confront our unique traits, such as creativity, empathy, moral reasoning and the capacity to feel.

What has changed in recent years? What has caused the recent quantum leap in AI?

On the one hand, breakthroughs in machine learning, the availability of large-scale datasets, increased computational power, and the integration of AI across industries. On the other hand, societal awareness of the impact of AI, and of the harmful consequences of applying it without consideration for that impact, is also growing.

What are the risks related to the recent (2023-2024) developments in artificial intelligence?

The risks include ethical concerns such as bias, erosion of privacy, and misuse of AI for harmful purposes like disinformation or surveillance. But the real risks are larger than this: they include the risk of not benefitting from AI's capabilities out of fear of its consequences, and the risk that, if we continue the 'larger and larger' approach to data, computing and infrastructure, AI will be competing with us for scarce environmental resources that we need to nurture. Moving forward, we need to address AI from the perspective of its consequences, both positive and negative. We need to be able to measure its value as a trade-off between many values, not only in terms of increased efficiency.

What are the potential benefits?

The direct benefits usually listed are accelerating scientific discovery, enhancing healthcare, tackling climate change, and improving decision-making processes. But ultimately AI should enable us to work better together towards a society and an environment that benefit all of us, not only a few. This means we need to address AI's benefits and its development in a multidisciplinary, multistakeholder and multipurpose way. It is not AI that is going to 'solve' it for us; it is the combination of human and artificial expertise and intelligence. One does not replace the other, but extends our capabilities.

What is the difference between governing AI and controlling it?

Governing AI is about setting ethical frameworks, guidelines, and principles to steer development in socially beneficial directions. Controlling AI, done well, means restricting certain capabilities to mitigate risks and avoid harm. The wrong way of controlling is to allow a very limited group of actors to dominate the field and its direction (which is what we are seeing at the moment). Governance is proactive and collaborative, whereas control is often reactive.

When and how should AI be controlled?

When AI poses imminent threats, such as in autonomous weapons or invasive surveillance technologies.

How can AI be regulated and governed?

Firstly, by understanding that there is no 'one size fits all' approach. What is needed foremost is a recognition that governance is necessary and that a global dialogue must be maintained. This can be done, for example, via a multi-stakeholder approach involving governments, industry, academia, and civil society. Clear accountability structures, transparency, and ethical impact assessments are key, and they enable comparing and connecting different governance solutions in different contexts. Moreover, regulation, like AI itself, is not guaranteed to be correct and is not set in stone. We need to accept that it has to be tested, fixed and updated, and we need mechanisms for this, such as regulatory sandboxes for testing policies.

What is a multi-layered governance model?

This model integrates ethical, technical, and legal considerations at local, national, and international levels. It ensures flexibility to address diverse contexts while fostering global coherence in dealing with cross-border AI implications.

Why was the UN High-Level Advisory Body on AI formed? Why is such a body necessary?

The body was formed to address the global challenges posed by AI, such as the lack of universal standards and coordination. AI's impact transcends borders, which means its governance requires international collaboration to ensure fairness, safety, and inclusivity.

What was the task of the Advisory Body?

Its main task was to develop recommendations for the ethical governance of AI, focussing on global cooperation, protecting human rights, and promoting sustainable development.

What are the key findings of the report?

The urgency of creating an international framework for AI governance, and ensuring equitable access to AI technologies, as well as the importance of mitigating risks such as bias and inequality.

What are the key recommendations of the report?
  • Create an International Scientific Panel on AI
  • Develop shared policy understanding on AI Governance
  • Create an AI Standards Exchange
  • Establish a Capacity Development Network
  • Create a Global Fund for AI
  • Develop a Global AI Data Framework

What is the situation like in medicine? What are the opportunities and risks of using AI in medicine?

The opportunities come from improved diagnostic accuracy, optimised treatment, and reduced costs. The risks are usually described as coming from data privacy breaches, biases in algorithms, and over-reliance on AI systems without proper validation. But another important risk is creating an even larger divide in access to medicine between those who have and those who have not.

How present is AI already in medicine?

AI applications such as radiology image analysis and personalised medicine are becoming commonplace, but the main gains should come from assisting in diagnostics, treatment planning, and administrative tasks.

What are the current challenges for the use of AI in medicine?

Besides ensuring data quality, regulatory compliance, and addressing biases in medical datasets, the main challenge is to avoid turning patients into data points. Patients are more than data, and their well-being starts with treating them from a perspective of human dignity.

How do patients feel about AI in medicine?

Like anyone else, patients often have misunderstandings both about what AI can do and about what it cannot do. We also need to address what AI should not do: what the limits of its use are from an ethical perspective.

How can we promote the safe use of AI in medicine?

Through rigorous testing, ethical standards, and regulatory oversight. Open communication with patients about AI's role and limitations is also crucial.

What can we do to improve trust of patients in AI?

By involving patients in decision-making processes. Again, there is a need for robust frameworks that prioritise patient rights and ethical considerations.

Will AI be able to reliably diagnose cancer in the future?

Yes. But the best diagnosis comes from a combination of human and machine capabilities, and from a combination of disciplines and approaches, including the patients, their families and other experts, such as nurses and social workers. Also, diagnosis is important, but treatment is much more than diagnosis.