Future-Proofing Healthcare

Integrating AI, Digital Inclusion, and Climate Resilience

Dr. Dimitrios Kalogeropoulos

Chief Executive Officer, Global Health & Digital Innovation Foundation

Dr. Dimitrios Kalogeropoulos is a digital health pioneer and a committed advocate for the ethical and responsible use of AI in healthcare. He serves as CEO of the Global Health & Digital Innovation Foundation and as Health Executive in Residence at UCL’s Global Business School for Health. Dr. Kalogeropoulos has advised leading organisations, including the WHO, and has played a key role in shaping global policy initiatives to improve healthcare accessibility and drive sustainable, innovative solutions worldwide.

Dr. Dimitrios Kalogeropoulos explores the strategic role of AI in advancing health equity and climate resilience. He emphasises inclusive data governance, participatory innovation, and the integration of public health with personalised care, highlighting how AI can strengthen system adaptability, address social and environmental determinants, and drive proactive, sustainable healthcare transformation.

Q1: How can AI-driven personalised medicine redefine healthcare resilience, and what specific innovations do you see shaping the next decade?

Healthcare systems globally are under increasing strain, particularly in areas such as mental health, where demand is outpacing capacity. Many of the underlying health issues are preventable. AI-driven personalised medicine offers a compelling path to greater resilience by enabling smarter, more efficient use of limited resources. A particularly promising frontier is the convergence of precision medicine and integrated public health, where AI analyses large-scale, complex datasets to deliver more tailored, timely, and preventative care. The goal is to stay one step ahead of disease.

Looking ahead, the vision is to create learning health systems—where population-level outcomes feed into real-time clinical decision-making. This enables deeper insights into disease trajectories and supports adaptive, personalised interventions for individuals and communities. Such systems evolve continuously, becoming more participatory and responsive to changing needs and contexts.

Q2: What are the key systemic barriers preventing AI from achieving widespread adoption in digital health, particularly in preventive care, and how can we overcome them?

Despite the promise of AI, adaptive applications in personalised care remain nascent compared to more deterministic tools. This is due to several systemic challenges, including infrastructure gaps, integration issues, and service models that have yet to adapt to a digital-first paradigm.

Innovation is needed on two fronts: enabling digital-first care (such as proactive screening via chatbots) and integrating digital capabilities into traditional models (like hospital-at-home programmes). With these service models, we are shifting from passive, retrospective consent for secondary data use to active participation in precision-learning systems embedded in care.

To make this transition sustainable, we need to:

• Invest in digital infrastructure and equitable access,
• Design AI tools for real clinical workflows, and
• Establish robust methods for evaluating cost-effectiveness.

Most critically, public trust must be earned through education, engagement, and co-production. Many of these innovations are already underway, accelerated by generative AI and rapid, adaptive prototyping.

Q3: How can AI improve mental health interventions, particularly for burnout prevention in healthcare professionals, while ensuring ethical and unbiased implementation?

Burnout is a complex, multifactorial condition, and it is becoming alarmingly prevalent among healthcare professionals. AI can play a pivotal role in prevention by analysing systemic data, such as workload patterns, over time. But workload alone is not a strong predictor. Additional longitudinal, context-aware data is essential to detect weak signals and temporal patterns that enable timely, personalised intervention.

Self-reported burnout scores collected through digital surveys can serve as phenotyping tools or proxy endpoints, allowing the capture of behavioural, environmental, and emotional markers. AI can then identify endotypes of burnout, much like in physical health conditions, and link these to tailored interventions.
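
To make this concrete, here is a minimal sketch of how candidate burnout endotypes might be derived from longitudinal survey and behavioural features. It clusters standardised features with k-means and picks the number of clusters by silhouette score; the features, scales, and synthetic data are illustrative assumptions, not a validated phenotyping pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Illustrative per-clinician features aggregated over a survey period:
# mean burnout score, score trend, sleep variability, workload index.
n = 300
X = np.column_stack([
    rng.normal(3.0, 1.0, n),   # mean self-reported burnout (e.g. 1-7 scale)
    rng.normal(0.0, 0.2, n),   # trend (slope) across repeated surveys
    rng.normal(1.0, 0.4, n),   # night-to-night sleep variability (hours)
    rng.normal(0.6, 0.2, n),   # normalised workload index
])
X_std = StandardScaler().fit_transform(X)

# Pick the number of clusters by silhouette score; the resulting clusters
# are only *candidate* endotypes until validated against outcomes.
best_k, best_score = 2, -1.0
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_std)
    score = silhouette_score(X_std, labels)
    if score > best_score:
        best_k, best_score = k, score

model = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit(X_std)
print(f"candidate endotypes: {best_k} (silhouette={best_score:.2f})")
for c, centre in enumerate(model.cluster_centers_):
    print(f"cluster {c}: standardised feature profile {np.round(centre, 2)}")
```

In practice, clusters like these would only become endotypes after expert validation and linkage to outcomes, which is where tailored interventions come in.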

Adaptive AI becomes especially valuable here, modelling how burnout develops across different roles (e.g. junior doctors vs. senior nurses) and fusing behavioural data (e.g. sleep, resilience scores) with organisational insights. Importantly, these tools should be framed not merely as research instruments but as workflow supports that help prevent burnout at the system level.

Q4: What role does digital phenotyping play in early detection and intervention for mental health conditions, and how can AI enhance its predictive accuracy?

Digital phenotyping captures behavioural, cognitive, and environmental data through everyday technologies like smartphones. It allows us to detect subtle, early signs of mental health decline—such as disrupted sleep, reduced social interaction, or changes in speech—that often precede clinical diagnosis.

AI dramatically enhances the predictive power of this approach. Through deep learning, it can analyse vast, real-time datasets to uncover patterns invisible to human clinicians. Just as we now understand that diseases like diabetes can have distinct endotypes despite similar phenotypes, the same principle applies to mental health: one diagnostic label can represent multiple underlying pathways, each calling for a different route to effective treatment.

The future lies in combining AI’s pattern-recognition capacity with ground-truth anchoring and expert validation. This will enable more dynamic, accurate assessments and context-specific interventions—moving us from detection to truly preventative, personalised mental healthcare.
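
As an illustration of the underlying mechanics, the sketch below derives simple daily features from passive sensing and flags deviations from a person's own rolling baseline. The features, thresholds, and simulated decline are assumptions for demonstration; real digital phenotyping pipelines are far richer and require clinical validation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

# Illustrative daily passive-sensing features for one person over 120 days:
# sleep duration, outgoing messages, minutes of detected speech.
days = pd.date_range("2024-01-01", periods=120, freq="D")
df = pd.DataFrame({
    "sleep_h":  rng.normal(7.2, 0.6, 120),
    "messages": rng.poisson(25, 120).astype(float),
    "speech_m": rng.normal(40.0, 8.0, 120),
}, index=days)
# Simulate a gradual behavioural decline over the final month.
df.iloc[-30:] += np.outer(np.linspace(0, 1, 30), [-1.5, -12.0, -15.0])

# Personal baseline: 28-day rolling mean/std, shifted so each day is
# scored only against *past* behaviour.
base_mean = df.rolling(28).mean().shift(1)
base_std = df.rolling(28).std().shift(1)
z = (df - base_mean) / base_std

# Flag days on which two or more features fall well below baseline.
flags = (z < -2).sum(axis=1) >= 2
print("days flagged for review:", df.index[flags].strftime("%Y-%m-%d").tolist())
```

Scoring each day against past behaviour only (the shift by one day) keeps the baseline from being contaminated by the very change the system is trying to detect.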

Q5: With growing concerns over digital autonomy, how should AI strategies evolve beyond consent-based models toward a truly participatory approach?

Digital autonomy is central to person-centred care. Yet while we need better, more representative data, traditional models of static, opt-in consent are no longer fit for purpose—especially as AI systems evolve in real time.

Agentic AI—capable of autonomous decision-making—has been around for a while and is already enhancing workflows in other sectors. In healthcare, it could coordinate care plans, manage workloads, and drive personalised prevention. But its growing independence raises critical ethical questions: about agency, trust, and the balance between human oversight and machine autonomy.

To evolve responsibly, AI systems in healthcare must:

• Prioritise transparency and explainability,
• Embed ethics and compliance into design,
• Use inclusive, rights-based approaches to governance, and
• Be co-produced with users, not imposed upon them.

We must also rethink participation itself. Moving beyond static consent means creating dynamic, evolving relationships with users where data usage, system behaviour, and decision-making are jointly shaped. Like learning health systems, participatory AI must evolve with people—grounded in transparency, shared agency, and civic dialogue.

Q6: How can AI-driven insights be integrated into existing healthcare infrastructure without exacerbating disparities in access?

This question cannot be separated from the social determinants of health or from climate impact, which increasingly compounds existing inequalities. Marginalised populations often face barriers in access, digital literacy, and care quality, leading to poorer outcomes.

AI systems risk deepening these disparities if they are trained on incomplete or biased datasets. For example, the datasets we design for studies often systematically ignore comorbidities. Without inclusive governance and participatory evidence generation, insights may not reflect the realities of vulnerable communities—and may reinforce rather than reduce inequality.

To counter this, we must:

• Develop inclusive data strategies that prioritise underrepresented groups,
• Ensure AI tools are accessible, usable, and embedded in the local context,
• Foster open collaboration infrastructures to bridge care and public health.

Environmental exposure is one powerful starting point. It shows how integrating climate and health data can drive innovation and equity. It also exemplifies how digital systems can be made context-sensitive and community-informed.

Q8: How do you envision AI facilitating a shift from reactive to proactive healthcare delivery, especially in addressing chronic diseases and mental health conditions?

AI can play a central role in transforming care from reactive to proactive, especially for chronic and mental health conditions. But the foundation must be trust, usability, and alignment with the workforce.

Today, clinicians are overwhelmed by growing volumes of underutilised data. This contributes to burnout and slows adoption. AI must help close this gap by making data more actionable, reducing unnecessary burden, and proactively identifying risks before they escalate.

Technology-enabled care, such as remote monitoring plans, is critical. Through regular health assessments and by integrating multimodal data—from wearables, environmental sensors, EHRs, and digital phenotyping—AI can (see the sketch after this list):

• Flag early signals of chronic disease progression.
• Recommend timely interventions.
• Support shared decision-making with patients.
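
Here is a minimal sketch of that kind of multimodal flagging. It uses a transparent rule-based placeholder where a trained and validated model would sit in production; all field names and thresholds are hypothetical, not clinical guidance.

```python
from dataclasses import dataclass

@dataclass
class PatientSnapshot:
    # Hypothetical multimodal inputs; field names are illustrative.
    resting_hr_delta: float   # wearable: bpm above 90-day personal baseline
    hba1c: float              # EHR: latest HbA1c (%)
    missed_checkins: int      # remote monitoring: missed check-ins this month
    air_quality_index: float  # environmental sensor: local AQI

def progression_flags(p: PatientSnapshot) -> list[str]:
    """Return plain-language flags for clinician and patient review.

    Thresholds are placeholders for illustration, not clinical guidance;
    in production a validated, learned model would sit behind this step.
    """
    flags = []
    if p.resting_hr_delta > 8:
        flags.append("Resting heart rate well above personal baseline.")
    if p.hba1c >= 7.0:
        flags.append("HbA1c above target; consider reviewing management plan.")
    if p.missed_checkins >= 3:
        flags.append("Repeated missed check-ins; consider proactive outreach.")
    if p.air_quality_index > 150 and p.hba1c >= 7.0:
        flags.append("Poor local air quality on top of elevated risk.")
    return flags

snapshot = PatientSnapshot(resting_hr_delta=10.5, hba1c=7.4,
                           missed_checkins=3, air_quality_index=160.0)
for flag in progression_flags(snapshot):
    print("FLAG:", flag)
```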

But alignment is key: unless clinicians and patients see these tools as helpful, not burdensome, the shift will not happen at scale.

Q9: What governance frameworks and policy changes are needed to ensure AI-powered preventive care reaches vulnerable populations effectively?

Europe’s regulatory infrastructure—particularly the European Health Data Space (EHDS) and the AI Act—offers a unique opportunity, with valuable lessons for global adoption. But technical frameworks alone are not enough; they must be matched by social and institutional readiness.

To realise this promise, we must:

• Establish strong, interoperable standards for data quality and exchange,
• Improve collective consent mechanisms and civic engagement,
• Promote adaptive governance that evolves with evidence and practice.

One of the greatest challenges lies in connecting fragmented datasets to enable anticipatory care. This requires not only technical interoperability but also legal and ethical alignment across institutions.

Ultimately, inclusive and trusted data ecosystems—grounded in shared value and co-created governance—are essential if AI is to serve all populations fairly.

Q10: Given the evolving regulatory landscape, how should AI developers and healthcare organisations balance innovation with patient privacy and data security?

We’ve built many privacy regulations—but we’re at risk of creating a compliance maze that hinders innovation and opens unintended loopholes. More fundamentally, we have yet to confront the question of ownership.

Without addressing who owns the data, or who benefits from its use, we may undermine public trust and reinforce inequities. We need a new approach—where innovation projects are also policy experiments, and where compliance is embedded from the outset.

This includes:

• Designing systems for transparency, safety, and auditability,
• Testing collective consent mechanisms beyond rigid opt-in models,
• Rethinking intellectual property and societal value in digital health.

AI is not just a technical transformation—it’s a governance challenge. Sandboxes, living labs, and participatory frameworks can help us prototype new value systems in real-world settings, ensuring that innovation is both ethical and sustainable.

Q11. How can AI empower patients to take a more active role in managing their own health while maintaining clinician oversight and trust in digital tools?

Patients and clinicians work together to navigate health concerns—this relationship is central to care and must remain at the heart of digital transformation. AI should support, not disrupt, this dynamic.

To empower patients while preserving clinical oversight, models of interaction must be developed at two interdependent levels:

• Knowledge Layer: AI can translate complex health data into actionable, person-centred insights—underpinned by improved data quality, explainability, and contextual awareness. From agentic AI applications to digital proxies, this layer enables shared decision-making and personalised care (a minimal sketch follows this list).
• Integration Layer: AI tools must embed seamlessly into clinical workflows and relational dynamics—not add friction or bypass clinical judgement. This demands transparent governance structures, secure and interoperable data-sharing mechanisms, and adherence to technical standards that foster co-designed, scalable innovation ecosystems.
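
To ground the knowledge layer, here is a minimal sketch of how a model's raw output might be rendered as a plain-language, person-centred insight. The score, contribution values, and wording are hypothetical; a real system would draw explanations from validated models and co-designed language.

```python
def explain_risk(score: float, contributions: dict[str, float]) -> str:
    """Render a model's risk score and its top drivers in plain language,
    so patient and clinician can discuss the result together."""
    level = "high" if score >= 0.7 else "moderate" if score >= 0.4 else "low"
    drivers = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))[:2]
    driver_text = " and ".join(name.replace("_", " ") for name, _ in drivers)
    return (f"Your estimated risk is {level} ({score:.0%}). "
            f"The main contributing factors are {driver_text}. "
            "This is a decision aid; please review it with your clinician.")

# Hypothetical output from an upstream model with per-feature contributions.
print(explain_risk(0.62, {
    "blood_pressure_trend": 0.31,
    "sleep_disruption": 0.22,
    "medication_adherence": -0.08,
}))
```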

Together, these layers enable a shift from passive to active participation, where patients and clinicians are supported, not overwhelmed, by digital tools. The goal is to enhance trust, improve health outcomes, and ensure equity across all populations.

Q14. What key metrics should be used to assess the success of AI-driven healthcare strategies, and how can organisations ensure continuous improvement in AI models over time?

The lack of clearly defined, universally adopted metrics in regulatory frameworks continues to create uncertainty—particularly for generative AI and large language models (LLMs). The EU AI Act, in force since 1 August 2024, begins to address these challenges by establishing a legal foundation for AI systems. This is complemented by the General-Purpose AI Code of Practice (GPAI CoP), which outlines key principles for providers of GPAI models. As a participant in the GPAI CoP plenary, I have proposed the following metric-driven priorities:

1. Community Participation: Building AI systems by communities—not merely for them—fosters trust and contextual relevance. Metrics include adherence to relevant technical standards and AI accuracy in adapting to new clinical data and environments.
2. Translate Clinical Applications into Public Health Action: Metrics include the rate of translation from clinical AI tools to population-level insights and the maturity of digital public infrastructures (DPIs) enabling clinical-to-public health data feedback.
3. Utilise Public Health Knowledge to Drive Preventive Participation: Metrics include the uptake of systems medicine-based interventions linked to AI-generated insights and the use of public health data in behaviourally informed outreach and precision prevention campaigns.
4. Foster Transparent, Ethical, and Inclusive Governance: Metrics include public trust indicators, such as sentiment analysis and civic panel feedback on how well governance frameworks align with public values and emerging risks.

These principles reflect the global shift towards trustworthy AI—anchored in human oversight, technical robustness, transparency, inclusivity, societal well-being, and accountability. Embedding these values into measurable metrics enables organisations to continuously adapt their strategies—balancing innovation with safety, trust, and impact.
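
As one concrete illustration of continuous improvement, the sketch below monitors a deployed model's discrimination on successive batches of outcomes and triggers retraining when it degrades. The data, drift mechanism, and AUC floor are synthetic assumptions; real monitoring would add calibration, subgroup, and safety metrics, with human review at every trigger.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_batch(n, drift=0.0):
    # Synthetic clinical batch; `drift` shifts the feature-outcome
    # relationship to mimic changing populations or practice patterns.
    X = rng.normal(size=(n, 5))
    logits = X @ np.array([1.0, -0.8, 0.5, 0.0, 0.3]) + drift * 2.0 * X[:, 3]
    y = (1 / (1 + np.exp(-logits)) > rng.uniform(size=n)).astype(int)
    return X, y

# Train once, then monitor discrimination on successive deployment batches.
X0, y0 = make_batch(2000)
model = LogisticRegression(max_iter=1000).fit(X0, y0)

AUC_FLOOR = 0.75  # illustrative retraining trigger, not a regulatory threshold
for month, drift in enumerate([0.0, 0.2, 0.5, 0.9], start=1):
    Xb, yb = make_batch(500, drift)
    auc = roc_auc_score(yb, model.predict_proba(Xb)[:, 1])
    status = "OK" if auc >= AUC_FLOOR else "RETRAIN + HUMAN REVIEW"
    print(f"month {month}: AUC = {auc:.2f} -> {status}")
    if auc < AUC_FLOOR:
        model = LogisticRegression(max_iter=1000).fit(Xb, yb)
```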

--Issue 68--