Artificial Intelligence in Healthcare and Biomedical Research
Artificial intelligence is transforming healthcare through enhanced diagnostics, clinical decision support, and drug discovery. Advances in imaging, ambient documentation, and generative design are reshaping workflows and accelerating innovation. Regulatory frameworks and investment trends reflect growing adoption, while challenges in bias, privacy, and generalizability persist. India's evolving policy and infrastructure offer promising models for ethical, scalable deployment.
1. Introduction
AI has moved from proof-of-concept to translation in core clinical tasks, while simultaneously changing the economics of discovery science. Two technological currents underpin this shift: 1) representation learning at scale for perception and prediction (medical imaging, EHR-based risk models, language-based clinical reasoning), and 2) generative models for molecules and mechanisms, catalysed by breakthroughs in protein structure and interaction prediction. Regulatory frameworks and capital markets are adjusting in parallel, with the FDA operationalising adaptive oversight for learning systems and the EU finalising comprehensive cross-sector rules for high-risk AI (1,2).
2. Technology landscape: where value is accruing
2.1 Diagnostic imaging
Breast cancer screening. Large evaluations now show AI can support readers and increase cancer detection without increasing recalls. Recent nationwide, real-world deployments and controlled studies report higher detection rates when AI assists radiologists during population screening (3,4).
Tuberculosis screening. The World Health Organization has recommended computer-aided detection (CAD) software for automated chest radiograph interpretation since 2021, updated in 2025 with operational guidance (5). Independent evaluations across high-burden settings show CAD meeting triage sensitivity targets and reducing downstream molecular tests (6).
Generalizability and oversight. By August 2024, at least 900 FDA-listed AI-enabled devices were catalogued, largely in radiology, with new analyses examining evidence maturity and generalizability (7).
2.2 Clinical decision support with language models
Clinical reasoning benchmarks show rapid progress, yet results in prospective or workflow-embedded evaluations remain mixed. Med-PaLM 2 reported performance approaching expert level on medical question-answering benchmarks, while pragmatic studies highlight both improvements and limitations when LLMs augment physicians (8). Experimental systems for complex case diagnosis show promise but require clinical validation and safety guardrails (9,10).
Ambient AI documentation. Early real-world studies and health-system analyses report reduced documentation time and after-hours “pajama time,” with corresponding reductions in perceived burden. Independent assessments caution that financial ROI evidence is immature, and RCTs are underway (11–14).
2.3 Drug discovery and development
Structure and interaction prediction. AlphaFold 2 transformed single-protein structure prediction; AlphaFold 3 extended to complexes and biomolecular interactions, accelerating hypothesis generation for medicinal chemistry and biologics design (15).
Generative and target-discovery platforms. In June 2025, a generative-AI–discovered TNIK inhibitor, rentosertib, reported phase IIa results in idiopathic pulmonary fibrosis, marking a milestone for model-driven target discovery and molecular design (16).
3. Evidence highlights across care domains
Sepsis early warning. Prospective, multi-site deployments of EHR-based early-warning systems have shown mortality reductions and earlier antibiotic administration, while also illustrating the importance of precision alerting and human-in-the-loop monitoring (17).
Cancer screening at scale. Real-world AI-assisted mammography programs have demonstrated increased cancer detection, informing policies on reader replacement versus augmentation (18).
TB programs in LMICs. CAD tools support high-throughput triage. WHO guidance now details threshold selection, human quality control, and programmatic integration, relevant to national TB programs (19).
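The threshold-selection step in the WHO guidance can be sketched in code. A minimal illustration, assuming a labelled local calibration set of CAD abnormality scores: pick the highest score threshold that still meets a programmatic triage sensitivity target (function and data are hypothetical, not from any vendor's API).

```python
def pick_triage_threshold(scores, labels, target_sensitivity=0.90):
    """Choose the highest CAD score threshold that still meets a
    target sensitivity on a labelled calibration set.
    scores: abnormality scores in [0, 1]; labels: 1 = confirmed TB."""
    candidates = sorted(set(scores), reverse=True)
    positives = sum(labels)
    for t in candidates:
        tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= t)
        if tp / positives >= target_sensitivity:
            return t  # highest threshold meeting the sensitivity target
    return min(candidates)  # fall back to flagging everyone

# Toy calibration set: 4 confirmed TB cases, 4 negatives
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.20, 0.85, 0.10]
labels = [1,    1,    1,    0,    0,    0,    1,    0]
t = pick_triage_threshold(scores, labels, target_sensitivity=0.75)
```

A higher threshold reduces downstream molecular tests at the cost of sensitivity, which is why the guidance recommends site-specific calibration rather than vendor defaults.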
Caveats. Bias and failure modes persist. AI can infer patient race from images even when imperceptible to humans, and hidden stratification can cause clinically meaningful errors in under-represented subgroups, reinforcing the need for subgroup performance reporting, external validation, and post-market surveillance (20,21).
4. Regulation and governance: toward adaptive, risk-proportionate oversight
United States. The FDA finalised guidance on Predetermined Change Control Plans (PCCPs) for AI-enabled devices, specifying how manufacturers can pre-specify permissible model updates. Convergent “good machine learning practice” principles were jointly articulated with peer regulators (1).
European Union. The EU AI Act classifies most medical AI as high-risk, triggering requirements for risk management, data governance, transparency, human oversight, and post-market monitoring, with staged obligations following entry into force. Interfaces with MDR/IVDR are now being clarified (2).
Global coordination. WHO guidance provides ethical principles for AI in health, with 2025 updates addressing large multimodal models, complementing emerging consensus frameworks such as FUTURE-AI for trustworthy and deployable medical AI (22,23).
5. Investment trends and market signals
The Stanford AI Index reports private AI investment of $109.1 billion in 2024, with the United States attracting the majority and health among the most active verticals. In 2025, leading generative drug-design platforms raised substantial rounds, exemplified by the $600 million round raised by Isomorphic Labs. While market forecasts for “AI in healthcare” vary widely by methodology, they consistently project rapid growth through 2030–2032 (24–26).
Interpretation for clinicians and policymakers. Capital concentration in foundation-model platforms and drug discovery partnerships signals a long horizon for returns, with nearer-term adoption concentrated in imaging triage, documentation automation, and specific decision-support niches. Health-system buyers should prioritise use cases with measurable operational ROI and clinical end points, while investors should interrogate evidence generation plans, regulatory strategy, and data assets.
6. Risk management: data protection, bias, and reliability
Bias and fairness. A seminal analysis showed racial bias in a widely used risk-prediction algorithm due to cost-based labels. Imaging studies demonstrate that models can infer race from pixels, elevating risks of disparate performance (27). Mitigation requires problem-framing audits, data and label provenance, subgroup metrics, and routine bias testing during updates (20).
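Routine subgroup reporting of the kind described above can be sketched simply. A minimal example, assuming binary predictions and a group label per patient (all names here are illustrative, not a standard library API):

```python
from collections import defaultdict

def subgroup_report(y_true, y_pred, groups):
    """Per-subgroup sensitivity and false-positive rate for a binary
    classifier; gaps across groups flag candidates for bias review."""
    buckets = defaultdict(lambda: {"tp": 0, "fn": 0, "fp": 0, "tn": 0})
    for y, p, g in zip(y_true, y_pred, groups):
        key = ("tp" if p else "fn") if y else ("fp" if p else "tn")
        buckets[g][key] += 1
    report = {}
    for g, c in buckets.items():
        sens = c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else None
        fpr = c["fp"] / (c["fp"] + c["tn"]) if c["fp"] + c["tn"] else None
        report[g] = {"sensitivity": sens, "fpr": fpr, "n": sum(c.values())}
    return report

r = subgroup_report(
    y_true=[1, 1, 0, 0, 1, 0],
    y_pred=[1, 0, 0, 1, 1, 0],
    groups=["A", "A", "A", "B", "B", "B"],
)
```

Running such a report at every model update, not only at initial validation, is what turns subgroup metrics into the "routine bias testing" the text calls for.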
Privacy-preserving learning. Federated learning and related approaches can enable multi-site training without centralising data, though privacy leakage and systems complexity remain active research areas (28).
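The core aggregation step of federated learning can be illustrated with a FedAvg-style sketch, weighting each site's locally trained parameters by its sample count. This is a toy illustration only; real deployments layer on secure aggregation and differential privacy, which are omitted here.

```python
def federated_average(site_weights, site_sizes):
    """One FedAvg-style aggregation round: average each model parameter
    across sites, weighted by each site's number of training samples.
    Patient-level data never leaves the site; only parameters move."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
        for i in range(n_params)
    ]

# Two hospitals with locally trained parameter vectors
hospital_a = [0.2, 0.4]   # trained on 100 records
hospital_b = [0.6, 0.8]   # trained on 300 records
global_model = federated_average([hospital_a, hospital_b], [100, 300])
```

The privacy caveat in the text applies even to this sketch: shared parameters can still leak information about training data, which is why aggregation alone is not a complete privacy guarantee.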
Reliability of generative systems. LLMs can aid triage and documentation but may hallucinate or propagate subtle clinical errors, underscoring the need for bounded tasks, verifiable outputs, and human verification workflows (29).
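One bounded-task guardrail of the kind described can be sketched: flagging watchlist terms (e.g., drug names) that appear in a generated summary but not in the source note, as a cheap pre-screen before human review. This is a hypothetical illustration using string matching; a production system would use clinical NLP and terminology normalisation.

```python
def unsupported_terms(summary, source_note, watchlist):
    """Return watchlist terms that appear in a generated summary but
    not in the source note — a cheap check for fabricated content
    that routes the draft to human verification."""
    summary_l, source_l = summary.lower(), source_note.lower()
    return [
        term for term in watchlist
        if term.lower() in summary_l and term.lower() not in source_l
    ]

note = "Patient on metformin 500 mg twice daily; denies chest pain."
draft = "Continues metformin; started lisinopril for hypertension."
flags = unsupported_terms(draft, note, ["metformin", "lisinopril", "aspirin"])
```

A non-empty flag list does not prove an error, only that the claim lacks support in the source, which is exactly the kind of verifiable-output constraint the text recommends.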
7. Implementation playbook for health systems
Step 1: Problem selection. Choose high-volume, time-sensitive workflows with measurable outcomes, for example imaging triage or ambient documentation.
Step 2: Evidence and validation. Require external validation on local data and pre-specified subgroups, with health-economic endpoints.
Step 3: Safety case. Map hazards, define human oversight, and adopt PCCP-aligned update plans.
Step 4: Data governance. Use robust consent, audit, and monitoring; consider federated or enclave models for multi-institutional learning.
Step 5: Post-deployment monitoring. Track drift, performance across subgroups, and incident reporting (1,28).
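Drift tracking in the final step can be made concrete with the population stability index (PSI) over a model input or output score. A minimal sketch with fixed bin edges; the 0.2 alert level used here is a common industry rule of thumb, not a clinical standard.

```python
import math

def psi(expected, actual, cuts):
    """Population stability index between a baseline distribution
    (expected) and a live monitoring window (actual), computed over
    shared bin edges. Larger values indicate greater drift."""
    def shares(values):
        counts = [0] * (len(cuts) + 1)
        for v in values:
            counts[sum(v > c for c in cuts)] += 1
        n = len(values)
        # Smooth empty bins so the log term stays finite
        return [max(c / n, 1e-6) for c in counts]
    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]  # validation scores
live     = [0.5, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9, 0.9]  # deployed scores
drift = psi(baseline, live, cuts=[0.33, 0.66])
alert = drift > 0.2  # rule-of-thumb drift threshold
```

A PSI alert is a trigger for investigation (case mix change, upstream data change, scanner replacement), not by itself evidence that model performance has degraded.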
8. India spotlight: policy, infrastructure, and use cases
Policy and data protection. India’s Digital Personal Data Protection Act, 2023, establishes consent and fiduciary obligations for digital personal data, with implications for health data flows and AI training. The Ayushman Bharat Digital Mission’s Health Data Management Policy defines minimum privacy standards for the federated national health stack. NITI Aayog’s Responsible AI approach documents frame ethical deployment (30).
Deployment exemplars. Indian-origin imaging AI has achieved FDA clearances for head CT triage and is widely evaluated for TB screening at scale, aligning with WHO guidance (31). Partnerships announced in 2025 aim to operationalise AI for cardiovascular risk and “hospital of the future” initiatives.
Implications. India’s mix of large public programs, digital ID rails, and private tertiary systems creates fertile ground for AI in screening, tele-radiology, and documentation support, provided procurement embeds validation, privacy-by-design, and subgroup monitoring.
9. Outlook: a pragmatic path to impact
Near-term value will continue to accrue in 1) perception tasks with robust labels and clear service-level metrics, 2) workflow automation that returns clinician time, and 3) target- and lead-generation that shortens the discovery cycle. Sustained impact depends on better datasets, transparent update mechanisms, bias mitigation, and rigorous, peer-reviewed outcome studies. Investors should expect a barbell of short-cycle operational tools and long-cycle therapeutics bets. Policymakers can accelerate safe scale-up through harmonised guidance, sandboxing, and public procurement that rewards transparent evidence.
References
- US Food and Drug Administration, Center for Devices and Radiological Health. Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions [Internet]. FDA; 2025 [cited 2025 Nov 1]. Available from: https://www.fda.gov/regulatory-information/search-fda-guidance-documents/marketing-submission-recommendations-predetermined-change-control-plan-artificial-intelligence
- Aboy M, Minssen T, Vayena E. Navigating the EU AI Act: implications for regulated digital medical products. Npj Digit Med. 2024 Sep 6;7(1):237.
- Eisemann N, Bunk S, Mukama T, Baltus H, Elsner SA, Gomille T, et al. Nationwide real-world implementation of AI for cancer detection in population-based mammography screening. Nat Med. 2025 Mar;31(3):917–24.
- Chang YW, Ryu JK, An JK, Choi N, Park YM, Ko KH, et al. Artificial intelligence for breast cancer screening in mammography (AI-STREAM): preliminary analysis of a prospective multicenter cohort study. Nat Commun. 2025 Mar 6;16:2248.
- Use of computer-aided detection software for tuberculosis screening: WHO policy statement [Internet]. [cited 2025 Nov 1]. Available from: https://www.who.int/publications/i/item/9789240110373
- Qin ZZ, van der Walt M, Moyo S, Ismail F, Maribe P, Denkinger CM, et al. Computer-aided detection of tuberculosis from chest radiographs in a tuberculosis prevalence survey in South Africa: external validation and modelled impacts of commercially available artificial intelligence software. Lancet Digit Health. 2024 Sep 1;6(9):e605–13.
- Windecker D, Baj G, Shiri I, Kazaj PM, Kaesmacher J, Gräni C, et al. Generalizability of FDA-Approved AI-Enabled Medical Devices for Clinical Use. JAMA Netw Open. 2025 Apr 30;8(4):e258052.
- Singhal K, Tu T, Gottweis J, Sayres R, Wulczyn E, Amin M, et al. Toward expert-level medical question answering with large language models. Nat Med. 2025 Mar;31(3):943–50.
- Goh E, Gallo R, Hom J, Strong E, Weng Y, Kerman H, et al. Large Language Model Influence on Diagnostic Reasoning: A Randomized Clinical Trial. JAMA Netw Open. 2024 Oct 28;7(10):e2440969.
- Milmo D. Microsoft says AI system better than doctors at diagnosing complex health conditions. The Guardian [Internet]. 2025 Jun 30 [cited 2025 Nov 1]; Available from: https://www.theguardian.com/technology/2025/jun/30/microsoft-ai-system-better-doctors-diagnosing-health-conditions-research
- Albrecht M, Shanks D, Shah T, Hudson T, Thompson J, Filardi T, et al. Enhancing clinical documentation with ambient artificial intelligence: a quality improvement survey assessing clinician perspectives on work burden, burnout, and job satisfaction. JAMIA Open. 2025 Feb 21;8(1):ooaf013.
- Ma SP, Liang AS, Shah SJ, Smith M, Jeong Y, Devon-Sand A, et al. Ambient artificial intelligence scribes: utilization and impact on documentation time. J Am Med Inform Assoc JAMIA. 2025 Feb 1;32(2):381–5.
- Rubio M. Analysis: AI scribes save physicians time, improve patient interactions and work satisfaction [Internet]. Permanente Medicine. 2025 [cited 2025 Nov 1]. Available from: https://permanente.org/analysis-ai-scribes-save-physicians-time-improve-patient-interactions-and-work-satisfaction/
- UCLA. UCLA Physician Workflow Trial: Ambient Artificial Intelligence Scribe Technologies [Internet]. 2024 [cited 2025 Nov 1]. Available from: https://ucla.clinicaltrials.researcherprofiles.org/trial/NCT06792890
- Jumper J, Evans R, Pritzel A, Green T, Figurnov M, Ronneberger O, et al. Highly accurate protein structure prediction with AlphaFold. Nature. 2021 Aug;596(7873):583–9.
- Xu Z, Ren F, Wang P, Cao J, Tan C, Ma D, et al. A generative AI-discovered TNIK inhibitor for idiopathic pulmonary fibrosis: a randomized phase 2a trial. Nat Med. 2025 Aug;31(8):2602–10.
- Adams R, Henry KE, Sridharan A, Soleimani H, Zhan A, Rawat N, et al. Prospective, multi-site study of patient outcomes after implementation of the TREWS machine learning-based early warning system for sepsis. Nat Med. 2022 Jul;28(7):1455–60.
- Eisemann N, Bunk S, Mukama T, Baltus H, Elsner SA, Gomille T, et al. Nationwide real-world implementation of AI for cancer detection in population-based mammography screening. Nat Med. 2025 Mar;31(3):917–24.
- Use of computer-aided detection software for tuberculosis screening: WHO policy statement [Internet]. [cited 2025 Nov 1]. Available from: https://www.who.int/publications/i/item/9789240110373
- Gichoya JW, Banerjee I, Bhimireddy AR, Burns JL, Celi LA, Chen LC, et al. AI recognition of patient race in medical imaging: a modelling study. Lancet Digit Health. 2022 Jun 1;4(6):e406–14.
- Oakden-Rayner L, Dunnmon J, Carneiro G, Ré C. Hidden Stratification Causes Clinically Meaningful Failures in Machine Learning for Medical Imaging. Proc ACM Conf Health Inference Learn. 2020 Apr;2020:151–9.
- Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models [Internet]. [cited 2025 Nov 1]. Available from: https://www.who.int/publications/i/item/9789240084759
- Lekadir K, Feragen A, Fofanah AJ, Frangi AF, Buyx A, Emelie A, et al. FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare [Internet]. arXiv; 2024 [cited 2025 Nov 1]. Available from: http://arxiv.org/abs/2309.12325
- Economy | The 2025 AI Index Report | Stanford HAI [Internet]. [cited 2025 Nov 1]. Available from: https://hai.stanford.edu/ai-index/2025-ai-index-report/economy
- Google-backed AI drug discovery startup raises $600 million. Reuters [Internet]. 2025 Mar 31 [cited 2025 Nov 1]; Available from: https://www.reuters.com/technology/artificial-intelligence/google-backed-ai-drug-discovery-startup-raises-600-million-2025-03-31/
- Isomorphic Labs. Isomorphic Labs announces $600 million funding to further develop its next-generation AI drug design engine and advance therapeutic programs into the clinic [Internet]. [cited 2025 Nov 1]. Available from: https://www.prnewswire.com/news-releases/isomorphic-labs-announces-600-million-funding-to-further-develop-its-next-generation-ai-drug-design-engine-and-advance-therapeutic-programs-into-the-clinic-302415534.html
- Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019 Oct 25;366(6464):447–53.
- Rieke N, Hancox J, Li W, Milletarì F, Roth HR, Albarqouni S, et al. The future of digital health with federated learning. Npj Digit Med. 2020 Sep 14;3(1):119.
- McCoy LG, Manrai AK, Rodman A. Large Language Models and the Degradation of the Medical Record. N Engl J Med. 2024 Oct 30;391(17):1561–4.
- NITI Aayog, India [Internet]. [cited 2025 Nov 1]. Available from: https://niti.gov.in/
- Codlin AJ, Dao TP, Vo LNQ, Forse RJ, Van Truong V, Dang HM, et al. Independent evaluation of 12 artificial intelligence solutions for the detection of tuberculosis. Sci Rep. 2021 Dec 13;11(1):23895.