
Artificial Intelligence and Healthcare

K Ganapathy


K Ganapathy (Neurosurgeon) is a Past President of the Telemedicine Society of India and a former Secretary and Past President of the Neurological Society of India. Formerly an adjunct professor at IIT Madras and Anna University, Chennai, he is currently Emeritus Professor at the Tamil Nadu Dr MGR Medical University. He was formerly an honorary consultant and advisor in Neurosurgery to the Armed Forces Medical Services.

The A in AI should stand for Assisting, Augmenting and Amplifying. AI is at best an extension of NI, so what is artificial about it? Can machine learning, problem solving and other cognitive functions associated with the human mind be deployed in healthcare? The perspectives of a neurosurgeon trained in the BC era are shared here.


The term AI, or Artificial Intelligence, was first introduced by John McCarthy in 1956. I personally feel that the A in AI should stand for Augmenting, Amplifying, Accelerating and Assisting, in an Ambient milieu. What is artificial in AI? AI is, after all, an extension, a by-product of the Natural Intelligence with which Homo sapiens are endowed. Augmented intelligence can help expand the role of any domain expert, whether a musician or a doctor, so they can know more in order to do more. Assistive and ambient intelligence can help people execute mundane secondary operations so they can focus on their primary job. Accelerated engineering, analysis and workflows can help expedite the processing of data or speed up data-rich workflows. AI will become an integral part of the healthcare delivery system only when its adoption results in consistently better outcomes and reduced costs. For AI to be successful, its models must be interpretable, not just intelligent. The more people can understand how a model arrived at its output, the more credible and actionable that output becomes. This is especially important in healthcare, which is incredibly complex and full of confounding factors.

AI is defined as “the use of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”. Tomorrow’s 5P (Predictive, Personalised, Precision, Participatory and Preventive) medicine, when fully functional, will have AI as a major component. This presupposes the availability of genomics, biotechnology, wearable sensors and high-speed, real-time supercomputing. Tomorrow’s healthcare will essentially be Big Data analysis. As 80 per cent of the 41 zettabytes (41 trillion GB) of digital information currently available is unstructured, AI will be required to detect patterns and trends which our grey matter is at present unable to decipher.

Healthcare, normally not an early adopter of new technologies, has seen some of the greatest advances in AI. AI engines today assist doctors in improving diagnoses, picking the right treatment and monitoring care. Bernard J. Tyson, CEO of Kaiser Permanente, told Forbes magazine: “No physician today should be practicing without AI assisting in their practice. It’s just impossible (otherwise) to pick up on patterns, to pick up on trends, to really monitor care.”

‘Chipping’ perhaps indicates the cultural transformation towards accepting AI in our daily lives. Individuals are having RFID (Radio-Frequency Identification) microchips injected into their hands so they can open office doors, log in to computers, share business cards, and even buy snacks with just a wave. Eye Control now makes Windows 10 more accessible by empowering people with disabilities to operate an onscreen mouse, keyboard and text-to-speech experience using only their eyes and a compatible eye tracker such as the Tobii 4C.

Who is liable if an AI system makes a false decision or prediction? Who will build in safety features? How will the economy respond if AI makes certain jobs redundant? Forecasting and prediction in AI are based on precedent. Machine learning algorithms can underperform in novel cases, such as new drug side effects or treatment resistance, where there is no prior example to build on. Hence, AI may not replace tacit knowledge that cannot easily be codified. As is common with technological advances, AI could replace jobs that previously required humans, particularly repetitive types of jobs or actions in healthcare.

Machine learning, the basis of AI, is a field of computer science that gives computers the ability to learn without being explicitly programmed. Evolving from the study of pattern recognition and computational learning theory, these algorithms can learn from and make predictions on data. These analytical models allow researchers, data scientists, engineers, and analysts to "produce reliable, repeatable decisions and results" and uncover "hidden insights" through learning from historical relationships and trends in the data. Can this be extrapolated to individual patient care?
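As a rough illustration of learning from historical relationships in the data rather than being explicitly programmed, here is a minimal sketch using the open-source scikit-learn library; the clinical features, values and outcome labels are purely hypothetical assumptions made for illustration.

```python
# Minimal sketch: a model "learns" from past cases instead of being given rules.
# All feature names, values and labels below are hypothetical illustrations.
from sklearn.tree import DecisionTreeClassifier

# Historical records: [age, systolic BP, HbA1c]; label 1 = complication occurred
X_history = [
    [45, 118, 5.4],
    [54, 130, 6.1],
    [67, 150, 8.2],
    [72, 160, 9.0],
]
y_history = [0, 0, 1, 1]

model = DecisionTreeClassifier(random_state=0)
model.fit(X_history, y_history)        # learn patterns from historical data

new_patient = [[60, 145, 7.5]]
print("Predicted outcome:", model.predict(new_patient)[0])
```

No diagnostic rule is written anywhere in this sketch; the model infers whatever pattern links the features to past outcomes, which is precisely why the quality and representativeness of the historical data determine whether such predictions can be extrapolated to an individual patient.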

Where are We Heading?

A robot made in China scored 456 in a national-level qualifying examination for doctors, against a pass mark of 350. It answered the same test, in the same time, in a designated room without internet access! The robot had mastered self-learning and problem-solving abilities to a degree. The robot, now certified, will soon make home visits – of course, in driverless cars! Saudi Arabia has gone one step further: it became the first country to grant citizenship to a robot. Sophia, a very attractive, intelligent robot, was introduced at a large investment conference in the Saudi capital, Riyadh. She was able to think and respond appropriately at a global press interview. Mirai, a 7-year-old humanoid chatbot whose name means 'future' in Japanese and who functions on the Line messaging app, has become a resident of Shibuya, a Tokyo ward with a population of around 224,000 people. The goal is to "make the district's local government more familiar to residents and allow officials to hear their opinions.”

Key Factors Driving Growth of AI & Deep Learning in Clinical Applications

• Analysis of Patient’s Electronic Health Records
• Population Health Management
• Predictive Care Guidance
• Effectiveness of Care Metrics
• Physician and Hospital Error Reduction
• Medical Image Processing
• Oncology Diagnostic Assistance
• Large datasets, GPUs (Graphics Processing Units) and algorithms
• Convolutional Neural Networks inspired by the brain
• Medical knowledge is doubling every few years
• 80 per cent of medical data is unstructured text and images
• Aid in early detection & prevention
• Decrease diagnosis & treatment errors
• Ideally suited for telemedicine deployment.

AI for the Consumer

• Diet Guidance
• Cost Comparison
• Predictive Modelling
• Social Networking
• Wearables Integration
• Virtual Personal Assistants
• Personalised Activity Coaching

AI for Payers and Providers

• Elimination of Unnecessary Procedures and Costs
• Claims Processing
• Physician, Hospital Staff & Patient Training & Education
• Supply Costs and Management
• Interoperability & Security
• Staff Management
• Network Co-ordination

AI in Workflow Optimisation

• Privacy of patient data, HIPAA Regulations
• Integrating into existing workflows, EMRs & systems
• Resistance to change. Educate yourself and your team about AI
• Partner with domain experts in health and AI
• Solve the easy problems first with AI to gain acceptance
• Focus on goals and patient outcomes, not just the technology
• View AI as a tool complementing human expertise & not as a replacement
• Fear about AI taking jobs.

AI in Radiology

• Filtering incoming images by priority using deep learning. The algorithm examines the images for signs of brain hemorrhage or stroke; if it detects one of the flagged findings, the patient moves up the priority list to have their images analysed first. If the algorithm does not detect any critical findings, the patient’s case falls towards the bottom of the priority list (a minimal sketch of this prioritisation appears after this list)
• Image quality control, imaging triage, efficient image creation, computer-aided detection, computer-aided classification and automatic report drafting
• Deep learning algorithms can improve MRI image quality, even notifying the technologist that images are too fuzzy to be read accurately. Better MRI image quality will reduce patient time in the machine
• InnerEye is AI-powered computer vision designed to dramatically improve the productivity of radiologists, trained on thousands of 3-D images and radiologist inputs
• Diagnosing skin cancer: a Stanford study trained an AI model on a database of more than 130,000 images of skin lesions representing over 2,000 different diseases. Accuracy was measured by the ability to correctly identify both malignant lesions (sensitivity) and benign lesions (specificity), evaluated against diagnoses by 21 dermatologists. The model achieved an accuracy of ~91%, matching the diagnostic accuracy of the dermatologists
• Both IBM’s Watson for Genomics and Watson for Oncology are examples of state-of-the-art AI systems currently available to doctors and scientists. In research, the Watson team is also working on a system, called Medical Sieve, that takes this classification and reasoning to the next level by applying it to images, to identify instances of breast cancer and cardiac disease. The Google Research team has been focusing on increasing the accuracy of detecting lesion-level tumors in giga-pixel microscopy images
• Image analysis and diagnosis: using AI for diabetic retinopathy, input images from fundus cameras pass through a deep learning model that outputs a diagnosis grading diabetic and hypertensive retinopathy. The Google Research team is developing state-of-the-art computer vision systems for reading retinal fundus images for diabetic retinopathy and for detecting lesion-level tumors in giga-pixel microscopy images.
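A minimal sketch of the worklist prioritisation described in the first bullet above, assuming a pre-trained deep learning model that returns, for each study, a probability of hemorrhage or stroke; the study IDs, scores and threshold are illustrative assumptions, not any vendor's implementation.

```python
# Hypothetical radiology worklist triage: studies the model flags as likely
# hemorrhage/stroke are moved to the top of the reading queue.
from dataclasses import dataclass

@dataclass
class Study:
    study_id: str
    critical_probability: float   # output of a (hypothetical) pre-trained model

def prioritise(worklist: list[Study], threshold: float = 0.5) -> list[Study]:
    flagged = [s for s in worklist if s.critical_probability >= threshold]
    routine = [s for s in worklist if s.critical_probability < threshold]
    # Flagged cases first, most confident first; everything else keeps its order.
    return sorted(flagged, key=lambda s: -s.critical_probability) + routine

worklist = [Study("CT-1001", 0.07), Study("CT-1002", 0.91), Study("CT-1003", 0.62)]
for study in prioritise(worklist):
    print(study.study_id, study.critical_probability)
```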

Illustrations of the Deployment of AI in Healthcare

AI in Mental Health

• Cognitive science is a field at the intersection of computer science and psychology. AI engines today not only simulate human conversation; they listen, learn, plan, and solve problems
• For a student worried about exam results, a new mother dealing with postpartum depression, or a successful businessman with a gambling addiction, talking to their families is often not an option. Even if they share how they feel, it is likely that they will be judged rather than supported
• AI can make a great coach, helping people learn mental health skills in a safe yet personalised environment. A bot does not judge, and could be the first step in helping to find support. Wysa is an AI-based, emotionally intelligent penguin. It listens, chats with and helps users build mental resilience by learning skills like reframing negative thoughts and mindfulness. ‘She’ is a non-judgemental, empathetic resource with whom one can share just about anything, anytime, and anonymously. She can be trained in different languages and cultural contexts, establishing trust and connection across socio-economic strata. In three months, Wysa had crossed a million conversations with fifty thousand users. Over five hundred people had written in to say how much it had helped them with a mental health problem, and while it was clearly new and still learning, it was better than any other option they had. Some of these users had been suicidal; others lived with Post Traumatic Stress Disorder, social anxiety, depression or bipolar disorder. Practitioners started offering Wysa between therapy sessions as a way of practicing skills. For millions who feel lonely and don’t have a support system of friends and therapists around them, AI may well build resilience, provide support and save lives.

Predictive clinical analytics is a major component of AI in healthcare. The process of feeding historical patient data into models to identify and forecast future events, such as the likelihood of a patient relapsing, is a sophisticated tool for risk assessment, and risk assessment is the basis of every doctor-patient interaction. Microsoft’s developments in AI for healthcare have “followed the data.” Healthbot enables healthcare organisations to create conversational interfaces that provide always-accessible information about health, wellness and benefits, along with intelligent triage, making use of natural language and conversational AI models developed for Bing, Xbox, Cortana and more. Patients can, anytime and anywhere, converse with an intelligent health agent, Microsoft’s Healthbot, go through an efficient triage, and then be intelligently handed off to a nurse or physician in a way that is more efficient, costs less and is more satisfying. The dependence of today’s AI on data has major consequences for healthcare. The industry is highly regulated, and so is access to and use of medical data. As systems are set up for accessing healthcare data in a compliant way, we will increasingly see the application of AI and machine learning.
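The "historical data in, risk forecast out" pattern described above can be sketched in a few lines; the following uses logistic regression on invented relapse data, so every feature, value and label is an assumption made purely for illustration.

```python
# Sketch of predictive clinical analytics: fit a risk model on historical
# outcomes, then forecast relapse likelihood for a current patient.
# All numbers are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Features: [prior admissions, days since discharge, medication adherence %]
X_history = [
    [0, 200, 95],
    [3,  20, 60],
    [1,  90, 80],
    [4,  10, 40],
    [0, 300, 90],
    [2,  45, 70],
]
y_relapsed = [0, 1, 0, 1, 0, 1]

model = LogisticRegression(max_iter=1000).fit(X_history, y_relapsed)

current_patient = [[2, 30, 65]]
risk = model.predict_proba(current_patient)[0][1]
print(f"Forecast relapse probability: {risk:.2f}")
```

In a regulated setting, the hard part is rarely a model like this one; it is compliant access to the historical records the model learns from, which is exactly the point made above.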

Augmenting Natural Intelligence

Eric Leuthardt, a neurosurgeon-entrepreneur, believes that chips can be inserted into the brain to connect it with the internet. With no room for hubris or delusion, he exemplifies the certitude of a true believer. He is not alone. Last March, Elon Musk, founder of Tesla and SpaceX, launched Neuralink, a venture aiming to create devices that facilitate mind-machine melds. Facebook’s Mark Zuckerberg has expressed similar dreams, and last spring his company revealed that it has 60 engineers working on building interfaces that would let you type using just your mind. Bryan Johnson, the founder of the online payment system Braintree, is using his fortune to fund Kernel, a company that aims to develop neuroprosthetics which he hopes will eventually boost intelligence, memory, and more. These devices are not just better hardware to facilitate seamless mechanical connection and communication between silicon computers and the messy grey matter of the human brain. They need to have sufficient computational power to make sense of the mass of data produced at any given moment, with many of the brain’s 100 billion neurons firing. One of neuroscience’s most daunting tasks is to break the code by which neurons talk to each other and to the rest of the body, developing the capacity to actually listen in and make sense of precisely how brain cells allow us to function.

Software relies on pattern recognition algorithms: specific programmes can be trained to recognise the activation patterns of groups of neurons associated with a given task or thought. With 50 to 200 electrodes, each producing 1,000 readings per second, the programmes must churn through a dizzying number of variables. The more electrodes, and the smaller the population of neurons per electrode, the better the chance of detecting meaningful patterns, provided sufficient computing power can be brought to bear to sort out irrelevant noise. One has to extract the one thing one is really interested in, and that is not so straightforward.
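To give a feel for the scale of the numbers quoted above, and for what recognising an activation pattern means in practice, here is a small sketch on simulated data; the electrode count, the two imagined tasks and the nearest-template decoder are illustrative assumptions, not a description of any actual system.

```python
# Scale of the decoding problem: 50-200 electrodes x 1,000 readings per second.
import numpy as np

for electrodes in (50, 200):
    print(f"{electrodes} electrodes -> {electrodes * 1000:,} readings per second")

# Toy pattern recognition: classify a one-second window of simulated activity
# by comparing it with the average ("template") pattern of each known task.
rng = np.random.default_rng(0)
n_electrodes, n_samples = 64, 1000

def simulated_window(task_bias):
    # Noise plus a small task-specific offset across the channels.
    return rng.normal(0.0, 1.0, (n_electrodes, n_samples)) + task_bias

templates = {
    "imagine_left_hand":  simulated_window(+0.5).mean(axis=1),
    "imagine_right_hand": simulated_window(-0.5).mean(axis=1),
}

new_window = simulated_window(+0.5).mean(axis=1)
decoded = min(templates, key=lambda t: np.linalg.norm(new_window - templates[t]))
print("Decoded intention:", decoded)
```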

A prosthetic implant could allow a patient to use a computer and control a cursor in 3D space. Users could turn lights on and off, or turn the heat up and down, using their thoughts alone. The device consists of brain-monitoring electrodes that sit on the scalp and are attached to an arm orthosis; it can detect a neural signature for intended movement before the signal reaches the motor area of the brain. These neural signals arise on the opposite side of the brain from the area usually destroyed by the stroke, and are thus usually spared any damage. By detecting them, amplifying them, and using them to control a device that moves the paralysed limb, Leuthardt has found that he can actually help a patient regain independent control over the limb, far faster and more effectively than is possible with any approach currently on the market. Importantly, the device can be used without brain surgery.

"The good physician treats the disease; the great physician treats the patient who has the disease; Medicine is a science of uncertainty and an art of probability. One of the first duties of the physician is to educate the masses not to take medicine. Listen, listen, listen the patient is telling you the diagnosis”. I often wonder how Sir William Osler, author of the above statements would respond to the introduction of ‘Artificial Intelligence’ in healthcare, 110 years later. For centuries, the essence of practicing medicine has been a physician obtaining as much data about the patient’s health or disease as possible and making decisions. White hair (like I have) presupposed experience, judgement, and problem-solving skills using rudimentary tools and limited resources. Today with the imminent disruptive technology, fashionably termed AI  is poised to become a reality even in healthcare, do we need to sit back and critically evaluate what this would actually mean.

Altruism, benevolence, compassion, commiseration, concern, consideration, empathy, humanity, kindness, knowledge, sympathy, trust, understanding, and wisdom: these are the characteristics a doctor of the twentieth century was identified with. Will the Sophias and Mirais of the next decade shed tears when a patient dies? Who knows, maybe they will! But the old, individualised, sanctified family doctor-patient relationship is being replaced with terms like ‘the healthcare industry’, ‘healthcare provider’, ‘consumer’, ‘client’, ‘predictive analytics’, ‘machine learning’ and AI. Doctors of today, beware: your grandchildren will consider you a relic belonging to the Neolithic era. So buck up, and become familiar with ‘deep learning’ and ‘Bayesian networks’ if you want to understand the language tomorrow’s medical students will be talking. These are, of course, the perspectives of a neurosurgeon trained in the BC era! Charles Dickens began his immortal ‘A Tale of Two Cities’ with the statement: “It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us”. He could very well have been referring to AI and NI. After all, good and evil are two sides of the same coin.

--Issue 39--