
Deep Learning in Medical Imaging

Nicola Pastorello

Nicola Pastorello is Data Analytics Manager at Daisee, an Australia-based Artificial Intelligence company bridging the gap between technical AI and commercial application in the fields of vision and natural-language processing. He has extensive experience in applying impactful AI to healthcare-related problems (e.g. predicting epileptic seizures, improving medical diagnoses).

Kim Berry

Kim Berry is Principal Writer for Daisee. She is an accomplished journalist, passionate about creating balanced, concise and informative content. Throughout her career she has written on: AI & technology; environmental science and law; climate change and the rise of the carbon economy; and education.

New Artificial Intelligence (AI) and deep learning techniques can help medical imaging technicians spot anomalies and diagnose conditions in a fraction of the time previously needed, and generally with more accurate results. AI increasingly enables human capabilities like understanding, planning, and perception to be undertaken by software efficiently and at lower cost. Here, we present the most recent results in the field and discuss how AI will change the role of medical imaging professionals.

No field has seen more extensive and successful application of Artificial Intelligence (AI) to real-world problems than imaging and computer vision. Deep learning, a subset of machine learning, is now a significant component in a wide range of vision-based solutions, such as detecting and recognising people in videos and post-processing mobile phone pictures to a professional standard in a matter of seconds.

Specifically trained neural networks now outperform humans in classifying images, with the advantages of massive scalability and fast processing times.

But beyond the world of making us look better in selfies, vision-specialised AI algorithms have a logical application in the medical imaging field. In fact, it is generally considered one of the first domains of application for image-focused AI.

The reasons for this are generally considered to be 1) the availability of large volumes of (digitised) healthcare data and 2) massive improvements in analytic techniques, in particular convolutional neural networks.

The impact of this is enormous. Deep learning can process massive volumes of scans in minutes, a task that would otherwise take hours (if not days) for a trained professional.

In 2016, Frost & Sullivan predicted the AI in healthcare market would reach US$6.6 bn by 2021, a 40 per cent growth rate. It found AI would strengthen medical imaging diagnosis processes as well as enhance care delivery. It could potentially improve outcomes by 30-40 per cent and reduce some treatment costs by as much as 50 per cent.

What AI in Computer Vision can do

Currently, deep learning algorithms (and, in particular, convolutional neural networks) are used in vision problems for:

• the classification of images/frames according to some pre-set labels. In this case, a large number of images (usually in the range of millions) is used, together with previously known “labels”, to train the model. Such a model learns how to match images with labels and can then apply this learning to new images (a minimal classification sketch follows this list);

• finding the position of objects in images (segmentation). The neural network in this case learns to find the pixels associated with an object in an image; and

• generating new images starting from a pool of original pictures. In this case, the use of generative models allows for the production of new (often realistic) images.
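To make the classification case concrete, here is a minimal sketch in Python with PyTorch of how such a model is trained. Everything in it is illustrative: the layer sizes, the two-class labelling (normal vs. anomaly) and the random stand-in data are assumptions, not details of any system discussed in this article.

```python
import torch
import torch.nn as nn

# A small convolutional classifier: conv layers extract image features,
# a final linear layer maps them onto the pre-set diagnostic labels.
class ScanClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Illustrative training loop on random stand-in data (real systems train
# on millions of labelled scans).
model = ScanClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 64, 64)   # batch of 8 single-channel "scans"
labels = torch.randint(0, 2, (8,))   # known labels: 0 = normal, 1 = anomaly

for _ in range(10):                  # a few gradient steps
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```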

In healthcare and medical imaging processing, classification and segmentation are the main applications of convolutional neural networks.
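Segmentation works the same way at the pixel level: rather than one label per image, the network predicts a label for every pixel. A minimal fully convolutional sketch follows (sizes are illustrative; production medical systems typically use richer encoder-decoder architectures such as U-Net).

```python
import torch
import torch.nn as nn

# A minimal fully convolutional network: the output keeps the spatial size
# of the input, with one channel per class, so every pixel gets a label.
segmenter = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, kernel_size=1),   # 2 classes: background / lesion
)

scan = torch.randn(1, 1, 128, 128)     # one single-channel scan
logits = segmenter(scan)               # shape: (1, 2, 128, 128)
mask = logits.argmax(dim=1)            # per-pixel class: the predicted mask
```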

Pros and Cons of Deep Learning

Applying AI algorithms and tools has some obvious advantages over existing manual and human-based processes, as well as some drawbacks.

Firstly, a number of expensive medical errors and misdiagnoses are linked with long working hours and stress caused by prolonged tedious and boring tasks. AI models are not affected by boredom, stress or tiredness.

Secondly, human effort is not easily scalable, whereas deep learning inference can be run in parallel simply by adding processing and computational hardware. A trained medical practitioner needs years of clinical experience before mastering the “art” of correctly and completely interpreting complex scans such as CT, MRI and ultrasound, while a model can be trained on millions of images in hours and then applied effortlessly to new scans to detect and diagnose problems.
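The scalability point can be seen directly in code: once trained, a model processes scans in parallel batches, and throughput grows with hardware rather than headcount. A hedged sketch, with a placeholder model and random stand-in scans:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in archive of 10,000 unlabelled scans awaiting review.
scans = TensorDataset(torch.randn(10_000, 1, 64, 64))

# A placeholder classifier (in practice, a fully trained model).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
).eval()

# Larger batches (or more GPUs) raise throughput with no extra human effort.
loader = DataLoader(scans, batch_size=256)

predictions = []
with torch.no_grad():                  # inference only: no gradients needed
    for (batch,) in loader:
        predictions.append(model(batch).argmax(dim=1))
predictions = torch.cat(predictions)   # one predicted label per scan
```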

This also has an equity aspect. MMC Ventures' 2017 State of AI report noted that traditional diagnosis relies on experienced professionals, whose training is time consuming and whose scalability is limited; this by default inhibits supply and increases costs. As a result, medical diagnosis can be inaccessible in developing economies and prohibitively expensive for many in developed countries.

Automating diagnosis for a growing proportion of conditions will see barriers to access fall rapidly. As the burden of diagnosis transfers from people to software, global access will increase, the report predicts.

Thirdly, there is the issue of consistency. Human diagnosis is not 100 per cent consistent over time or across professionals (i.e. the same scan can be interpreted in different ways by different people, or even by the same person at different times).

Trained AI models are instead consistent (low-variance) in their predictions: in general, the same scan will return the same result.
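This consistency is easy to demonstrate: a trained network in inference mode is a fixed function, so the same scan always produces the same output. A minimal check with a placeholder model:

```python
import torch
import torch.nn as nn

# Placeholder model; .eval() disables any stochastic layers (e.g. dropout).
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 2)).eval()
scan = torch.randn(1, 1, 64, 64)

with torch.no_grad():
    first = model(scan)
    second = model(scan)

# Repeated reads of the same scan return identical outputs: the
# "low variance" property in practice.
assert torch.equal(first, second)
```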

However, some caution is needed in applying AI to diagnostic problems. In particular, for models to become more advanced, and in turn more accurate, more training (tagged) data is needed. This ensures the model learns a thorough representation of the problem space, but such data can also be costly to obtain.

Moreover, a model is only as good as the data used in its training. Since human-tagged data will include errors, there is the risk that in massive datasets these errors might be overlooked and become part of what the model learns.

Another issue, related to model complexity, is the loss of transparency. More complex models (e.g. neural networks) are very hard, in fact almost impossible, to explain: the decision process regarding a scan is encoded in millions, if not billions, of different parameters.

Understanding the interplay between them to produce a result is a prohibitive task for humans. While a number of techniques to overcome this lack of explainability have been proposed over the years, none has delivered a full understanding of a model's decision process.
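One family of those proposed techniques is gradient-based saliency: measuring how strongly each input pixel influences the output highlights which regions of a scan drove a prediction, without fully explaining the decision process. A generic sketch with a placeholder model (this is a standard technique, not the method of any vendor mentioned here):

```python
import torch
import torch.nn as nn

# Placeholder classifier in inference mode.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
).eval()

scan = torch.randn(1, 1, 64, 64, requires_grad=True)
score = model(scan)[0, 1]    # score for the "anomaly" class
score.backward()             # gradients flow back to the input pixels

# The gradient magnitude at each pixel is a (rough) measure of how much
# that pixel influenced the anomaly score: a saliency map.
saliency = scan.grad.abs().squeeze()
```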

For this reason, AI is useful to highlight odd results and anomalies and to surface the results that a medical practitioner should focus on. As with most AI applications, its use in medical imaging comes into its own as a classification tool we can apply to data.
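In practice, such filtering can be as simple as ranking scans by the model's anomaly probability and routing only those above a threshold to a radiologist. An illustrative triage rule (the probabilities and threshold below are made up; a real threshold would be set clinically):

```python
import torch

# Hypothetical anomaly probabilities produced by a trained model for a
# queue of scans (one value per scan).
anomaly_prob = torch.tensor([0.02, 0.91, 0.15, 0.77, 0.05])

THRESHOLD = 0.5  # illustrative only; a real cut-off is tuned clinically
for_review = (anomaly_prob >= THRESHOLD).nonzero().flatten()
print(f"Scans flagged for radiologist review: {for_review.tolist()}")
# -> Scans flagged for radiologist review: [1, 3]
```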

Radiologists and other medical professionals will have incredibly powerful tools to help reduce the risk of misdiagnosis, while also letting them process more scans than ever before.

What’s Already Around

A number of medical imaging software system vendors have already started implementing AI in their tools. Daisee is developing a deep learning application to expedite and improve the diagnostic accuracy of brain scans. The initial pilot will focus on detecting the most common forms of head trauma to redirect clinical attention to cases requiring acute intervention.

Philips included in its Illumeo software suite an AI-powered research tool that retrieves and presents all previous scans for the same patient covering the region (and orientation) of the scan currently in progress. This can be used for subsequent tumour size assessments, where the software measures and compares in real time the changes in the cancer region, speeding up the analysis workflow.
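The kind of real-time comparison described here can be illustrated with segmentation masks: count the pixels labelled as tumour in the prior and current scans and report the change. This is a generic sketch with fabricated masks, not a description of Illumeo's internals.

```python
import torch

# Hypothetical binary tumour masks (1 = tumour pixel) produced by a
# segmentation model on a prior scan and the current scan of a patient.
prior_mask = torch.zeros(128, 128)
prior_mask[40:60, 40:60] = 1      # 400-pixel region in the prior scan

current_mask = torch.zeros(128, 128)
current_mask[38:64, 38:64] = 1    # 676-pixel region in the current scan

prior_area = prior_mask.sum().item()
current_area = current_mask.sum().item()
change = 100 * (current_area - prior_area) / prior_area
print(f"Tumour region changed by {change:+.1f}% between scans")  # +69.0%
```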

IBM Watson is another technology being used to evaluate X-ray exams, in particular to detect signs of surgery and cancer in the images. Watson leverages a massive dataset of millions of pre-evaluated/tagged X-ray scans (obtained by IBM with the acquisition of Merge Healthcare in 2015).

Infervision, in partnership with GE Healthcare, Cisco and Nvidia, pairs Computerised Tomography (CT) scans with AI that learns the core characteristics of lung cancer and then detects suspected cancer features across different CT image sequences. Earlier diagnosis allows doctors to prescribe treatments earlier.

How the Medical Imaging Professional’s Work would Change

In the next few years, AI will change a big portion of our daily and professional lives. In the healthcare space, it will massively reduce the time that medical practitioners and technicians spend on tedious tasks (e.g., browsing through thousands of scan slices to find the few that are meaningful), allowing them to be more productive and spend more time with patients.

Accenture includes automated image diagnosis in its top 10 for AI applications in healthcare and predicts it could be worth US$3bn in the near term. AI in the medical world is moving fast.

In March, researchers at the Athinoula A. Martinos Center for Biomedical Imaging at Massachusetts General Hospital developed a new technique based on AI and machine learning that enables clinicians to acquire higher quality images without having to collect additional data. Called AUTOMAP (Automated Transform by Manifold Approximation), the technique uses deep learning to automatically determine the correct image reconstruction algorithm.

Researchers have taught imaging systems to “see” the way humans learn to see after birth: not through directly programming the brain, but by promoting neural connections to adapt organically through repeated training on real-world examples. According to Bo Zhu, a research fellow at the MGH Martinos Center, this means imaging systems can automatically find the best computational strategies to produce clear, accurate images in a wide variety of imaging scenarios.

Because of its processing speed, AUTOMAP may help make real-time decisions about imaging protocols while the patient is in the scanner.

AUTOMAP is a major advancement for biomedical imaging that would not have been possible just a couple of years ago, before the neural network models and the Graphical Processing Units (GPUs) needed for image reconstruction became available.
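At a high level, AUTOMAP replaces a hand-designed reconstruction (for MRI, essentially an inverse Fourier transform) with a mapping learned end-to-end from raw sensor data to the final image. The sketch below follows that idea only loosely: fully connected layers learn the domain transform and convolutional layers refine the result, with all sizes chosen for illustration rather than taken from the published architecture.

```python
import torch
import torch.nn as nn

N = 64  # illustrative image size (N x N); real scans are larger

# Loosely AUTOMAP-like: fully connected layers learn the mapping from the
# sensor domain (e.g. flattened k-space, real + imaginary parts) to the
# image domain; convolutional layers then refine the reconstructed image.
reconstructor = nn.Sequential(
    nn.Linear(2 * N * N, N * N), nn.Tanh(),
    nn.Linear(N * N, N * N), nn.Tanh(),
    nn.Unflatten(1, (1, N, N)),
    nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(),
    nn.Conv2d(16, 1, 5, padding=2),
)

raw_measurements = torch.randn(1, 2 * N * N)   # stand-in sensor data
image = reconstructor(raw_measurements)        # shape: (1, 1, 64, 64)
```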

Healthcare providers and organisations will need a full understanding of AI: its breadth of application, but also its implications for medical professionals and their work.

Implementing AI will not be an easy task, though: the complexity of widely adopted processes, as well as the presence of legacy systems, will require massive operational changes.
