
Make Big Data Economical and Actionable for Faster and Better Healthcare

Yogesh Sawant



In today’s Healthcare and Life Science (HLS) environment, healthcare big data is everywhere. For example, most physicians now provide consultation and diagnosis by reviewing a patient’s history in Electronic Medical Records (EMR); radiologists make treatment decisions with medical images such as X-rays, ultrasounds, Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans; and life science researchers study disease through high-resolution digital images of tissue biopsies and examine digitised genome sequencing data.

With the extensive adoption of healthcare big data at every step of the healthcare and life science process, the volume and variety of big data has become mind-boggling, and continues to expand relentlessly. To understand the scale of this data expansion, think of it this way: a single CT image contains approximately 150 MB of data, a genome sequencing file is about 750 MB, and a standard pathology image is much larger, close to 5 GB. Multiply these data volumes by a population’s size and life expectancy, and a single community hospital or a mid-size pharmaceutical company can generate and accumulate terabytes, and oftentimes petabytes, of structured and unstructured data.
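To make that arithmetic concrete, the back-of-the-envelope sketch below (in Python) estimates annual imaging storage growth for a hypothetical mid-size hospital. The per-study sizes follow the figures above; the annual study counts are illustrative assumptions, not data from the article.

    # Back-of-the-envelope estimate of annual imaging data growth for a
    # hypothetical mid-size hospital. Per-study sizes follow the figures
    # cited above; the annual study counts are illustrative assumptions.
    MB = 1
    GB = 1024 * MB
    TB = 1024 * GB

    # (studies per year, size per study)
    workloads = {
        "CT scans":         (20_000, 150 * MB),
        "Genome sequences": (1_000,  750 * MB),
        "Pathology images": (5_000,    5 * GB),
    }

    total_mb = 0
    for name, (count, size) in workloads.items():
        total_mb += count * size
        print(f"{name}: {count * size / TB:.1f} TB/year")
    print(f"Total: {total_mb / TB:.1f} TB/year")  # roughly 28 TB/year

Even with these modest assumed volumes, a single facility adds tens of terabytes per year, which compounds into petabytes over the retention periods discussed later.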

Two sides of the same coin

Naturally, for many HLS organisations, trying to control the spiraling costs, complexities and risks associated with big data has become a critical issue. From another perspective, however, healthcare big data can produce benefits that far exceed the costs required to manage it, such as unlocking new sources of medical value, improving the accuracy and speed of diagnosis, forecasting disease and health patterns, and gaining fresh insights into life science innovations. The McKinsey Global Institute (MGI), the research arm of US management consulting firm McKinsey & Company, estimated that if the United States’ healthcare industry were to effectively use its growing volume of big data to drive efficiency and quality, it could create more than US$300 billion in new value every year. Moreover, in the developed economies of Europe, government administrators could use big data to save more than €100 billion (US$149 billion) in operational improvements alone.

Every coin has two sides. It’s true that healthcare big data creates many challenges regarding data management, storage, distribution and protection. For the majority of successful organisations, however, the use of big data has also become a key strategy for underpinning productivity, improving patient care, enhancing competitiveness, and accelerating growth and innovation. So, how do we balance these two sides, or even create a situation where the benefits exceed the costs?

The answer lies in Data Economics – namely, making the process of extracting value from data cost less than the resulting value. If we can effectively minimise the costs of storing, processing and protecting data, and then harness sophisticated techniques to transform that data into actionable information that supports clinical and business growth, we achieve the highest Data Economics.
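Put simply, Data Economics is achieved when the value extracted from data exceeds the total cost of storing, processing and protecting it. The short Python sketch below makes that comparison explicit; every monetary figure in it is a hypothetical illustration, not a number from the article.

    # Data Economics in its simplest form: the value extracted must
    # exceed the total cost of storing, processing and protecting data.
    # All monetary figures below are hypothetical annual USD amounts.
    def data_economics(storage_cost, processing_cost, protection_cost,
                       value_extracted):
        """Return net value and whether the data 'pays for itself'."""
        total_cost = storage_cost + processing_cost + protection_cost
        net_value = value_extracted - total_cost
        return net_value, net_value > 0

    net, positive = data_economics(
        storage_cost=120_000,
        processing_cost=45_000,
        protection_cost=35_000,
        value_extracted=350_000,  # e.g. faster diagnoses, avoided re-scans
    )
    print(f"Net value: ${net:,} ({'positive' if positive else 'negative'})")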

Healthcare big data volume and complexity

The problem, however, is that effectively minimising the costs of big data storage is a fundamental challenge for business and IT leaders, especially for content-driven HLS businesses. This is because, apart from the proliferation of data volumes and modalities, healthcare data is also subject to longer and longer retention periods. A patient’s medical records may need to be stored for 70 or 80 years, perhaps even longer. In many cases, medical records must also be preserved in their original format on a permanent basis in order to meet compliance and legislative requirements. Similarly, life science organisations are increasingly opting to retain and maintain decades’ worth of data, in the hope that it could be a source of new research.
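These retention requirements translate naturally into policy logic. The minimal Python sketch below shows how such rules might be encoded; the record types and retention periods are assumptions for illustration, since actual periods are set by local legislation and institutional policy.

    # A minimal retention-policy check. The record types and retention
    # periods are assumptions for illustration; real periods are set by
    # local legislation and institutional policy.
    from datetime import date

    RETENTION_YEARS = {
        "medical_record": 80,     # patient records: decades, per the text
        "pathology_image": None,  # None = permanent retention
        "research_dataset": 25,
    }

    def may_delete(record_type: str, created: date, today: date) -> bool:
        """A record may be deleted only after its retention period expires."""
        years = RETENTION_YEARS[record_type]
        if years is None:  # permanent retention for compliance
            return False
        return today >= created.replace(year=created.year + years)

    print(may_delete("medical_record", date(1960, 5, 1), date(2024, 1, 1)))  # False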

Furthermore, in today’s ever-changing environment, many HLS organisations are already struggling with strained resources, continuous business growth, and the rise of new medical technologies. This often leads to the unplanned expansion of storage architecture, resulting in disparate systems and tools, which makes management and administration even more complicated. Indeed, accelerated storage consumption and poor utilisation of storage assets, as well as the continuous demand for more floor space and higher power and cooling costs, all drive up storage TCO. As if this situation were not dire enough, for a hospital, failure to locate information can also create compliance issues and may result in legal penalties and even fines. For a research organisation, data access is at the core of innovation and competitive success. It’s no wonder that managing the costs and complexities of file-based data growth has been named as one of the top five most difficult issues facing Global 5000 companies today.

The ideal storage infrastructure for HLS organisations

To achieve the highest level of Data Economics, the key is to integrate all healthcare big data, whether structured or unstructured, to enable centralised management and better resource allocation. To consolidate big data across different hospital departments, or across disparate life science systems, and enable optimal information search and sharing, the ideal storage architecture must be an integrated system for block, file and content, with the capacity, performance and throughput to maintain operating coherency when handling, moving and accessing multiple large datasets, frequently in the range of terabytes and even petabytes. To minimise storage costs and align the system with clinical and business needs, it must also enable a level of data interoperability that supports clinical innovation. It must provide intelligent tiering to automate data placement based on frequency of access, clinical value and the actual cost of the storage. This dynamic tiering function helps further improve capacity utilisation and resource allocation, which holistically optimises the cost-efficiency of the storage resources.
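As a rough illustration of how such a tiering decision might look in code, the Python sketch below assigns data to one of three hypothetical tiers based on access recency and clinical status. The tier names and thresholds are assumptions; a production system would also weigh clinical value and per-tier cost, as described above.

    # A sketch of an intelligent tiering decision with three hypothetical
    # tiers. Thresholds and tier names are assumptions for illustration.
    def choose_tier(days_since_access: int, clinically_active: bool) -> str:
        if clinically_active or days_since_access <= 30:
            return "flash"           # hot: in active clinical use
        if days_since_access <= 365:
            return "disk"            # warm: occasional reference
        return "object_archive"     # cold: long-term, lowest cost per GB

    print(choose_tier(days_since_access=400, clinically_active=False))
    # -> object_archive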

In addition, HLS organisations require massive storage headroom and dynamic scalability to handle their unpredictable and growing volumes of data and images, as well as a high level of compliance support for enforcing policies regarding long-term data retention, integrity and protection. Crucially, a suitable storage system must be able to provide content-awareness to enable the transformation of data into actionable information. Content-awareness describes a set of capabilities that dynamically classify information and assign policies to unstructured data files, turning unstructured data into valuable intelligence so that organisations can enforce best practices, make decisions faster, and protect sensitive company data.
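To illustrate the idea, the Python sketch below classifies a file from its metadata (here, just the filename) and attaches a handling policy. The keywords, policy fields and tier names are illustrative assumptions, not a description of any particular product’s feature set.

    # A sketch of content-aware policy assignment: classify a file from
    # its metadata, then attach a handling policy. Keywords and policy
    # values are illustrative assumptions.
    POLICIES = {
        "phi":      {"encrypt": True,  "retention_years": 80, "tier": "disk"},
        "research": {"encrypt": False, "retention_years": 25, "tier": "object_archive"},
        "other":    {"encrypt": False, "retention_years": 7,  "tier": "object_archive"},
    }

    def classify(filename: str) -> str:
        name = filename.lower()
        if any(k in name for k in ("patient", "mri", "ct", "biopsy")):
            return "phi"  # protected health information
        if any(k in name for k in ("genome", "assay", "trial")):
            return "research"
        return "other"

    for f in ("patient_0421_ct.dcm", "genome_run_17.bam", "minutes.txt"):
        label = classify(f)
        print(f, "->", label, POLICIES[label])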

1. Source: Big data: The next frontier for innovation, competition, and productivity, McKinsey Global Institute, June 2011

2. Source: http://www.busmanagement.com/article/The-Challenge-of-File-based-Data-Storage/

Author Bio

Yogesh Sawant is responsible for driving the Hitachi Partner program and enabling the partner ecosystem in India. With over 20 years’ experience in the industry, Sawant has been with Hitachi Data Systems since September 2011, prior to which he worked for organisations such as Oracle India, Sun Microsystems, Dell Computer India, Hewlett Packard, Compaq and Digital Equipment India, where his contributions were highly respected. He is an engineering graduate from the University of Pune, specialising in industrial electronics.