GCC governments’ spending on healthcare is rising at an astronomical rate. From a region-wide US$2.4 billion in 2016, it rose to more than US$30 billion in 2021 and is projected to surpass US$104 billion this year, according to a report from the UAE Ministry of Economy. Much of this surge can be attributed to the acceleration of digital transformation during the COVID-19 crisis. And there are clear signals that technology will remain a major part of healthcare investment in the years ahead. One estimate puts the share of artificial intelligence (AI) alone in GCC hospital spending at 30 per cent over the coming decade.
AI has a range of applications in healthcare, none of which involve robot doctors or android nurses. Connected medical devices, wearables, automation of administration, and cognitive-machine support in point-of-care decision systems are just a few examples. But AI, despite its many uses in healthcare, still faces barriers to widespread adoption. Here are three of them:
Lack of trust
While distrust of AI is a barrier to adoption across industries, it is easy to see why it weighs even more heavily in health services. The concern is not without foundation. Machine-learning models are only as good as the data on which they are trained: warped data leads to warped models and undesirable outcomes. In healthcare, this can lead not only to physical harm but also to inequities in care for underrepresented or underserved groups.
This is where employee training is critical. Any organisation that adopts AI at scale must undergo a culture change in which stakeholders ensure that designers, implementers, testers, and users of solutions are aware of the potential missteps. Trust can also be enhanced through transparency, where an AI system exposes its logic chain for all to see. Many of today’s AI toolboxes come with “explainability” features as well as anti-bias measures, empowering well-trained users to build trust in the solution and allow it to add value.
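To make the idea of “explainability” concrete, here is a minimal sketch, not tied to any specific vendor toolbox, of one common technique: permutation importance, which reports how much a model’s accuracy depends on each input feature. The feature names and synthetic data are assumptions for illustration only.

```python
# Hypothetical sketch: inspecting which features drive a patient-risk model,
# one simple form of the "explainability" described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for patient features (names are illustrative only).
feature_names = ["age", "heart_rate", "bp_systolic", "lab_a", "lab_b", "lab_c"]
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much test accuracy drops when each feature
# is randomly shuffled, breaking its relationship with the outcome.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

A clinician reviewing this output can at least check that the model leans on plausible signals rather than artefacts of the training data, which is one small step towards the trust the paragraph above describes.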
Use case selection
Deciding where to begin with AI, especially when trust is a concern, can be tricky. The temptation is to stick with guaranteed quick wins, but that may not suit every organisation. An ambitious project that is well designed and urgently needed can be just as worthy of consideration. Projects can bear fruit as long as decision-makers and project managers balance the standard metrics of AI use cases against the KPIs unique to value-based care, where ROI is notoriously difficult to pin down.
Some obvious healthcare use cases include staffing and resource planning, AI-assisted coding in medical claims billing, patient risk and care delivery models, and medical-imaging classification to improve clinical decision support systems. The main focus areas should be process efficiency, productivity, risk management, value-based care, and decision support.
The main pointers to keep in mind are the “who”, “how”, “why”, “what”, “where”, and “when” questions. Who will benefit? How will the project add value, and how can it be measured? Why AI? What are the upsides of success and the consequences of failure? Where is the data, and does it even exist? And when can we expect to see a prototype, a live system, and tangible results?
Data availability and confidentiality
The healthcare industry is not short on data, but much of it is unstructured, which presents additional challenges. And the global device explosion guarantees more and more health-related data will be on hand. This data needs to be stored and shaped into usable formats. Then, when AI algorithms are finally let loose on it, regulatory constraints may reduce its quality and throw up yet more barriers to the delivery of actionable insights. Meanwhile, cybersecurity professionals will need to come up with ways to protect the data and the systems that use it.
Privacy standards such as GDPR and UAE Federal Law No 2 of 2019, which governs the gathering and use of health data, restrict the individualisation of data. And using anonymised data can give incomplete or inaccurate results. However, with the right AI platform, compliance can be made easier through visibility of data lineage, real-time restrictions on use, and anonymisation and pseudonymisation where appropriate.
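As a rough illustration of pseudonymisation, a platform might replace raw patient identifiers with a keyed hash, so that records can still be linked across datasets while the key that could reverse the mapping stays with the data controller. The identifiers, field names, and key handling below are assumptions for the sketch, not a compliance recipe.

```python
# Hypothetical sketch: pseudonymising a patient identifier with a keyed
# hash (HMAC-SHA256). The same input always maps to the same pseudonym,
# so joins across datasets still work, but the raw ID is not exposed.
import hashlib
import hmac

SECRET_KEY = b"keep-this-in-a-vault"  # assumption: held by the data controller


def pseudonymise(patient_id: str) -> str:
    digest = hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]  # shortened for readability


record = {"patient_id": "MRN-004217", "age_band": "40-49", "diagnosis": "E11"}
safe_record = {**record, "patient_id": pseudonymise(record["patient_id"])}
print(safe_record)
```

Because the mapping is deterministic under the key, an analyst can count readmissions per pseudonym without ever seeing a real medical record number; rotating or destroying the key severs the link entirely.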
However, even in the ideal case where plenty of data is available, success is not guaranteed. Computer vision, for example, requires medical-imaging data that is adequately labelled, but this labelling can be resource-intensive and prohibitively costly. That would normally rule out supervised learning, which is the more direct path to value. Fortunately, some modern AI platforms include collaborative managed labelling systems that allow teams to efficiently generate large volumes of high-quality labelled data.
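One basic quality-control step such a collaborative labelling workflow might apply is consolidating labels from several annotators by majority vote, flagging low-agreement items for expert review. The file names, labels, and three-annotator setup below are illustrative assumptions, not a description of any particular platform.

```python
# Hypothetical sketch: merging labels from multiple annotators by majority
# vote, one simple consolidation step in a managed labelling workflow.
from collections import Counter

# Each image has labels from three annotators (assumed example data).
annotations = {
    "scan_001.png": ["tumour", "tumour", "benign"],
    "scan_002.png": ["benign", "benign", "benign"],
    "scan_003.png": ["tumour", "benign", "tumour"],
}


def consolidate(labels):
    """Return the majority label and the fraction of annotators agreeing."""
    label, votes = Counter(labels).most_common(1)[0]
    return label, votes / len(labels)


gold = {image: consolidate(labels) for image, labels in annotations.items()}
for image, (label, agreement) in gold.items():
    flag = "  <- needs expert review" if agreement < 1.0 else ""
    print(f"{image}: {label} (agreement {agreement:.0%}){flag}")
```

Tracking agreement per item lets a team spend scarce specialist time only on the ambiguous cases, which is where most of the labelling cost mentioned above tends to accumulate.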
Governance to the rescue
Since most of the barriers to adoption are rooted in people and processes, governance can help to overcome them, allowing AI to take its place in the technology suite of healthcare organisations across the region. It should be noted that AI governance is broader than data governance, enforcing standardised rules, processes, and requirements that shape design, development, and deployment. Trust is a natural byproduct of good governance, as are privacy and the shrewd selection of use cases.
Perhaps the most widely understood thing about AI today is that organisations cannot hope to blindly deploy it as a catch-all for operational gaps. Most decision-makers who evaluate AI are aware that their organisation must change its culture to reap the benefits. But make no mistake: the benefits are there. And in a region where healthcare growth is approaching stratospheric proportions, AI can be a vital cog in the machinery of progress.
Maged Mahmoud is the Public Sector Sales Lead at Dataiku.