Security threats are, and always have been, a major concern for healthcare organisations because of the value and vulnerability of the clinical data being recorded and distributed. The value of the data comes from the fact that it directly affects our ability to treat patients safely. Because of its content and historical depth it can be very large, so it takes a long time to rebuild, and it contains more than just clinical data: it also includes personal, financial, and demographic data that can be used for wider identity theft and payment fraud. It is also immutable and persistent: in the event of a data breach you can change your email address, your credit cards, your passwords, PINs and account numbers, but you cannot change your mother’s maiden name. The vulnerability comes from a revolution in healthcare: the interconnection of systems, cloud computing, the Internet of Healthcare Things (IoHT) and mobile devices, and changes in clinicians’ working practices such as remote monitoring, telemedicine, and working from home. This revolution in big data, AI and care has not always been matched by the security awareness, policies, practices, and budgets of healthcare organisations.
AI itself provides an open door to bad guys who wish to exploit this vulnerability, even though in most countries healthcare data is protected by data-protection or privacy laws, and any breach or failure to guard it properly can have legal and financial consequences. Most custodians of patient data are not fully aware of the changes that the use of AI imposes. As AI is used more and more in patient care, it needs access to larger and larger data sets from multiple sources, not just the electronic medical record (EMR), and not all of these sources adhere to the same rigour of data protection. Indeed, because most AI platforms must consolidate large amounts of data and need extensive computing power, patient data and other information are increasingly likely to reside outside the relatively isolated healthcare data centre, probably on third-party systems. This should raise concerns, because AI is mostly in the domain of technology companies that may be innovative but, as their ideas move from innovation to production, may not be fully aware of the security requirements of health data.
AI in healthcare seeks to combine physical, digital and biotechnology data to create services we can only dream of today. The rise of wearables and the availability of geolocation data through mobile phones make it easy to re-identify people from seemingly anonymised data and to understand more of their behaviours. This is important information for population health, infectious disease control, behavioural health and chronic disease management, but the key is not to exploit personal data.
In addition, in the globalised worlds of data clouds, AI, and healthcare, the transfer of data across man-made boundaries such as states and countries is both inevitable and necessary. The world is only just coming to terms with this, but the way forward is being led by the UAE with the Fourth Industrial Revolution (4IR) Protocol. This visionary protocol, signed at the World Economic Forum in Davos in 2018, is a global roadmap that seeks to ensure the well-being of the community. Adopted by the UAE government, the protocol seeks to establish an integrated and secure data ecosystem to expedite the implementation of 4IR technologies and deliver unprecedented services that can transform aspects of people's everyday lives. Even when this protocol becomes a reality and is widely adopted, healthcare organisations must still protect themselves against three major types of threat.
The first is the large-scale attack that harvests data on as many patients as possible, for onward sale or fraudulent enterprise. The second comes from headline grabbers who want to attack high-profile brands and famous facilities, probably for no financial gain. The third is the targeted attack on a single patient, say a celebrity or a high-net-worth individual, for the purpose of selling the information to others such as gossip outlets, or for blackmail.
Traditional defensive methods are no longer sufficient to protect our patients’ data from the bad guys, as these attacks are becoming more and more sophisticated. There has been an increasing number of attacks using social engineering techniques (e.g. phishing) that can overcome “traditional” defences such as email filters, anti-virus software, and rule- and signature-based detection systems. But before looking at what new AI-based tools can do for organisations, I would like to suggest that these are only more sophisticated tools, and without the basics in place they will fail to deliver on their promises.
Good systems management is important. The focus should be on keeping not just the central servers but also connected devices up to date with security patches. To support this, assessment of suppliers’ security policies and procedures should be a key part of your procurement department’s process for selecting devices that may be attached to the system.
Information governance is key: defining critical data, knowing how the data is managed both in transit and at rest, and having defined, usable policies and processes are far more important than adding more technology to a fractured system. Similarly, education and awareness are necessary so that everyone on the system is regularly reminded of these policies, not just on induction day. As threats evolve, staff should be kept aware of the people side of security through ongoing campaigns such as anti-phishing behaviour management. However, all of this is just guesswork if you do not know how effective it is; regular penetration testing of the systems is needed to ensure that your defences are up to date and effective.
Doing all of this is hampered by a shortage of security experts to help ensure that custody of your patients’ data remains effective as time moves on. This is where AI and machine learning can help healthcare cyber security. However, as mentioned previously, merely purchasing new tools does not improve defences; they need to be deployed, maintained and monitored to provide an effective defence.
Security Information and Event Management (SIEM) products and services provide real-time analysis of the security alerts generated by network hardware and applications; they are also used to log security data and generate reports for compliance purposes. By combining this real-time data gathering with threat intelligence, extending the retention of the data over time, and applying the enhanced analytics that machine learning and AI techniques bring, organisations can improve round-the-clock detection of attacks with less specialised staff. By looking at past behaviour, it becomes possible to analyse user and device activity and detect anything out of step with expected patterns far more quickly and accurately than human observers can. For example, if we unexpectedly start getting system access or requests for EMR data from unknown sources, or many requests for one patient’s data from multiple sources, this should raise a warning flag.
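To make the idea concrete, here is a minimal sketch in Python of the kind of behavioural baselining described above. It is not any particular SIEM product's algorithm; it simply assumes we have historical access logs summarised as per-user request counts, builds each user's own baseline, and flags volumes that deviate sharply from it (or come from a source never seen before).

```python
from collections import defaultdict
from statistics import mean, stdev

def build_baseline(events):
    """events: iterable of (user, hourly_request_count) pairs from
    historical access logs. Returns each user's (mean, std dev)."""
    counts = defaultdict(list)
    for user, n in events:
        counts[user].append(n)
    # Need at least two observations to estimate a spread.
    return {u: (mean(v), stdev(v)) for u, v in counts.items() if len(v) >= 2}

def is_anomalous(baseline, user, count, threshold=3.0):
    """Flag a request volume more than `threshold` standard deviations
    from the user's own historical mean, or any unknown source."""
    if user not in baseline:
        return True  # unseen source: raise a warning flag
    mu, sigma = baseline[user]
    if sigma == 0:
        return count != mu
    return abs(count - mu) / sigma > threshold

# Example: a clinician who normally makes ~10 EMR requests an hour
# suddenly makes 80, or an unknown account appears at all.
history = [("alice", n) for n in (10, 12, 11, 9, 10, 12)]
baseline = build_baseline(history)
print(is_anomalous(baseline, "alice", 80))       # sudden bulk access
print(is_anomalous(baseline, "alice", 11))       # normal behaviour
print(is_anomalous(baseline, "mallory", 1))      # unknown source
```

Real deployments would of course use far richer features (time of day, location, record types touched) and learned models rather than a simple z-score, but the principle of comparing activity against an expected per-user pattern is the same.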
This use of AI in healthcare cyber security is becoming more and more important for the protection of on-site systems, especially as healthcare networks expand and data and processing are pushed out into the “cloud”. By the time we have fixed one vulnerability the bad guys have moved on and are attacking from a different direction. If we rely only on techniques that respond to known attacks we will always be one step behind. By using AI we can be more proactive and start to be one step ahead, detecting abnormal behaviours as they happen. It then becomes a little like personal wellness, where we identify people at risk of chronic disease and take preventative measures before the condition becomes serious: we can identify and stop attacks before they snowball into a larger problem.
AI-based tools are getting better all the time and will continue to evolve and learn over the next few years, so that attacks will be anticipated more quickly and the responses will be stronger. The healthcare industry has been slow to recognise how valuable its data is, and we are playing catch-up with other industries; but that also gives us the opportunity to learn from them and close the gap on the bad guys quickly. AI and machine learning will offer healthcare organisations a way of securing their patients’ data as healthcare evolves, without relying on scarce, high-cost skills, but they will only meet their promise if the basics of information governance, awareness and education are in place first.
The application of AI in healthcare is a double-edged sword: the openness and data sharing it needs to be effective expose us to new vulnerabilities, while the use of AI in cyber security tools improves our ability to identify and respond to threats. The current burden falls on individual organisations, leaving weaknesses in the system as a whole. As big data, AI and machine learning become more collaborative and international, we will need initiatives such as the UAE’s 4IR protocol to realise their potential safely and to ensure the Arab world is at the forefront of this change.