Artificial intelligence (AI) is increasingly being integrated into medical devices. That innovation could bring ground-breaking changes in patient care. However, it also raises significant cybersecurity concerns.
One of the most promising applications of AI in medical devices is in the products patients use to manage chronic diseases.
For example, there are systems for patients with diabetes that can warn them about changes in blood sugar levels. Some can also detect how an individual's body reacts to insulin and adjust accordingly.
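As a simplified illustration of the kind of rule such a device might apply, here is a minimal sketch; the thresholds and warning logic are hypothetical and not taken from any real product:

```python
# Hypothetical sketch of a blood-glucose alerting rule. The thresholds
# (in mg/dL) and the rapid-change cutoff are illustrative only.
def glucose_alert(readings_mg_dl, low=70, high=180, rapid_change=30):
    """Return (index, warning) pairs for a series of glucose readings."""
    warnings = []
    for i, value in enumerate(readings_mg_dl):
        if value < low:
            warnings.append((i, "low glucose"))
        elif value > high:
            warnings.append((i, "high glucose"))
        # Flag a large jump between consecutive readings.
        if i > 0 and abs(value - readings_mg_dl[i - 1]) >= rapid_change:
            warnings.append((i, "rapid change"))
    return warnings

print(glucose_alert([110, 115, 160, 195, 60]))
```

A real system would of course use far more sophisticated, personalized models, but even this toy version shows why correctness matters: a wrong threshold or a silently altered rule could suppress a warning a patient depends on.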
However, how these AI systems reach their conclusions is often opaque, and even the technology itself cannot explain its reasoning. That raises the question of how anyone can verify a system's accuracy and confirm it is working as intended. Relatedly, if people are in the dark about how an AI-driven medical device works, would anyone notice that something is functioning incorrectly in a way that could kill a patient?
For example, when a consumer deals with a hacked computer, they may see a warning message from the cybercriminals responsible or notice they don't have access to certain files.
However, a compromised medical device may not give such apparent signs. Patients may need to schedule more office visits so providers can apply software updates, run device checks and otherwise verify their AI product is working correctly.
In the spring of 2019, the U.S. Food and Drug Administration (FDA) proposed a new regulatory framework for medical devices that use AI or machine learning. That announcement was only the start of the discussion: such products will clearly need to satisfy different standards than conventional devices, but no one is yet sure exactly what those standards will require.
However, one thing the FDA indicated is that any approved devices would need constant monitoring to check for changes in their algorithms. Many AI products get smarter through exposure to more data over time, such as those that learn to spot cancerous tumors. In essence, that means the algorithms are rarely or never static, and the required checks may prove challenging.
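One simple form such monitoring could take, sketched here as a hypothetical example rather than any prescribed regulatory method, is recording a cryptographic fingerprint of the deployed model and flagging whenever it drifts from the last reviewed baseline:

```python
# Hypothetical sketch: detect when a deployed model's parameters have
# changed by comparing cryptographic fingerprints. The byte strings
# stand in for serialized model weights and are illustrative only.
import hashlib

def model_fingerprint(model_bytes):
    """Return a SHA-256 hex digest of serialized model parameters."""
    return hashlib.sha256(model_bytes).hexdigest()

baseline = model_fingerprint(b"weights-v1")
current = model_fingerprint(b"weights-v1-after-retraining")

if current != baseline:
    print("Model changed since last approved baseline; review required.")
```

The hard part, as the article notes, is that for continuously learning systems the fingerprint would change constantly by design, so regulators and manufacturers would need to decide which changes are routine and which demand a fresh review.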
It's also likely that low-risk devices using AI would have an easier approval process than ones that pose more potential dangers to patients.
For example, the StethoMe is a stethoscope tool that attaches to a smartphone and allows people to record their heart and lung sounds. It uses AI, but because a doctor makes the final judgment on patient care, it is arguably lower risk than other options.
Some of the guidelines for approval may require manufacturers to show they have addressed all known vulnerabilities, too. However, proving that could be difficult unless an enterprise takes a proactive approach from the start.
A company called Nova Leah lets medical device manufacturers screen their products for vulnerabilities through a specialized interface. Its tools align with regulatory requirements and help manufacturers do what they must to improve their chances of approval.
As most people probably know, an internet connection can act as a window that lets hackers into a system. Notably, some medical devices that use AI do not need active internet connections to function.
For example, GE Healthcare recently received FDA approval for an AI chest X-ray tool. It can detect a collapsed lung in only 15 minutes instead of hours. Additionally, the product functions outside of the cloud and does not need an internet connection. If more AI-based medical devices work this way, the cybersecurity risks may decrease.
One of the challenges of connected medical devices is keeping them all updated. Even when vendors release patches, it can become extraordinarily time-consuming for IT specialists to apply all of them to the respective devices. More problems tend to crop up when hospitals run legacy systems alongside newer equipment, making it difficult to maintain a consistently high level of cybersecurity across the organization.
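At its simplest, tracking that patching burden amounts to comparing each device's installed firmware against the latest vendor release. The following is a minimal sketch; the device models, IDs and version numbers are invented for illustration:

```python
# Hypothetical sketch: flag devices running outdated firmware.
# All device names and version data below are illustrative only.
latest = {"infusion-pump": "2.4", "ct-scanner": "5.1"}

inventory = [
    {"id": "pump-01", "model": "infusion-pump", "firmware": "2.4"},
    {"id": "pump-02", "model": "infusion-pump", "firmware": "2.1"},
    {"id": "ct-01", "model": "ct-scanner", "firmware": "5.0"},
]

# Any device whose firmware does not match the vendor's latest release
# needs attention from the IT team.
outdated = [d["id"] for d in inventory if d["firmware"] != latest[d["model"]]]
print(outdated)
```

Real hospital fleets involve thousands of devices from many vendors, some of which no longer receive patches at all, which is why the problem scales so badly in practice.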
A market analysis from Stratistics MRC projects the health care AI market to grow at a 39.7% compound annual growth rate (CAGR) from 2017 to 2026. That dramatic increase means medical professionals and organizations have to remain vigilant about how to keep such devices secure, whether or not they need to stay connected to the internet to work.
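To put that rate in perspective, a quick calculation shows what it implies (assuming nine annual growth periods between 2017 and 2026):

```python
# What a 39.7% compound annual growth rate implies for total growth.
# Assumption: nine annual compounding periods from 2017 to 2026.
cagr = 0.397
years = 2026 - 2017  # 9 growth periods
multiplier = (1 + cagr) ** years
print(round(multiplier, 1))  # the market grows roughly 20-fold
```

In other words, the forecast implies the market ends the period around twenty times its starting size, which underscores how quickly the attack surface could expand.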
AI-based medical devices have taken technological advances into new territory. That trend could stimulate progress for patients and providers. However, it also requires ongoing scrutiny of cybersecurity matters.