Last week, the U.S. Food and Drug Administration threatened legal action against Abbott Laboratories after researchers demonstrated that some of its heart-monitoring devices were insecure and could be hacked.
It's not a new problem. The white hat hacker Barnaby Jack showed in 2011 how insulin pumps could be hijacked remotely, and the lack of adequate cybersecurity regulations for medical devices has been well-documented.
But none of this has gotten Karen Sandler any closer to getting the source code for her pacemaker.
Sandler, an attorney and executive director of the Software Freedom Conservancy, has had an implantable cardioverter defibrillator or ICD — a defibrillator/pacemaker combo — inside her body since 2008. It makes sure she doesn’t die suddenly by maintaining something close to a natural heart rhythm. Naturally, Sandler wants to understand how it keeps her heart beating, so she spent years tracking down her ICD’s source code before actually getting one.
She approached the three major ICD manufacturers — Medtronic, St. Jude Medical, and Boston Scientific — to see the source code for the devices she might get implanted. “I asked each of them,” Sandler said in a talk at OSCON 2011, “‘Can I see the source code for these devices? I need one. I’m going to put it in my body. Can I look at it? Send me the source code, and I’ll take a look, and I’ll feel more comfortable connecting it to my heart if I know that I can review it and see how it works.’”
They all said no.
“Can I see the source code for these devices? I need one. I’m going to put it in my body. Can I look at it?”
“I mostly got a runaround,” Sandler said when I asked what the manufacturers told her: she was given “more people to call who never called me back, leaving detailed messages with no reply.” She even offered to sign a non-disclosure agreement, but no dice. “One manufacturer employee told me there was no way it was going to happen.”
She finally had the device implanted two years after being told she needed one, because every day without it compounded the risk of heart failure. Almost 10 years later, she still hasn’t gotten a real look under the hood. So much time has passed that she is ready for a new ICD, and she still has no prospects for checking out the underlying code.
The typical ICD is a battery-powered device implanted under the skin that monitors the heart's rhythm. If it detects an abnormal rhythm, it delivers an electrical shock to restore a normal one. Older ICD models interfaced with instruments via magnets, but the majority now use radio telemetry, which relies on a high-frequency wireless signal, Sandler said. This makes it much easier to get information from the devices, but it also creates a cybersecurity risk: the signal travels unencrypted over the airwaves, meaning someone who knew what they were doing could intercept or mimic it and gain control of the device. The old method was more secure simply because it was more difficult to access.
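The gap between an open, unencrypted channel and even basic authentication can be sketched in a few lines. This is a hypothetical illustration only, not any manufacturer's actual protocol: the packet format, the function names, and the use of a shared-key HMAC are all assumptions made for the sake of the example.

```python
import hmac
import hashlib
import os

# Hypothetical sketch: real ICD telemetry protocols are proprietary, and
# this packet format is invented purely for illustration.

def accept_unauthenticated(packet: bytes) -> bool:
    # An unauthenticated receiver accepts any well-formed packet, so anyone
    # who can transmit on the right frequency can issue commands.
    return packet.startswith(b"CMD:")

def sign(key: bytes, command: bytes) -> bytes:
    # Append a hex-encoded HMAC-SHA256 tag computed with a secret key
    # shared between the implant and its authorized programmer.
    tag = hmac.new(key, command, hashlib.sha256).hexdigest().encode()
    return command + b"|" + tag

def accept_authenticated(key: bytes, packet: bytes) -> bool:
    # A forged packet fails verification unless the attacker knows the key.
    command, sep, tag = packet.rpartition(b"|")
    if not sep:
        return False
    expected = hmac.new(key, command, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(tag, expected)

key = os.urandom(32)           # secret shared between device and programmer
forged = b"CMD:deliver_shock"  # attacker-crafted packet, no key needed

print(accept_unauthenticated(forged))                        # True: forgery accepted
print(accept_authenticated(key, forged + b"|" + b"0" * 64))  # False: bad tag
print(accept_authenticated(key, sign(key, forged)))          # True: genuine command
```

Even a lightweight scheme like this is not free: the extra computation and radio traffic draw on a battery that cannot be recharged or easily replaced, which is part of why adding security to implants is harder than it sounds. (The `hmac.compare_digest` call is used instead of `==` to avoid leaking information through comparison timing.)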
What are the odds that someone would try to hijack a medical device? Maybe not so remote. Hackers have been targeting hospitals with ransomware, malware that locks a computer system until the owner pays to regain access. Since lives are at risk, the hospitals have been paying up. Seizing control of someone's lifesaving device would be an even stronger incentive to pay.
There is also the risk of more targeted attacks. Sandler is a public figure in the hacker scene. She works on issues around software freedom and diversity in tech, which can be controversial. People have spread false information about her personally and have threatened her directly, she said.
“I’ve received some scary threats to my safety and well-being,” she said. “So making sure that I have a defibrillator that isn’t broadcasting — that isn’t vulnerable to attacks through the radio telemetry — became really important to me.”
Why do device makers continue to use radio telemetry given that its vulnerabilities are now widely known? Device makers declined or did not respond to questions from The Outline, but part of the answer may be the lack of precedent for such an attack. While it’s been proven time and time again that lax security on medical devices can be broken, it hasn't actually happened in the wild yet.
“I’ve received some scary threats to my safety and well-being.”
It also comes down to ease of use. Home monitors that check in with ICDs using radio telemetry can give patients peace of mind. Older devices weren’t as easy to “interrogate” — a term that boils down to grabbing information from them — as those with radio telemetry.
The range of these devices varies, but Sandler said a nurse practitioner can commonly still sense a patient's device remotely after the patient has left the room. While that may seem a bit Big Brother, there's a practical application: emergency response.
“If the EMT arrives and they know that there’s a defibrillator,” Sandler said, “they’re able to connect as they arrive instead of having to get all the way to the patient and interface with the device.” That’s time saved, which could mean lives saved.
There’s a common disconnect between doctors and the realities of how the implants work. Not every doctor is going to know how code exploits happen, or whether one device is more vulnerable than another. Instead, according to Sandler, doctors will suggest whatever device they’re most familiar with: devices they’ve successfully implanted, that haven’t caused medical complications, and that perform their function as intended. “Can someone hack this?” isn’t the first thing on a doctor’s mind. Since the doctor is the customer, device makers aren't feeling pressure to make devices more secure. That may change if a real device gets hacked, but for now the threat is still theoretical — and introducing more security isn't simple.
“A huge part of these devices is that they’re only good as long as their battery lasts,” Sandler said. “Introducing any kind of real security could potentially run down the battery. It’s not such a straight line. For example, these devices aren’t even password protected.”
There are manufacturers out there making efforts to address security issues with medical devices, however.
Last year, Johnson & Johnson alerted customers to an exploit discovered by security researcher Jay Radcliffe in the J&J Animas OneTouch Ping insulin pump. Radcliffe, a diabetic himself who once used the pump, had found a way to trick it into thinking it was receiving directions from its wireless control, which would allow hackers to inject insulin whenever they liked. He worked directly with Animas for months after alerting the company to the vulnerabilities, which then led to the public disclosure.
“This is where I have hope,” Sandler said of the incident. “This is where things I think have improved in this small way.” The company worked with Radcliffe to address the problem directly rather than mostly ignoring it. “It’s the first time that I’m aware of a device manufacturer owning up to this problem, owning up to the vulnerability and addressing it in a public way immediately,” Sandler continued. “And the sky didn’t fall!”
This is a meaningful development for Sandler because it offers a recent example in which working with security researchers actually benefited the company. The FDA also appears to have gotten more serious about cybersecurity in recent months, issuing new guidelines and performing stricter inspections. It doesn’t mean she’s going to get the source code tomorrow, or convince manufacturers to offer communication options in their devices, but they are more likely to listen when they know it’s worked in the past.
Sandler plans to keep looking for a device that does not use radio telemetry. But she has other reasons to be optimistic. The Library of Congress approved a copyright rule exemption in 2015, thanks to litigation instigated in part by Sandler, that allows for independent security research without legal repercussions from device makers. In the past, companies had been known to attack researchers who exposed flaws with their hardware or code.
“Over time, more and more of us will be cyborgs,” Sandler said. “Understanding that there are ethical components to our software, and making sure that we’ve got the appropriate oversight is important for the moment that you realize your life needs to rely on this device.”