One of the most challenging issues in securing medical devices is patching them. While the FDA is on record stating that “security patches do not require FDA approval,” device manufacturers often argue that the FDA validation process limits their ability to patch. Ultimately, both sides are correct. Unfortunately, this leaves hospitals and care delivery organizations holding the bag when it comes to vulnerable devices on their networks.
The reasons for this are multi-faceted (we cover them in some detail in our recent whitepaper on Securing Clinical Technologies), but they come down to carefully phrased wording in the regulation that allows extremely conservative medical device manufacturers to claim that the FDA limits their ability to apply generic software updates (e.g., from Microsoft or whichever vendor supplies the software) to FDA-regulated devices.
The FDA first articulated its stance on device patching in 2005, stating that security patches do not require re-validation unless “a change or modification could significantly affect the safety or effectiveness of the medical device.” This statement was strengthened in the 2018 postmarket guidance:
“For cybersecurity routine updates and patches, the FDA will, typically, not need to conduct premarket review to clear or approve the medical device software changes. . . . Premarket notification (510(k)) would be required for countermeasures that would be considered significant changes or modifications to a device’s design, components, method of manufacture or intended use (See 21 CFR 807.81(a)(3)).”
While this simple statement appears to say that hospitals should be allowed to patch medical devices, it is the second half of the guidance, the exception for “significant changes or modifications,” that is the crux of the challenge for device manufacturers.
While the FDA says that patching should be allowed, liability ultimately rests with the manufacturer to ensure that patches don’t affect the way a device functions. This leaves medical device manufacturers (who are notoriously conservative where liability is concerned) with the challenge of determining whether a given patch will change their system’s functionality. Device manufacturers believe that they own the risk that an unvalidated patch will cause patient harm, and that the FDA will sanction them if a patch goes wrong. For anyone who has ever applied a software update (e.g., any monthly Windows update) and had it negatively impact software on their computer, this risk should be easy to understand: without testing those patches and subjecting them to some amount of validation, manufacturers fear they are taking on the risk that a patch applied at a hospital will cause their device to malfunction.
When the FDA asserts that security patches don’t require re-validation unless they are “considered significant changes or modifications to a device’s design,” it is correct. In practice, however, most vendors draw the validation boundary such that the device’s operating system is part of the device’s design, so a patch that alters a critical component (e.g., network drivers, Bluetooth or wireless drivers, core parts of the kernel) counts as a change to that design. Consequently, many security patches require at least a full retest of the system, and some do require re-validation. This lets device manufacturers point the finger back at the FDA’s refusal to allow them to patch without revalidation, ultimately leaving the healthcare delivery organization responsible for unpatched, and unpatchable, vulnerabilities that it must find another way to secure.