A Conversation With Alissa Knight and John Moehrke: The Ins and Outs of FHIR
In this episode, Mike welcomes recovering hacker Alissa Knight and HL7 Standards Architect and member of the FHIR Management Group, John Moehrke. Join us as they discuss Fast Healthcare Interoperability Resources (FHIR), a standard that defines how healthcare information is exchanged between computer systems. John and Alissa both acknowledge the challenges of maintaining a standard whose implementations vary greatly.
On today’s episode of In Scope, host Mike Murray is joined by Alissa Knight and John Moehrke. John is a Standards Architect specializing in healthcare interoperability standards, with a focus on interoperability, security, and privacy. Alissa is a self-described recovering hacker of 20 years who blends hacking with a unique style of written and visual content creation for challenger brands and market leaders in cybersecurity.
As the episode begins, Mike asks Alissa about her new research, the occasion for these three to meet. A couple of years ago, Alissa was approached by a company to create a campaign addressing healthcare security. Phase one covered mobile health and phase two covered FHIR. As part of the phase-one research, she downloaded and hacked into mobile health apps. In phase two she worked with companies on FHIR (Fast Healthcare Interoperability Resources). She had to dig into both the technology and the political sides. She jokes that her research now amounts to hacking into, and playing with, FHIR.
John explains that most EHR vendors and healthcare providers have concerns about patients using third-party apps to download and receive their medical information. They understandably worry about the security of these platforms, but they cannot legally block a patient from receiving their data through them. The average patient is not a good evaluator of the security of those third-party systems, while the vendors and providers can usually see the security problems clearly. John goes on to say that FHIR has multiple releases, but that does not mean every release will be adopted by the community at large.
On every access, you have to look at the token; you can't just make an assumption. Often, in the case of privacy, the token will indicate what you should have access to, but it doesn't cover everything. One of the vulnerabilities Alissa uncovered in her research was an over-reliance by many developers on filtering data out at the client level. What developers need to understand is that if someone has an API client, they don't have to use the app the developer built; they have other options. If you're doing filtering at the client level, you have a problem, because you cannot mandate what a client accesses your service with. The most common vulnerability Alissa is finding in her research is Broken Object Level Authorization.
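The point about looking at the token on every access can be sketched in a few lines. This is a hypothetical Python sketch, not any real server's code; the claim names and the scope string are illustrative.

```python
import time

def validate_token(token, now=None):
    """Check a decoded access token on EVERY request.

    Never cache the verdict from ten transactions ago: a token that
    was valid then may have expired (or been revoked) since.
    """
    now = time.time() if now is None else now
    if token.get("exp", 0) <= now:                       # reject expired tokens
        return False
    if "patient/*.read" not in token.get("scopes", []):  # illustrative scope
        return False
    return True

# The same token object can be valid on one request and invalid on a later one:
tok = {"exp": 1_000_000, "scopes": ["patient/*.read"]}
print(validate_token(tok, now=999_000))    # within its lifetime: True
print(validate_token(tok, now=1_000_001))  # expired: False, must be rejected
```

The key design point is that validation is a per-request function call, not a one-time gate at session start.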
As the episode ends, Mike asks Alissa and John about risk management, and they both agree security should be a continuous cycle of plan, do, check, act. You’re never done with it. Alissa also adds you should hack your own stuff, don’t wait for someone else to do it.
– Alissa introduces herself to the podcast.
– John introduces himself to the podcast.
– The conversation turns to ransomware in healthcare, with Mike first wanting to know how it works.
– Alissa shares her recent research.
– John explains why healthcare providers are nervous about third party medical apps.
– Alissa and John discuss the vulnerabilities of these medical apps.
– Mike asks John and Alissa about risk management.
0:00:02.7 Speaker 1: Welcome to In Scope, The Healthcare Security Podcast. Each episode, we bring you interviews, technical tips and a unique point of view on the challenges facing the ever-changing healthcare ecosystem. Here’s your host, Mike Murray.
0:00:20.6 Mike Murray: Hello and welcome to this week’s episode of In Scope, The Healthcare Security Podcast. This is an episode I have been planning for a while. Actually, when Alissa Knight was on the first time and we heard a little bit of a foreshadow of the work that she was going to be doing on FHIR, I had this thought: “I know a guy.” When I was at GE, there was a guy there, and whenever I had a question about any healthcare standard anywhere, I would call John. And I thought, “Man, wouldn’t it be cool if we could get Alissa, when that research is published, and John, who’s part of the group that has been involved in developing a lot of these standards, together, so that we could get the history and the present of all these security things that are happening.” And we finally managed to pull that off. So, rather than introduce them myself, I’m going to have each of them give a quick bio on themselves. But Alissa, maybe as the repeat guest, do you wanna introduce yourself first?
0:01:21.1 Alissa Knight: Yeah, sure, Mike. I was laughing; right when you were talking about planning this in your head, this whole time I was thinking, Pinky and the Brain. And it’s like cute devil horns coming out of your… But I’m really excited to be here. It’s funny, I’m usually the big deal in episodes, and, for the first time, I’m actually nothing compared to John, so I feel weird sort of introducing myself in his company, but he’s definitely a huge deal, and I’m really happy to be here. So, thanks for the introduction, Mike. I’m a recovering hacker of 20 years, recovering perfectionist, recovering entrepreneur. So I pretty much have been doing a lot over the last couple of decades. For those of you who don’t know me, and if you don’t, shame on you. Why don’t you know me?
0:02:10.0 MM: Yeah, they missed the last episode, is that what that tells me?
0:02:12.4 AK: Yeah, they did, that’s what happened. Published author on hacking connected cars, hacker of embedded systems, hacker of APIs; started hacking APIs back in… it was 2018: hacking financial services APIs, hacking healthcare APIs, financial services and fintech APIs, hacking automobiles through their APIs, taking remote control of federal and state law enforcement vehicles through their APIs. So, really a big believer that we’re in an API-first world today, and we need to do better at securing them. So yeah, that’s me.
0:02:45.4 MM: And John, how about you?
0:02:47.1 John Moehrke: So yeah, I don’t deserve the accolades, but John Moehrke, and I of course agree. I’ve been involved in the standards for healthcare since around 1999, when, as part of a new acquisition, from Marquette Electronics into the big GE, all of a sudden HIPAA became a, “Wait, what are we gonna do about HIPAA?” And I’m like, “Well, I know a little bit about that from a couple of years ago.” So I ended up getting tagged with security and privacy, and became part of the Interoperability Center of Excellence. So I am a co-chair within HL7 of the Security workgroup, and have been for probably getting upwards of a decade, but I’m also a co-chair over in IHE, of the IT Infrastructure committee. I have been a co-chair within ISO and ASTM and a bunch of others. So I’ve worked through a lot of these standards, and to me, the standards are an important part of the discussion, but they’re not all of the discussion, and I think that’s where Alissa and I overlap, or work together, so to speak. Because one of the things I start all of my tutorials with on security and privacy, and there’s one coming up later in January, I always start off with…
0:04:24.7 JM: This is all about risk management, and, the interoperability standards can only tell you what you should do to enable interoperability, how you should get the right information to the right people, it doesn’t explain all of the ways you have to make sure that Alissa does not get access to the data. And, every single security standard always speaks about risk management, always speaks about the blocking and tackling that tends to be the technical failures on top of all of the people failures that none of us can really fully solve. So I’m actually pretty excited about being paired up with Alissa. To me, we’re kind of two sides of a coin, I’m there trying to explain how it should be done, what we’ve put into the standard to enable security, enable privacy, enable transparency, enable these things. But, as part of the standard, I can’t demand that it be done right. I certainly try.
0:05:33.2 AK: So, I have… I have a great joke to start our show out with: a healthcare API hacker and the godfather of security at HL7 walk into a bar… [laughter] No, just kidding… Sorry. [laughter] The…
0:05:48.0 JM: Whiskey neat.
0:05:50.7 AK: Yeah, whiskey neat. Nice. I’m the whiskey sour girl so it sounds like you and I have a lot in common. This should be a fun episode. So Mike, why don’t you kick us off?
0:05:58.9 MM: No doubt. I was gonna throw to you anyway, actually, because it’s your research paper that is the occasion for us getting together.
0:06:06.4 AK: That’s brought us together.
0:06:07.8 MM: And, for those that are aware… I’m sure most of our audience is aware that the CARES Act has been really driving FHIR adoption, especially SMART on FHIR for patient data portability, in the last couple of years, and so this protocol standard that was being adopted slowly really accelerated in the last 18 months. And so, Alissa, I think your research is incredibly timely. Maybe this is the place to start: give us kind of an overview of what you’ve been up to and what you found and what’s really occasioning our convo today.
0:06:38.4 AK: Yeah. It’s interesting, I wish I had a pen to write down all the points John brought up about implementation and should-do, and that’s a thing that I’m always trying to explain to people: when a healthcare provider or payer implements a FHIR API, it’s not like they’re walking into Best Buy and buying a shrink-wrapped FHIR API off the shelf, and it comes complete with the security. The vulnerabilities are always inherent in how it’s implemented. So, the research. A couple of years ago, I was approached by an API security vendor. I’m a content creator, and I create what I’m coining “adversarial content,” meaning that I prove the efficacy of a security product by hacking something and showing how their product would have prevented it. And I do this through storytelling, through visual and written storytelling.
0:07:28.9 AK: And so, a company called Approov reached out to me and said, “Hey, Alissa, we really like what you’re doing. We’re an API security vendor and we think there’s a real problem in healthcare.” And so, what we did is we created this campaign that was basically broken up into two phases. Phase one was mHealth (mobile health) and telehealth APIs, which are actually two different things, and phase two would be FHIR. Phase one, for lack of a better word… If you didn’t see the last episode you should definitely go watch it, it was a great episode on that phase, but it talks about the vulnerability findings in the 30 mobile health APIs and mHealth apps that I downloaded and hacked, and the systemic problem that we have in authentication versus authorization. So that was phase one, and phase two is FHIR. This is what I have been sort of neck-deep in for the last… really, the last few months, in trying to work with the different EHR companies and systems that have agreed to participate, and better understand FHIR as a technology. A lot of people have this misconception that hackers just sort of wake up and come out of the womb knowing how to do this stuff. I didn’t even know how to spell FHIR when I walked into this. I’m like, F-I-R-E?
0:08:51.8 AK: So I had a lot of learning to do. As I’m sure John will cover, I had a lot of learning to do about what HL7 is… HL7 the standard versus HL7 the organization, understanding what the ONC was and this mandate, and all of these other things. So once I did a deep dive into the, I guess we’ll call it the political, bureaucratic side of things, I really dug into the technology side. And it’s really interesting to me, FHIR in general, and how I actually, as a consumer, as a patient, didn’t know that if I went to one hospital that was using one particular EHR system, and another hospital that was using a different one, those systems couldn’t talk. So I think there’s a lot of value in what’s happening around FHIR. I think it’s very important that we do continue to innovate, but my mantra has always been making sure that we do that securely. What have I been up to? Hacking FHIR. [laughter] Playing with FHIR. So many puns that we can talk all day about, but…
0:09:57.9 MM: So many puns.
0:09:58.3 AK: I swear I won’t go there.
0:10:00.2 JM: Yeah, you would not be the first. [laughter]
0:10:01.2 AK: I was about to say, we could… I had so many title ideas for my research. So we titled it Playing with FHIR, and really, it’s me just hacking FHIR APIs and trying to find vulnerabilities. Now, let me make this abundantly clear, and I didn’t find this out until just recently: there’s FHIR APIs, and then there’s certified, regulated FHIR APIs. And there’s all these different things within that sphere, and then there’s regular APIs that people custom-build and develop. And one of the things that one of the major EHR systems talked about was their concern, or fear, of the fact that, really, John and I can start an organization or a startup company and say, “Okay, we’re gonna do this with FHIR and we’re gonna pull this data from this particular EHR system, and we’re going to do what we want with that data,” and that organization doesn’t know about it. They don’t have any control over the security John and I implement. There’s no sort of checks and balances, if you will, as far as going from a more secure enclave or system to a less secure enclave or system. And John, please yell at me and tell me I’m wrong if at any point I’m wrong.
0:11:15.6 JM: Yeah, it depends on your circumstance.
0:11:19.2 AK: Okay.
0:11:19.5 JM: And indeed, what you just said is one of the concerns that the healthcare organizations and the big vendors are worried about, is that they will be perceived as having contributed to a…
0:11:38.9 AK: To a breach?
0:11:39.4 JM: That’s right, yeah. And when I said it depends: if that is a business partner that they’re using to do some data management or some kind of a task, then yes, under HIPAA they are indeed responsible for the failures of their business partner. Completely contrary to that, though, and ONC has provided guidance on this, if a patient comes to a healthcare provider and says, under Patient Right of Access, “I want my data,” the healthcare organization, the EHR vendor, has to just give them the data. They cannot put constraints on it and say, “Oh, wait a minute, this app that you’re wanting us to send the data to, we know it’s a piece of garbage.” They can’t do that. And that’s because Patient Right of Access very clearly says it’s the patient’s responsibility to have done that check, which is a positive, in that there has been data blocking under this excuse that the app that the patient wants me to send this data to, or the email address that the patient wants me to send this to, is not secure.
0:12:55.1 JM: There’s been data blocking of legitimate uses of data, so this lets data loose, to get into the patient’s hands. On the other hand, and I argue this point in other places, the common patient, not Alissa, not Mike, is really a poor evaluator of the security of an application that they have seen on the internet: “Oh yes, I can figure out your medical condition if you just send me your data.” They have no tools to do that analysis of that application. So you can see why the EHR vendors and the healthcare organizations didn’t like this idea: they can see the concern. So it’s a very difficult situation when the patient right of access is used.
0:13:48.8 AK: Yeah, and John brings up a really good point earlier when he was talking about implementation, because I wanna make this abundantly clear, and it’s important in this research compared to other vulnerability research that I’ve done: trying to figure out whose fault it really is for an insecurity, for something… Is that the right word, insecurity? That seems wrong, wrong use of the word. For something to be not secure. For example, one of the things that I’m finding is most endemic to a lot of the APIs I’m testing is a failure to authorize, meaning that I’m authenticated, I have a token, I should be there, I’m authenticated, I should be allowed to talk to the API, but there’s a failure to authorize as far as whether or not I’m authorized to request the data that I’m requesting. One of the things that I found, for example, in the most recent API I’m testing, and this is, and forgive me, I can’t recall the actual acronym right now…
0:14:49.9 JM: Oh, USCDI?
0:14:53.3 AK: Thank you. USCDI…
0:14:53.5 JM: It has a C in it. Starts with US but…
0:14:53.7 AK: Yeah, sorry.
0:14:56.6 JM: Yeah, USCDI is a set of clinical information that they deem as fundamental. So there’s all kinds of data that FHIR can describe, but there’s a set of critical clinical data where they’re saying, “Hey, for vital signs and laboratory results, let’s do it all this same way.” So this USCDI is a bundling of some clinical data, and there are specifications called US Core that are built on FHIR, built on LOINC and a bunch of other things, but it’s a subset of all of FHIR. FHIR is heading towards 200 resources; USCDI is, what, about 20-25-ish resources? So predominantly things like observations and…
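As a concrete illustration of what querying that kind of US Core data looks like on the wire: a FHIR search is just an HTTP GET against a resource endpoint. This is a hypothetical sketch; the base URL is made up, and only the URL shape reflects the actual FHIR REST convention.

```python
# FHIR searches are plain HTTP REST: GET <base>/<Resource>?<params>.
# The base URL here is illustrative, not a real server.
BASE = "https://fhir.example.org/R4"

def observation_search_url(patient_id, category="vital-signs"):
    # Observation is the resource John mentions for vital signs and
    # laboratory results (normative as of FHIR R4).
    return f"{BASE}/Observation?patient={patient_id}&category={category}"

print(observation_search_url("123"))
# https://fhir.example.org/R4/Observation?patient=123&category=vital-signs
```

Because the whole API surface is ordinary HTTP, every standard web security tool, and every standard web attack, applies to it, which is a theme both guests return to later.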
0:15:49.5 AK: Yeah, there’s STU, which is Standard for Trial Use, and then DSTU, Draft Standard for Trial Use, and there’s different levels, I guess maturity levels.
0:16:00.5 JM: Yep, those are different levels of maturity. DSTU was a term that HL7 used 10 years ago. They’re no longer using DSTU, because “Draft Standard for Trial Use” seemed to be duplicative of “draft” and “trial use,” so they just dropped the word “draft”; it’s now just Standard for Trial Use. So that’s why you see DSTU 2, but you see STU 3. They just dropped the D because it seemed redundant, not because of anything else.
0:16:35.3 AK: Yeah, and so this is maybe an area you could really help me understand, because there’s not just FHIR APIs. There’s these different sort of versions. So one of the EHR companies, and I won’t mention who sent me this, it came out of a conversation where the narrative was about making sure that if I find any vulnerabilities in these FHIR APIs, I’d be very careful about whether or not it’s tagged with a specific version of FHIR. So, for example, he said, “Technically, our USCDI version 1 APIs are not yet certified.” If you were to follow his advice, you would only test the DSTU2 FHIR APIs from the CCDS. “We plan to certify later this year and expect to be among the first vendors certified.” I think it’s likely a number of major vendors will certify later this year. So…
0:17:24.7 MM: Hold on for a second. What people who are not watching this on video missed was John starting to laugh as Alissa was reading that.
0:17:34.6 JM: That’s your government bringing you more acronyms built upon acronyms, by the way.
0:17:40.3 AK: Acronym soup, right? What are you talking about? I’m just a hacker trying to blow this shit up.
0:17:48.8 MM: Yeah.
0:17:48.9 JM: So let me simplify it, and this of course is not really security-relevant, but the FHIR standard is evolving, it’s maturing, and any time you do a standard, you kinda have to go through these maturity steps to say, “Well, we think this is good, others seem to have found it useful, but we’re still not 100% sure,” versus, “We’re sure this thing is solid; we’re not gonna make any more breaking changes, we might enhance it, but we’re not gonna make any breaking changes.” So you’ve got these different maturities, and essentially, when it comes to FHIR, there have been a couple of revisions of FHIR that made different parts of it normative, meaning we won’t make breaking changes. Back in the early years, nothing was normative. Everything was trial use, and some things were less than even trial use. That’s why on every single resource there’s an FMM maturity level. So there are some resources that were just brought into the FHIR spec in the past two years that are FMM 0, which means even the committee doesn’t think this is worth looking at.
0:19:07.2 AK: Oh, so in terms that I can understand, so it’s like alpha, it’s not even beta, it’s like alpha.
0:19:13.2 JM: Yeah, zero is that the committee has started to draft something but we’re not ready to listen to anybody outside of the committee members. One, is when the committee says, “Yeah, we think we’re pretty solid on this, but boy we really don’t think we’re the only ones who should comment.” And as you head up to five, five is the last step before, “We will make no more breaking changes, it’s normative.” So one, two, three, four, five, normative.
0:19:43.8 AK: Well, Okay. And right now it’s at two, DSTU2.
0:19:47.0 JM: Yeah, so that’s different.
0:19:49.5 AK: You’re like, “I’ve never heard of this before.”
0:19:52.7 JM: That’s different. So there’s releases of FHIR, there was one, two, DSTU2, STU3, we’re now at R4. We’re getting ready to do balloting for five.
0:20:05.6 AK: Oh, interesting.
0:20:07.5 JM: There’s actually a 4B that’s out right now, that’s gonna be a 4.0.1. So those are releases, but within the releases, various things are more solid than others. So you actually have to find, A, the release, so DSTU2, or R4, and then within there, you have to go find the resource that you’re interested in to see its maturity level. So in R4, if you look in there for Observation, where all the vital signs are, Observation is now normative. So it took us until R4 to get Observation normative.
0:20:50.4 AK: Okay.
0:20:51.5 JM: There are other things that have been in since the beginning that are not yet normative. Some of the security stuff is looking to get normative in R5, which is probably 2023.
0:21:05.1 MM: And so, the big challenge that I see here, and just bringing this back a little bit to Alissa’s research, but something that is really important for people who are not as tied into this world, especially as John is, but as kind of all of us are, is: just because those releases exist from the FHIR organization doesn’t mean that any given implementation will hew to a specific release perfectly, or implement all of the things required, does it?
0:21:31.0 JM: No, absolutely, yeah. The big advantage that counters that, though, is that the majority of implementations are using one of two or three reference implementations. So there’s the HAPI toolkit, there’s the FHIR [0:21:50.7] ____ toolkit, the .NET framework.
0:21:54.6 AK: Are you also referring to SMART on FHIR? ‘Cause there’s SMART on FHIR, and then I know there’s also the Argonaut Project. This is like… ‘Cause I know enough to be dangerous, there’s just this huge spider web of all this stuff, and it’s like trying to really demystify that. I know we could spend all day talking about this, but one thing that I did wanna mention was, yesterday in one of the new FHIR APIs I tested, I was able to actually request other patient records besides the ones that I should’ve been able to access, and this was because of a failure with authorization. And for our audience out there, this is really important that I mention this. This is not the fault of HL7 or John or anyone, this is the fault of the particular implementation.
0:22:38.5 JM: Yes.
0:22:38.8 AK: I guess the best analogy I can use is, John and I can take a bucket of Legos and we can build something completely different from each other, yet we’re using the same Legos. The vulnerabilities that I’m gonna publish from my research are vulnerabilities that were introduced because of that specific implementation, not because of any fault of the standard or HL7.
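The failure Alissa describes above, being authenticated yet still able to pull other patients' records, is what a missing object-level check looks like in code. A minimal sketch, with entirely hypothetical names and data, assuming tokens carry the patient they were issued for:

```python
def get_patient_record(token, patient_id, db):
    # Authentication: is the caller who they claim to be?
    if not token.get("authenticated"):
        raise PermissionError("401: not authenticated")
    # Authorization: is THIS caller allowed THIS object? Skipping this
    # check is Broken Object Level Authorization (BOLA).
    if token.get("patient_id") != patient_id:
        raise PermissionError("403: not authorized for this patient")
    return db[patient_id]

db = {"p1": {"name": "Patient One"}, "p2": {"name": "Patient Two"}}
token = {"authenticated": True, "patient_id": "p1"}

print(get_patient_record(token, "p1", db))  # own record: returned
try:
    get_patient_record(token, "p2", db)     # someone else's record
except PermissionError as e:
    print(e)                                # the missing check, made explicit
```

The vulnerable version of this function is identical minus the second `if`: the token is valid, so the data is returned, no matter whose record was asked for.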
0:23:05.8 JM: I’m sure you’ll find some standards problem, eventually. You look hard enough, you’ll find…
0:23:10.6 AK: If I haven’t already.
0:23:11.3 JM: Yeah, the example you give is a really good one; I oftentimes have to remind people, “Hey, on every access, you have to look at the token.” You can’t just assume, “Oh, I looked at this token 10 transactions ago, it’s good.” Yes, you have to look every single time. You have to inspect, and be robust to some garbage coming in on the request. And then there’s even some nefarious ways to use the query parameters that are even more subtle than the one you bring up, where you’re simply saying, “Hey, I got this token for patient John, but I’m gonna ask for patient Mike. Oh, look at this, I got the data.” There’s other ways you can use searching parameters. For example, one of the ones that we’ve dealt with: oftentimes, in the case of privacy, the token will only indicate what you should have access to, but the implementation has to look at the results and see that all of the results should be given to this user as well. So one of the examples that we throw out quite easily is VIP patients: the common clinical user doesn’t have access to VIP patients.
0:24:29.6 JM: When Britney Spears is in your hospital, only certain clinicians have access to Britney Spears. But the query for, “Give me patients whose first name starts with B,” will naturally include Britney, but Britney will be marked as VIP. So in the results set before it’s delivered back to the client, you have to look through that and go, “Oh, let’s eliminate these VIPs.” Well, you get to that point and you’re starting to realize that, “Wait a minute, I asked for a page size of 20 and I was just given 19.”
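The server-side result filtering John just described can be sketched briefly. This is a hypothetical illustration, not any EHR's code; the VIP flag and record shapes are made up.

```python
# Privacy labels (the "vip" flag here is illustrative) must be applied to
# the result set BEFORE it leaves the server, never delegated to the client.

def search_patients(prefix, can_see_vip, db):
    matches = [p for p in db if p["name"].startswith(prefix)]
    if not can_see_vip:
        # Server-side redaction of restricted records from the result set.
        matches = [p for p in matches if not p.get("vip")]
    return matches

db = [
    {"name": "Barbara"},
    {"name": "Britney", "vip": True},   # restricted (VIP) record
    {"name": "Bob"},
]
print(len(search_patients("B", can_see_vip=False, db=db)))  # 2
print(len(search_patients("B", can_see_vip=True, db=db)))   # 3
```

Note the subtlety John raises at the end: a caller who asked for a page of 20 and received 19 can infer that a restricted record exists, so careful implementations also have to think about repairing page counts, not just dropping rows.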
0:25:04.8 AK: That’s a really good point, because one of the vulnerabilities that I found in my research was an over-reliance by many implementations on filtering out those things at the client level. And what developers need to understand is, I have an API client; I don’t need to use your API consumer app that you built and you expect me to use. I can use Postman. I can use Burp Suite. I can go outside of that jail, and if you’re doing filtering at the client level, you’ve got a real problem if I can see all the results in an API client.
0:25:40.3 MM: It’s the oldest rule in security.
0:25:44.2 JM: Architecturally, that might be the right answer though, is that the client’s app is more trusted than the user, right? So that might be the policy that’s put into place, but it’s probably not.
0:25:57.6 MM: Well, and it’s never a good policy to rely on client-side filtering. John, an old friend of ours, Matt Clapham, would have gone absolutely insane, and I’ve seen him go absolutely insane, whenever a product is designed that way, because, to Alissa’s point, you cannot mandate what a client accesses your server with. There’s just no way to demand that they have to use your version of your web browser.
0:26:22.3 JM: Yeah, again, I’ve got a bunch of use cases in my tutorial, and one of the cases is being able to deliver sensitive data to the client where I know that the app is more trusted than the user. One example is where I can deliver sensitive information that would normally not be allowable to that particular user unless they did a break-glass. So I’m feeding the client that I do trust with information that it should be blinding from the user, but it’s there in case the user declares, “This is a medical emergency. I really need to know, are they on this drug?” So there are some cases where the application is more trustworthy than the user. Now, if it’s a mobile app, mobile apps should never be seen as more trustworthy than the user, because they’re in the complete control of the user. But business-to-business is still FHIR. You still use the same API definition for business to business; the token just means something slightly different. The token is a binding of, “Well, this is the current user who has triggered the event,” but they’re not the only user who’s going to see the data, because it’s gonna get incorporated into the EHR of the requesting organization. So there are real differences in policy depending on whether the API is being used by an organization or an end user.
0:27:55.8 AK: Yeah, and I think there’s valid arguments on both sides of the aisle, right? I’m well aware of the fact that in security, it’s not just black and white; there’s gray area, and there’s valid arguments for both the attacker and the defender in that situation. Obviously, there’s reasons behind a developer just sending everything back from the database and expecting that whoever is writing the consumer app, the API consumer, makes sure they filter out just what they wanna see. Whether you wanna say that’s laziness on the side of the developer or being forward-compatible with future needs, I’m sure arguments could be made for either of those. But there’s also a lot of arguments that can be made by the breaker, the attacker, who’s saying, “Oh, you’re giving me everything. I didn’t see this in the app, but I see this now,” because I just sent the same request in Postman. I think probably the most common finding for me right now is broken object level authorization, or what are called BOLA vulnerabilities, broken authentication, mass assignment, just a lot of things that are clearly not being checked against the OWASP API Security Top 10, just starting with that list alone. Issues like hard-coding tokens. It’s 2021, and we’re still hard-coding tokens with no time-to-live, tokens that are just valid for 100 years, just things like that.
0:29:15.8 AK: And then that begs the question from the developers: “Okay, well, we have to put these tokens or keys somewhere; where do we put them if we can’t hard-code them?” And that then begs for another conversation around in-app protection and code obfuscation. But I think there’s definitely… when you have something like this that’s a standards framework, and the security… I like how John started the show out with, “This is all about risk management.” Because it’s true, you’re really managing risk here, ’cause you’re not gonna fix every vulnerability; you’re only gonna fix those that are an unacceptable level of risk to the business.
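The two fixes implied by the hard-coded-token finding above can be sketched together: tokens get a short time-to-live rather than being valid for a century, and signing secrets come from the environment (or a secret manager) rather than source code. A hedged sketch; every name here is illustrative, and a real deployment would use a vetted JWT library rather than this hand-rolled format.

```python
import base64
import hashlib
import hmac
import json
import os
import time

# Loaded at runtime, not hard-coded; the fallback exists only so the
# sketch runs as-is. In production, a missing key should be a hard error.
SECRET = os.environ.get("API_SIGNING_KEY", "dev-only-fallback").encode()

def issue_token(subject, ttl_seconds=900):   # 15 minutes, not 100 years
    payload = json.dumps({"sub": subject, "exp": time.time() + ttl_seconds})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload.encode()).decode() + "." + sig

def token_expired(token):
    payload_b64, _sig = token.split(".")
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return payload["exp"] <= time.time()

print(token_expired(issue_token("patient-123")))                  # False: fresh
print(token_expired(issue_token("patient-123", ttl_seconds=-1)))  # True: expired
```

A short TTL turns a leaked token from a permanent credential into a briefly useful one, which is exactly the risk-management trade-off being discussed.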
0:29:52.2 MM: Well, and unfortunately, in most of these cases, you’re not gonna know about these vulnerabilities. Alissa, this conversation shows the really interesting challenge for the healthcare organization that uses the FHIR APIs. You’re reporting your vulnerabilities to the vendors and you’re working with the vendors, and those vendors will have some amount of ability, or desire, or impetus to fix those things. We could talk about this for four hours, but I’m not gonna keep us on forever, as much as I love just hanging out with you guys.
0:30:17.7 AK: I was gonna say, this better not be the only show I get to do with John ’cause I want to… I think this is a multi-part series here.
0:30:27.2 MM: I love that idea, and I’m quite happy to facilitate that exact idea.
0:30:31.6 JM: Yeah, ’cause I have questions Mike.
0:30:34.7 MM: Oh, I know, we haven’t even gotten started. We’ve barely scratched the surface. But actually, there is a question I want to throw to both of you, and actually, John, maybe I’ll throw it to you first. Because the thing about FHIR and all of these APIs is, most healthcare delivery organizations are being forced to implement this in some way. How do you even perform risk management against this thing that, frankly, if it’s not obvious to every listener, is a really complicated, complex subject that the healthcare delivery organization has very little control over? How do you manage the risk in that situation?
0:31:07.8 JM: Continuously.
0:31:09.2 MM: Continuously? That’s a good answer.
0:31:11.0 JM: Yeah, honestly, if I get only one message across, if you’re thinking you’re just going to secure your FHIR API and “Yay, I’m done” move on, no it’s gotta be a continuous fight. You better be ready to man that fight for the long haul. There are so many tools available today to help you. FHIR is based on HTTP REST, and therefore, you’ve got the whole set of security tools that are designed to help you there. There’s nothing FHIR specific about the majority of what you need to do. Yeah, there is some stuff in the FHIR space, things we’ve been talking about with query parameters and the fact that the patient…
0:31:56.5 AK: Scopes and tokens…
0:31:57.9 JM: Resources. Yeah, yeah, those are the harder ones to deal with, but so much of it is just simply, do your input validation.
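John's "do your input validation" point maps directly onto FHIR's own data types. As one hedged sketch (the function name is illustrative, but the pattern itself comes from the FHIR specification, where the `id` type is restricted to letters, digits, `-`, and `.`, at most 64 characters): validate any resource id taken from a query parameter before it reaches the search layer.

```python
import re

# FHIR's "id" type: [A-Za-z0-9\-\.]{1,64} per the FHIR specification.
FHIR_ID_RE = re.compile(r"^[A-Za-z0-9\-.]{1,64}$")

def validate_resource_id(raw: str) -> str:
    """Reject anything that is not a syntactically valid FHIR resource id.

    Checking the grammar up front stops injection payloads and malformed
    input before they ever touch the database or search machinery.
    """
    if not FHIR_ID_RE.fullmatch(raw):
        raise ValueError(f"invalid FHIR resource id: {raw!r}")
    return raw
```

The same approach generalizes: every FHIR search parameter has a declared type, and each one can be checked against its grammar with ordinary, non-FHIR-specific tooling, which is exactly John's point about reusing the standard HTTP/REST security toolbox.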
0:32:09.2 AK: Yeah, my response to your question, Mike, is four words, and it starts with the letter P and ends with “lan Do Check Act.” You wanna talk about international standards, that’s why I’m drunk on the ISO 27001 Kool-Aid. I really get lost in the sauce on the PDCA life cycle. I agree with John, security should be a continuous cycle, a continuous OODA loop. Plan Do Check Act, continuously. That’s what security should be, it should never be point-in-time. Two new zero-day exploits have come out since the beginning of this show, since we started talking. New zero-day exploits come out all the time, and we need to be cognizant of that. Just because the FHIR API that you implemented today, that your implementers, your developers came back and said is secure today, doesn’t mean that it’s gonna be secure tomorrow. We saw that even with JWT tokens and the vulnerabilities there with the signing algorithm and being able to set that to none. So I just think that this needs to be continuously improved, and also, hack your own stuff. Don’t wait on someone to come along to hack it for you. Everyone should be hacking their own code, and it certainly shouldn’t just be the people writing the code.
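The JWT weakness Alissa mentions is the classic `"alg": "none"` attack: the token's header field is attacker-controlled, so a verifier that trusts it can be tricked into skipping signature checks. A minimal defensive sketch (the function names and the allow-list are hypothetical; the base64url header decoding follows the JWT format itself):

```python
import base64
import json

def jwt_header(token: str) -> dict:
    """Decode (without verifying) the JOSE header of a JWT."""
    header_b64 = token.split(".")[0]
    # Restore the base64url padding that JWT encoding strips.
    header_b64 += "=" * (-len(header_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(header_b64))

def reject_unexpected_alg(token: str, allowed=("RS256", "ES256")) -> None:
    """Refuse tokens claiming 'none' or any algorithm outside an
    explicit allow-list, instead of trusting the attacker-controlled
    'alg' field to pick the verification method."""
    alg = jwt_header(token).get("alg", "")
    if alg.lower() == "none" or alg not in allowed:
        raise ValueError(f"disallowed JWT algorithm: {alg!r}")
```

The broader lesson matches her point: a library considered safe at deployment time became exploitable once this attack was published, so verification policy has to be revisited continuously, not set once.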
0:33:26.1 AK: Trust, but verify. And if you’re outsourcing this… A very shocking thing for me in my journey has been, I thought a lot of this was being developed internally at these organizations, but a lot of them are outsourcing it. These are billion-dollar companies, tens of thousands of employees. I thought that they would have buildings of developers, but a lot of them are outsourcing this stuff. And just because the company that you’ve outsourced to claims to be experts in FHIR APIs, and on their website they have a page about their security and how they’re SOC 2 Type 2 compliant… Trust, but verify. You’re outsourcing your development, but you’re not abdicating it. Outsourcing isn’t abdicating. So my advice is to also hack your own code.
0:34:15.0 MM: Alright, and with that, I like to end every show the same way. John, where can we find more of you? People listening, where can they find more John Moehrke?
0:34:24.8 JM: Well, I’d like to say go to my blog, Healthcare Sec Privacy, but I haven’t been doing much there lately. I think the majority of the reason for that has been a lack of good questions. So I love to engage with these discussions and then come up with a, “Oh yeah, how would I solve that?” So I am open on Twitter, John Moehrke on Twitter, I’m open to even emails to [email protected]. And just be aware, if you ask me a question and I come up with a long answer, it’ll turn into a blog post, just because that’s the easiest way. So I’m very happy to have questions sent my way.
0:35:07.7 MM: And he means it, I often do that… As I said at the beginning, John is often somebody I will reach out to with questions ’cause he’s just such a fountain of knowledge. Alissa, where can they find you?
0:35:17.8 AK: Yes, a few surprising new announcements. I actually just found out that I’m speaking twice at HIMSS 2021. So John, I don’t know if you’re going, I would love to sit down, have drinks and break bread with you at HIMSS. But I will be keynoting along with amazing people like A-Rod and some other folks, keynoting at the HIMSS conference and also speaking there with Mitch Parker. People can find me at HIMSS, or they can find me on YouTube. Definitely subscribe to my YouTube channel, I live stream and upload a new video every week. Connect with me on LinkedIn, follow me on Twitter, and I would love… ’Cause I’m sure there’s a lot of FHIR API nerds just salivating at the mouth right now over this conversation, I would love to see us do this again. I think there’s a lot more that John and I can nerd out on, and I would definitely like to come back when I publish this research. We’re coming very close to the end here on it, so I plan to have it actually published before Hacker Summer Camp, for Black Hat, DEF CON, and HIMSS. So yeah, definitely, the best way you can support me is follow me and watch my content, but yeah, I think there’s definitely #moretocome with John and I.
0:36:28.7 MM: There’s more to come with all of us, I’m also speaking at HIMSS and so…
0:36:31.0 AK: Oh, yay, congratulations Mike.
0:36:33.8 MM: We will spend some time out there, I don’t know if John, if you’re planning on being out there, but let’s find some time and let’s hang out.
0:36:41.0 JM: I’m virtually speaking at HIMSS, but I’m doing risk management. I’m not going to that cesspool of Covid variants, delta and whatever comes.
0:36:52.6 AK: The human petri dish.
0:36:52.8 MM: Yes, well I mean Las Vegas. As it often is anyway.
0:36:56.3 JM: Well, that’s Las Vegas in August.
0:37:00.7 AK: Yeah, yeah, yeah.
0:37:00.8 MM: Alright well, we will virtually break bread at HIMSS. And Alissa I will see you there, but thank you both for today, we should do… We will definitely need to do this again. This has been a blast and thank you all for coming.
0:37:15.3 Speaker 1: Thanks for joining us for this episode of In Scope. To make sure you never miss an episode, hop on over to www.scopesecurity.com to sign up or you can listen on Apple Podcasts, Spotify or Stitcher. And if you have ideas for topics, guests or technical tips, please contact us at [email protected]
ABOUT THE GUESTS
Alissa Knight is a recovering hacker of 20 years, blending hacking with a unique style of written and visual content creation for challenger brands and market leaders in cybersecurity. Alissa is a cybersecurity influencer, content creator, and community manager, and a partner at Knight Ink, which provides vendors with go-to-market and content strategy for telling brand stories at scale in cybersecurity. Alissa is also the principal analyst in cybersecurity at Alissa Knight & Associates.
Alissa is a published author with Wiley, having written the first book on hacking connected cars, and recently received two new book contracts to publish her autobiography and a new book on hacking APIs.
As a serial entrepreneur, Alissa has started and sold two cybersecurity companies to public companies in international markets and also sits as the group CEO of Brier & Thorn, a managed security service provider (MSSP).
John Moehrke is a Standards Architect specializing in Healthcare Interoperability Standards Architecture in Interoperability, Security, and Privacy for By Light Professional IT Services Inc. He is primarily involved in international standards development and the promulgation of those standards. John is co-chair of the HL7 Security workgroup, a member of the FHIR Management Group and FHIR core team, and co-chair of the IHE IT Infrastructure Planning Committee. He participates in ASTM, DICOM, HL7, IHE, ISO/TC-215, Kantara, W3C, IETF, OASIS-Open, and others. John has also been active in many regional initiatives such as the S&I Framework, SMART, HEART, CommonWell, Carequality, Sequoia (NwHIN-Exchange), and WISHIN. He has been active in healthcare standardization since 1999, during which time he has authored various standards, profiles, and white papers.