Title: Encryption - Perspective on Privacy, Security, & Compliance
Description: HIPAA, HITECH, the Omnibus Rule, PCI-DSS, and many other regulations and frameworks speak to the importance or requirement of encryption. Adequate encryption of regulated and sensitive data can help your organization meet or exceed the privacy and information security regulatory requirements you face, if it is implemented correctly.
Chris Heuman, Practice Leader for RISC Management and Consulting, provides an informative webinar on the value of encryption and the successful components of a data security program that integrates encryption. Chris Heuman discusses the legal safe harbors for suitably encrypted data, typical encryption methodologies, how to document your choices and implementation, and how to demonstrate a successful program to an auditor.
April: Hi everyone! Thanks for joining us today. This is April Sage from Online Tech. Please join me in welcoming Chris Heuman from Chicago. He is the practice leader of RISC Management & Consulting. Chris is going to bring us a wealth of experience about compliance, encryption, and the impact on risk from decades of experience with information systems, data security, healthcare environments and much more. Chris is a certified HIPAA professional (CHP), certified security compliance specialist (CSCS), and a certified information systems security professional (CISSP).
With no further ado, Chris, welcome! Thanks so much for joining us today.
Chris: Thank you very much, April. I want to say thank you very much to all of our guests. I know that you all have a very busy schedule and there are more balls in the air than you can possibly address. Therefore, we really hope to keep this presentation both succinct and valuable. I really want to say thank you to Online Tech as well for hosting these informative webinars every Tuesday, and for the current series on encryption, which continues on both the 18th and 25th of this month with detailed applications and encryption techniques.
I have to say from our perspective that Online Tech is one of the very, very few providers in this space that takes privacy, security, and compliance seriously, and they've really implemented and tested their controls. April, thank you very much.
April: Thanks, Chris.
Chris: As April mentioned, my name is Chris Heuman. I'm the Practice Leader for RISC Management & Consulting. I've included my contact information here, as well as the addresses where you can look up further information on our website and the blog that we host. We have a Twitter account, of course, and if you're really bored, you can check any one of those out.
Today's program is really going to focus on an encryption program, and the top-level areas that we're going to address include: Why are we bothering to encrypt this data? What data is it that we should be encrypting? How do we encrypt that data? How do we document the encryption program? And finally, how do we test that the encryption program was implemented correctly, successfully, and in line with its intention?
We'll talk about the key determining steps and milestones that are involved in building an effective encryption program that really achieves its intended goals.
Speaking of intended goals, we're going to talk first about why to bother encrypting, and about setting the goals of your encryption program. You can see here that the controls are broken down into different areas, and chances are everybody on this call, if you have any type of sensitive information, is probably dealing with one of these different pieces: regulations or laws, industry requirements, information security frameworks, and potentially even some of the international controls.
The government mandates: if you have healthcare data, you know we're talking about HIPAA and the HITECH Act and now the Omnibus Rule. If you have financial data, non-public personal information, you might be dealing with Gramm-Leach-Bliley. Public organizations have to deal with Sarbanes-Oxley. If you have personally identifiable information, for example, Social Security numbers or other personal data about individuals from certain states, you're dealing with the patchwork of state laws that are out there. If you're in California, Texas, or Massachusetts, you really need to pay attention to this.
If you're in banking, you're dealing with FFIEC. Everybody from an identity theft standpoint is dealing with the Federal Trade Commission, but the point is on this slide, there are a myriad of regulations, frameworks, industry requirements that anyone who has sensitive information these days has to deal with.
I know, April, one of the things that we had talked about was trying to get an understanding from the audience of whether they are a regulated industry and whether they have this type of data.
April: Right, yes. We'd like to fire off a really quick poll and we would love your feedback. We just would like to understand if your company needs to meet compliance.
If everyone can give us a real quick answer here? We'll share that data with everyone afterwards, not specific to your name and company; it'll be anonymous. Then we'll turn it right back over to Chris.
Chris: While you're clicking those buttons, one of the items that isn't even on this slide, but if you provide cloud-based services to the federal government; if the Feds are one of your clients, you may be dealing with FedRAMP as well.
We've had a couple of our clients come to us and start to encounter FedRAMP surveys and controls lately and have been trying to deal with that challenge.
April: Great! Thanks, Chris. Thanks everyone for your feedback.
Chris: What we want to do is to, again, talk about why are we bothering to encrypt and there's some basics in what we're trying to achieve. What we're trying to do is really to protect the data from unauthorized access, loss, or theft.
What we have to remember here is that encryption is a technique. It's a control mechanism to enforce predetermined authorization decisions. We have to determine first who it is that we're authorizing to access the data. Encryption is really a mechanism to ensure that those controls have been put into place.
When you think about encryption, we have to address encryption both of data that's at rest and data in motion. Data at rest is really the focus of this webinar today. Data in motion, as we'll mention briefly in a couple of slides, is really a far more mature technology and technique. We're not going to spend as much time on that today.
We hope that everyone out there in the audience sees privacy and security as a lifecycle, not as a one-time event. In this lifecycle, encryption is a control mechanism that's implemented after proper analysis has been completed and policies stating the intent have been put into place. One of the things we want to caution you about as you start down this venture, is not to jump the gun and start implementing encryption without attending to all the analysis and the preparatory steps. It's very important to walk through this in a logical manner to get your documentation in order, to make well-thought-out decisions rather than to jump head first into the technical implementation.
Continuing to talk about why to encrypt: for all of our healthcare attendees out there, HIPAA addresses encryption specifically in two places within the Security Rule. Within the technical safeguards, under different standards, there are two implementation specifications that call out encryption. The encryption of data in motion is a very mature technology. It's really highly commoditized and understood to some degree by the public and to a great deal by IT. That data in motion requirement is found in the lower portion of the slide that talks about encryption: "Implement a mechanism to encrypt ePHI whenever deemed appropriate." That is the reference to data in motion.
The more challenging portion of the security rule is up under access control and it basically is called “encryption” and “decryption” and this is where HIPAA talks about data at rest and encrypting whenever it's reasonable and appropriate.
Chris: One of the most important statements regarding encryption, one of the biggest drivers if you are in the healthcare space, was a statement that I heard at HIMSS this year down in New Orleans. It was made by the Office for Civil Rights Director Leon Rodriguez early in the first presentation on Monday morning. Director Rodriguez told the audience that during the audit program efforts and investigations to date, there really were only two types of organizations with regard to encrypting data at rest: those that performed an analysis and encrypted their ePHI, and those that did nothing.
In other words, he was telling us that no organization that paid attention to this issue and did an analysis chose not to encrypt ePHI in storage. I thought that was a very powerful statement and a great observation that came out of HIMSS.
When you look at the HITECH Act, there was encryption guidance that came out subsequent to the HITECH Act and prior to the publication of the Omnibus Rule.
The OCR released this guidance on data breaches and encryption, and while the guidance has a long title there in the top bullet, there are really three key points on this slide and three important takeaways from the Data Breach Guidance. The first is that the breach notification provisions apply to unsecured PHI, and you can see that there in the second bullet.
Unsecured PHI is defined as PHI that has not been secured through the use of a specified technology or methodology. If the PHI that was breached had been rendered unusable, unreadable, or indecipherable to unauthorized parties, then it is not considered unsecured PHI. Therefore, the breach requirements do not apply if the data is not considered unsecured. The main question is "How do we render it unusable, unreadable, or indecipherable to unauthorized parties?" There are only two ways: encryption or secure and permanent destruction. There really are no other means.
In that guidance, the OCR provides us guidance for data at rest by pointing to NIST Special Publication 800-111. The link for that is here on this slide, and it is both great for compliance and for curing insomnia if anybody can't get to sleep at night. It's a wonderful document to go through. It looks at PHI that's at rest, and that includes many different forms and formats, some of which people really don't consider when they're thinking about their PHI that's at rest.
We're talking about databases, file shares, both structured data and unstructured data. We'll talk a little bit more about unstructured later. Workstations, laptops, tablets, iPads, phones, USB drives, flash drives, data back-up tapes, CDs and DVDs, cameras, external hard drives and anywhere else that you have regulated data that is not in motion.
Regarding acceptable encryption for data in motion, that same guidance pointed us to FIPS 140-2, which pulled together controls from three different NIST Special Publications: 800-52, 800-77, and 800-113. When you think about data that's in motion, you're really thinking about data that's crossing the Internet or wireless networks, or moving from tier to tier within an application (oftentimes that's not thought about), and across wired and/or wireless connections in a non-persistent state. Where it's not being written to disk and it's not being retained, that's data in motion.
Chris: For those of you on the call who are in the banking industry, I'm sure you are familiar with the FFIEC Workbook, and I'm sure it's one of your favorite documents. Basically, the FFIEC Workbook, even if you're not in the banking industry, is an excellent resource to utilize for information security techniques, advice, frameworks, and more. There are four key areas where they talk about the encryption program that's required of banking institutions: they require encryption strength sufficient to protect the information from disclosure until such time as disclosure poses no material risk; effective key management practices, which we'll focus on again a little later; robust reliability; and protecting the encrypted communication endpoints.
For anybody who's dealing with payment card data: if you're storing credit card information, including what's defined as cardholder data, and you need to comply with the Payment Card Industry Data Security Standard (PCI DSS), the encryption of that stored data is addressed in requirement number three. You can see on the slide PCI's requirement for strong cryptography, and there's really excellent guidance in the PCI DSS standard. If you are implementing encryption, even if you're not specifically required to comply with PCI, you still may want to take a look at that standard, as it's very well thought out. It has a lot of controls in it and it can be very assistive.
Chris: The next step here in the program is really determining what to encrypt. During this step, we're going to talk about the three top-level objectives for this phase: finding your data, classifying your data, and making an encryption determination. First, you really have to find all of your data and understand exactly where it's located. This is a requirement not just for encryption but for any information security control. If you don't know where your data is located, how sensitive it is, or what is in your data repositories, you cannot protect it.
Again, that's not specific to encryption; it's true of any info sec control. If you don't have a good understanding, you either have to protect everything, which can be very expensive and waste significant resources, or risk getting paralyzed in the analysis phase and not accomplishing anything at all. One great method to get a handle on what data you have and where it's located is Data Loss Prevention, or DLP as it's often abbreviated. If you use a DLP solution, you want to make sure that you're looking for sensitive and regulated data in network traffic, on file shares, in databases, and on endpoints.
You really want to pay special attention to unstructured data such as Word documents, spreadsheets, text files, or other non-centralized unstructured repositories. When we say unstructured, we're talking about anything outside of an enterprise database. In the DLP assessments that we perform, for example, we've found significant credit card information and PHI stored in places that you wouldn't expect, such as Microsoft Exchange servers. We found that there were a lot of users storing sensitive data in their Outlook clients, which drops it onto the Exchange server.
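As a rough illustration of the kind of pattern matching a DLP tool performs on unstructured text, here is a minimal sketch in Python. The patterns and finding labels are hypothetical, and real DLP products use far more sophisticated detection (context analysis, file parsing, OCR, and so on); this only shows the basic idea of scanning for card-like numbers and SSN-like strings.

```python
import re

# Hypothetical patterns for a lightweight DLP-style scan.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")   # card-like digit runs
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")     # SSN-like strings

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to reduce false positives on card-like numbers."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan_text(text: str) -> list:
    """Return finding labels for sensitive patterns found in `text`."""
    findings = []
    for match in CARD_RE.finditer(text):
        if luhn_valid(match.group()):
            findings.append("possible-card-number")
    if SSN_RE.search(text):
        findings.append("possible-ssn")
    return findings
```

In practice you would run a scanner like this (or, far better, a commercial DLP product) across file shares, mailbox exports, and database dumps, and feed the hits into your data inventory.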
Once you've got a good handle on where your data is located, whether you've done an assessment or whether you have great tools in place already, you really should classify that data. When you do data classification, you'll have to come up with a scheme that fits your industry and your organization. I put an example here with four levels of classification, and I would really recommend starting with these or something close to them. You want to think about data that is regulated; that's the data we're focusing on today, whether it's PCI with credit cards or whether it's health information. If there is a regulation that you're going to get in trouble with, I would classify that data as regulated.
You want to think about confidential data. Confidential data may not get you in trouble with an agency and it may not get you fined, but it's highly sensitive and you don't want it out there in the public. Examples of that might be contracts that you have with your clients, purchasing agreements, items like that. Then there's non-public data. This goes more into the category of things that might be embarrassing if they get out, but they're not critical and they're not going to end your business. Examples might be salary history, payroll, leave, things like that. Finally, there's public information. That's the set of items on your website and in your marketing materials that's really intended for public distribution.
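The four-level scheme just described can be expressed as a simple ordered enumeration, which makes the later encryption determination mechanical. This is an illustrative sketch; the names and the threshold are examples, not prescriptions, and your own policy should dictate both.

```python
from enum import IntEnum

class DataClass(IntEnum):
    """Example four-level classification scheme from the talk."""
    PUBLIC = 0        # website copy, marketing materials
    NON_PUBLIC = 1    # embarrassing if leaked: payroll, leave records
    CONFIDENTIAL = 2  # contracts, purchasing agreements
    REGULATED = 3     # PHI, cardholder data, SSNs

# Example policy: encrypt anything classified at or above CONFIDENTIAL.
ENCRYPTION_THRESHOLD = DataClass.CONFIDENTIAL

def requires_encryption(classification: DataClass) -> bool:
    """Encryption determination driven directly by the classification."""
    return classification >= ENCRYPTION_THRESHOLD
```

Encoding the policy this way means every repository gets a consistent, documentable answer instead of an ad hoc decision per system.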
Once you classify that data, you really want to make an encryption determination. You want to say, "At this data classification level, we're going to encrypt this data." If you make that determination, you want to document that decision and make sure that you follow it all the way through. If there is a choice not to encrypt, that may be based upon classification, your policies and data classifications may dictate it, or you may have special cases. You may have application servers or databases that don't support encryption at this point. If you have those special cases, you really want to add them to the documentation packet.
When you put a project in place and you're looking to tie project management techniques to putting encryption into play, it's really critical to tie the information to assets. What you want to do is inventory all of the data repositories where encryption should be implemented. In the last slide, we talked about finding data, but these days it's sometimes difficult to tie a specific piece of information to a specific asset. At this point, you really want to look at a well-maintained and up-to-date asset inventory. That's going to help you identify the devices, the disks, the storage arrays, etcetera, where you need to apply the encryption. You may want to look at your asset inventory system and even consider adding fields to it for the data classification or whether the device requires encryption. You want to make sure that this step is done prior to rolling out the encryption, so you understand where you have to implement encryption, on what devices, and can correlate data to device.
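As a sketch of what those added inventory fields might look like, here is a minimal asset record in Python that ties data classification to a device and lets you report the remaining encryption gaps. The field names are hypothetical illustrations, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """Hypothetical inventory record with the suggested extra fields."""
    asset_id: str
    description: str
    data_classes: list = field(default_factory=list)  # e.g. ["PHI", "PCI"]
    encryption_required: bool = False
    encryption_implemented: bool = False

def encryption_gaps(inventory):
    """Assets where encryption is required but not yet in place."""
    return [a for a in inventory
            if a.encryption_required and not a.encryption_implemented]
```

A report like `encryption_gaps(...)` is exactly the kind of artifact that feeds the project plan and, later, the audit-ready documentation packet.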
Chris: Once you get that asset correlation done, once you have the devices tied to the data, the next step is to perform an analysis of what type of encryption is supported on those platforms and what's most appropriate for your environment. You want to think about the different support for encryption and the different manners of implementing it. You want to think about storage-based encryption as an option; I know that one of the subsequent webinars coming up this month addresses storage-based encryption in depth. You want to consider whole disk encryption, or full disk encryption. This level of encryption takes place below the operating system; the OS is not usually even aware that it's an encrypted drive, which provides a great level of security and protection.
You can consider operating system level encryption, where you rely upon the operating system, such as Windows Encrypting File System, to implement the encryption techniques for you. Obviously, the operating system is aware at that point. Many database platforms support encryption. Some of your legacy database platforms may not, but anything relatively up-to-date should natively support encryption at this point, and it might be a great option for you. If none of those are available, or if you determine a better application-based encryption scheme with your development staff, that may be the way to go. Many times there are custom encryption implementations that take place at the application layer.
Regardless of which one of these you choose, you really want to determine the supported and appropriate implementation. At this point, you may have to get your vendors involved. You may need to obtain formal responses and support for your encryption decisions from those vendors or manufacturers. You often don't want to hang out the window and take all of that risk on yourself by choosing to implement encryption on a platform that your vendor has not verified and does not support.
If the vendor tells you at that point that encryption is not supported, if they're relying on legacy code or legacy database platforms, you want to get that response in writing, and you want to retain it later for your documentation packet. Once you make your determinations, you really want to develop a project plan. You want an organized tool that is going to take you through the data classification, the asset tie-in, and then the determinations of what encryption you're going to put on. Generating a project plan off the bat gives you a lot of accountability; make sure that testing is a part of that project plan. Don't jump right to putting encryption onto production servers. Many times there are several levels of environments; you might have development, test, build, QA, all types of environments prior to getting to production. I highly recommend implementing your chosen encryption in one of those non-prod environments well before you put it into production.
Chris: A little bit here from the FFIEC on how encryption works. I know there's a lot on this slide, and you can retain it for reference; I'm not going through each of these. The important thing to understand from the slide, at least for the depth of this week's presentation, is that the success of your encryption, the compliance you're hoping to achieve, and the safe harbor that you're looking for rely on two factors: the algorithm that's chosen and the key.
You really have to understand the algorithm that you're required to implement, and you really need to select an appropriate key length and complexity. Then, after those decisions are made, you really have to ensure that what was planned is what actually got implemented. Ensure that the algorithm is approved by the industry or by the regulation that you're required to adhere to, and then ensure that a key is chosen and placed onto the devices that is sufficient, that's reasonable, and that can be secured and maintained with some of the key practices that we'll talk about later.

NIST Special Publication 800-111, the Guide to Storage Encryption Technologies for End User Devices that was mentioned earlier, talks a lot about full disk encryption, or whole disk encryption. I mentioned a few slides ago that it's an excellent choice, and they lay out how the technology works. They talk about its sufficiency and that it's a highly recommended solution. Again, it sits below the operating system level, and it's oftentimes a great solution, especially for systems that have been deployed in the last couple of years or are being deployed currently.
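To make those two factors concrete, here is a minimal sketch, assuming the third-party Python `cryptography` package, that uses AES-GCM, an algorithm approved in the NIST/FIPS family, with a randomly generated 256-bit key. This only illustrates the algorithm-plus-key decision; a real deployment adds key management, which is covered later, and would typically use a FIPS-validated module rather than application code.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Factor 1: an approved algorithm (AES in GCM mode).
# Factor 2: a sufficient key length (256 bits, randomly generated).
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # standard 96-bit GCM nonce; never reuse with the same key
plaintext = b"sample regulated record"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Only the holder of the key can recover the plaintext.
recovered = aesgcm.decrypt(nonce, ciphertext, None)
```

Note that both decisions, the algorithm name and the key length, are exactly what an auditor will ask you to document and demonstrate.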
One of the other things that we talked about was FIPS, the Federal Information Processing Standards, and the number that you're looking for is 140-2. There's, again, a lot on the slide; we're not going to cover it all. What you should know about FIPS, though, is that there are four levels of FIPS certification, beginning with level one, which basically means approved cryptographic algorithms. You'll find those on a lot of end user devices from the vendors and manufacturers that have gone through that certification process.
Levels two, three, and four add increasing levels of protection for the device and of inability to tamper with the device without leaving some trail of evidence. One of the most important things on this slide, something we get asked about all the time, and some fairly recent information, no more than a month or so old, is on the last bullet of the slide: the fact that Apple's iOS 6 with the CoreCrypto Kernel Module version three has now obtained FIPS 140-2 level one certification.
For organizations looking to place regulated data onto those Apple devices, if the devices support the new CoreCrypto Kernel Module and iOS 6, you can now say that those devices have that FIPS certification. Now, as we discussed earlier, encryption is only two of the controls you have to put into place for HIPAA, actually one if you're talking about data at rest, but at least you know that you have that behind you.
Chris: Moving forward to documentation, as you go through this program you really want to have a good understanding of what to document and how. If you think about some of the previous slides, we talked about doing significant analysis work, and you're going to want to capture those steps as you go through them. You want to record how you classified your data and made the determination of what to encrypt, what regulations you have to comply with, and what safe harbors you're hoping to achieve. What determination did you make for encryption support? You especially want to capture any cases where encryption wasn't supported, wasn't feasible, or wasn't reasonable to implement.
You want to keep that documentation understandable and audit-ready. You need to understand where that documentation is being kept, who is maintaining it, who is keeping it up-to-date, and who can respond quickly in the event of an audit. You usually have only between 10 and 15 business days to respond to an audit, and that isn't the time to go back and create all this documentation. I recommend extracting your documentation from all of the analyses, determinations, and project plans that were built previously.
Occasionally, you get pushback from technical staff or system and network administrators on putting that documentation together early, but it really feeds your documentation packet and your ability to prepare for an audit, an investigation, a complaint, or a lawsuit later on. Whenever those vendors and manufacturers responded to you that encryption was not supported, if they claimed that it interfered with their database, their application, or their legacy technology, you really want to hold on to those responses.
Not only do you want to keep those responses retained as part of your audit kit, but you want to repeat the request periodically. I would continue to request that functionality at least twice a year, if not more often, and each of those responses should be retained. Try to make those requests in writing, whether that's email or a letter format; phone calls are not going to assist you in the event of an audit. From a management level, we want to employ trust but verify. I would encourage all of the executives or management to ask to see demonstrations of the success of the encryption implementation, and to ask to see management screens.
Many of the end user or endpoint device encryption technologies will have an excellent management screen that will show you all the devices that are suitably encrypted, the capability to remotely wipe them, and which devices had to be remotely wiped; in short, some sort of console to demonstrate that the encryption was actually implemented. Really, what's important here is that executive oversight and interest is key. If the executives are plugged in, if they're looking to see these technologies and the plans and closing the loop on this process, everybody will understand that it's a priority for the organization, and it will actually occur.
Chris: Once you have all of your encryption implemented, you really want to test that implementation. Some of these are more technical, some of these are management functions, but there are several different ways to test that the encryption has actually been implemented correctly. One thing I would encourage everybody on this call who is responsible for reporting a data breach to do is conduct a data breach drill. Whether you have had an incident in the past or whether you've had the advantage of not having an incident thus far, you really want to conduct some sort of a drill to see how your team responds to it.
You want to find out whether the documentation is in place to understand whether the device or the data that was breached was in a state that provides you with that safe harbor. Has the encryption been implemented, and can you prove that it was implemented? You can consider performing a technical vulnerability assessment. There are many different levels of technical vulnerability assessment, and this term is oftentimes used interchangeably with pen testing. Pen testing is but a specific level of vulnerability assessment, and it really applies when you're testing encryption. Pen testing is actually trying to exploit a vulnerability that is believed to exist on a system, a database, or an application. In exploiting that vulnerability, you're trying to place information into that system, extract data from that system, or create or elevate credentials.
If you actually perform a pen test, even if data is extracted from that device or that database, the data should still be completely unusable. That's something you can test for by performing the correct level of technical vulnerability assessment. I highly recommend that all organizations perform a disaster recovery drill, test, or exercise, whatever you may call it. It does a couple of things. One, it will keep you more in compliance and more up-to-date in your disaster recovery planning; but where it really encounters encryption is that encryption adds another level of complexity both in backing up data and in recovering that data.
When you perform a disaster recovery drill or exercise and you're recovering data from or to an encrypted source, you really want to ensure that the key management practices are in place and that the personnel are prepared for the primary contact not to be available. You want to force the recovery of the encryption key to the recovery device. For example, if you have hardware-based encryption implemented on your tape drives and that's what you're relying upon to keep your data backups safe, you want to force that encryption key to be laid back down onto a replacement or standby drive and prove that you can recover all of that encrypted data.
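Part of such a drill can be automated. The sketch below, again assuming the third-party Python `cryptography` package and AES-GCM as the backup cipher, checks both halves of the exercise: the escrowed key recovers the backup, and any other key does not. The `drill_restore` helper and the backup layout (nonce prepended to ciphertext) are hypothetical illustrations, not a real backup product's format.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag

def drill_restore(backup_blob: bytes, key: bytes):
    """Attempt to restore an encrypted backup; None means the drill failed."""
    nonce, ciphertext = backup_blob[:12], backup_blob[12:]
    try:
        return AESGCM(key).decrypt(nonce, ciphertext, None)
    except InvalidTag:
        return None

# Simulate a backup written earlier with the production key.
prod_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
backup = nonce + AESGCM(prod_key).encrypt(nonce, b"patient records", None)

# The escrowed key must restore the data; a wrong key must not.
restored = drill_restore(backup, prod_key)
failed = drill_restore(backup, AESGCM.generate_key(bit_length=256))
```

Running a check like this against a standby drive or restore target during the drill proves both that the data is recoverable with the key and that the encryption is actually doing its job without it.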
I'd encourage you, if USB drives or other portable media are allowed in the organization, to randomly test those portable drives. If you have a policy that says that only R-logoed encrypted drives are allowed, you want to keep an eye out for drives in use that don't match the physical parameters, or randomly test the drives to make sure that the encryption has been implemented on them. Randomly test the remote wipe capability: if a drive is lost, can it be wiped from the console?
The time to find out if the encryption was implemented correctly is not after a breach has occurred; it's beforehand. There are many different ways to do it, most of them cost-effective and time-effective, and they can really save you a lot of headaches later on.
April: Hey, Chris. Can I ask a quick question about the testing recommendation?
April: How often do you recommend performing these? Is this something that companies should plan to test annually or should it be more frequent than that?
Chris: Depending on which test methodology you're looking at and which regulations or industry requirements you're expected to adhere to, the answer may vary slightly. If you look at many of the tests recommended here, the reason to perform them is not just your encryption implementation; they might be required by other controls as well.
If you look at conducting a data breach drill, that's not going to help you only with testing encryption but also with your capability to respond to a data breach, whether it's a California requirement of five calendar days or whether it's a HIPAA/HITECH requirement of 60 days, that data breach drill is going to help make sure that you have policies and supporting procedures in place. I'd recommend something like that a minimum of once a year or whenever you have significant turnover on the team that is put together to respond to data breaches.
A technical vulnerability assessment should be performed by probably all of the organizations here on the webinar, but certainly, if you're in the HIPAA space, you're required to perform one periodically which for most organizations is going to be at least once a year as well as whenever the environment changes significantly. If you add new devices, if you change your networking, if you change your routing, if you change your security implementation, you should be thinking about conducting another one.
A disaster recovery drill, I'd recommend those at least once per year, if not more often. Some of our banking clients perform them every 30 days, taking a small section of their environment and doing a test or an exercise on a handful of servers to prove on a regular basis that the procedures are up to date, that the capabilities exist, and that the technology is working as it should. Portable USB drives are a huge area of risk. There are technologies out there, as part of a vulnerability assessment, that look at this constantly, that can poll the drives and find out if they're encrypted, and see when they've checked in. Certainly, to prove that you're being responsible, I would test USB drives or other portable media, if they're allowed, more than once a year.
If you have cameras that are a regular part of your environment, you want to be looking at those, and if you allow your workforce to use USB drives regularly, hopefully you have a policy in place or have distributed encrypted drives, and you'll be testing those as a regular part of your security program.
April: Great. Thanks, Chris. I appreciate it.
Chris: Looking at the next section about data breaches and safe harbor, one of the important things that happened in the healthcare industry was the publication of the final HIPAA Omnibus Rule on January 17th of this year. The enforcement date is coming up in September. One of the things that the Omnibus Rule really changed was the perspective on data breaches. Gone is what was previously in there, a gray area called the harm threshold.
In replacement of the harm threshold, we have a presumption now that every data breach is a reportable breach event. The only way to establish that it is not a reportable breach event is to complete a four-step risk assessment. You must go through all four steps, and you must document those efforts. You can choose not to do the risk assessment and go ahead and report that breach, but the only way not to report it is to perform this risk assessment.
One of the key areas to look at in that risk assessment is whether that data could have been or was accessed by unauthorized parties. If you have compliant layers of encryption and you can prove that the encryption was on the devices or the data that might have been breached, that's going to allow you to pass that risk assessment and potentially not have to report that breach event. The burden of proof, though, is on the organization where the data originated.
When you're doing that type of risk analysis and vulnerability assessment, or thinking about encryption as a program, please remember your business associates and your key vendors. You really want to document data flow as it leaves your organization. Think about business associates, vendors, contractors, consultants, and anybody else you're sharing regulated and sensitive data with. You want to find a way, whether it's in contracting or in annual security assessments, to ask them how they're addressing the encryption of that data, both in motion and when it's at rest in their facility.
Data in motion, again, is far more mature. You will find a lot of responses in this area about secure FTP sites, encrypted email transfers, TLS, etcetera. The encryption of data at rest really requires some further attention and some greater scrutiny, especially from covered entities toward their business associates. Please feel comfortable asking about this. Remember that as the covered entity, as the healthcare provider, as a payer, or as a clearinghouse, you are the one making the breach notification, not the business partner. It's going to be your name on the OCR breach list as well. It's going to be you that the OCR starts talking to after an event.
Some news in that area. None of what's here on this slide is good news, but the next one really is. There are many, many different headlines that we could tear down and put here. The point is that unencrypted data is being breached regularly. We see way too many of these types of breach notifications coming out, and many of them are extremely expensive. Whether it's the fine assessed by the Office for Civil Rights that's listed here, the reputational damage, or whatever else, there is a significant amount of data at a significant cost being breached that organizations are having to respond to.
Chris: When you think about these types of numbers and compare them to the cost of implementing an encryption program, from buying the technology to committing the resources to implement that encryption, all of a sudden the expense of the program isn't quite as high as you thought it was. These are numbers to keep in mind as you look to fund the program, as you go to the board or subcommittees of the board for approval, and as you look at your technology budgets for next year.
If you look at the first event on the previous page, one of the interesting things about the Alaska breach is that in the settlement agreement, one of the costs passed through to the department in the State of Alaska was having to pay for third-party monitoring of their settlement agreement program. I hadn't seen that before, and I doubt those costs are included in some of the breach response investigations. The breach list, if anybody is interested in looking at it and seeing the results, the reasons, the dates, and the number of people involved, the link is there on the bottom of this slide. It's probably over-publicized, but there's a great download tool so you can do some analysis.
Chris: Encryption, I mentioned it earlier: one of the challenges to putting a decent encryption program in place is key management. Key management can torpedo the best intentions and the best-implemented program. If you don't manage the keys after the fact, you can really undermine all of your efforts to date. Encryption key management is something you should spend some significant time talking to your operational staff about. Make sure that you have policies in place, and supporting procedures to teach staff how to implement all of these mechanisms and what to do when there's a breakdown.
This information here was taken from ISO 27002; I've got the reference on the bottom here. ISO really has some great recommendations for putting a good key management program in place. If you think about the many ways that keys commonly break down during staff turnover, whether it is the key being used on the outbound side or the key you might be using with your vendor, key replacement for personnel turnover is a large issue.
Selecting unique keys across multiple customers is often an issue for those attendees of the webinar who might be technology providers to healthcare. One of the common things we see is the same key used across many, many different clients by these technology providers. That's a high risk. You really want to use unique keys, as often as possible if not every time, for each one of your clients, each one of your vendors, and each one of your partners.
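One way to get unique keys per client without tracking hundreds of independent secrets is to derive each client's key from a single master secret with HKDF (RFC 5869). This is a sketch using only the Python standard library; the names and the master secret are hypothetical, and a production system should keep the master secret in an HSM or key management service rather than in code:

```python
import hmac
import hashlib

def hkdf_sha256(master_key: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF (RFC 5869): derive a distinct subkey from one master secret.
    The `info` parameter binds each derived key to a specific client."""
    prk = hmac.new(salt, master_key, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                   # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

master = b"example-master-secret"  # hypothetical; store in an HSM/KMS in practice
key_client_a = hkdf_sha256(master, b"salt", b"client-A")
key_client_b = hkdf_sha256(master, b"salt", b"client-B")
assert key_client_a != key_client_b  # every client gets a unique key
```

Because derivation is deterministic, a client's key can be regenerated from the master secret and the client identifier, which also simplifies the recovery scenarios discussed earlier.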
There's a lot of workflow that needs to be in place to manage these keys: issuing them, managing them, maintaining them, changing them, destroying them, handling them at the end of a contract period, etcetera. Key management systems are a great challenge. The good news is there are several vendors stepping up to this these days and really providing us with technology and tools that can simplify the process.
They allow you to automate the workflow, to send alarms, to send notification and to do key changes from a centralized console. I know that the industry is greatly in need of those capabilities. There's a lot of information, a lot of bullets on these two slides that are common challenges and things that you should be thinking about during and after the encryption implementation.
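As an illustration of the kind of workflow those consoles automate, a minimal key inventory with rotation-due alerts might look like the sketch below. All key IDs, owners, and rotation intervals are hypothetical, chosen only to show the shape of the record-keeping:

```python
from datetime import date, timedelta
from dataclasses import dataclass

@dataclass
class KeyRecord:
    key_id: str
    owner: str          # the system or client the key protects
    issued: date        # when the key was put into service
    rotation_days: int  # policy-driven rotation interval

def keys_due_for_rotation(inventory: list, today: date) -> list:
    """Return the IDs of keys whose rotation interval has elapsed."""
    return [k.key_id for k in inventory
            if today - k.issued >= timedelta(days=k.rotation_days)]

inventory = [
    KeyRecord("tape-backup-01", "backup unit", date(2012, 1, 2), 365),
    KeyRecord("db-prod-07", "EHR database", date(2013, 5, 20), 90),
]
print(keys_due_for_rotation(inventory, date(2013, 6, 11)))  # ['tape-backup-01']
```

A real key management system layers alerting, approvals, and audit logging on top of exactly this kind of inventory.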
Chris: I've got some information about our organization here if anybody is interested in it (www.riscsecurity.com). I just want to thank everybody. Hopefully, we'll be able to open up the call for a few minutes to respond to any questions that might have come in. If you have anything that is specific to the encryption program, we would love to tackle those right now.
As I said, there are a couple of subsequent webinars coming up that are going to address encryption in storage platforms and then encryption within applications. We could probably move those questions to those webinar presentations. If you have anything else at this time, I think April would be open to taking questions.
April: Super, thanks, Chris. That was a lot of info packed into a very short time, and a great overview. Thanks very much, and thank you for sharing the next two encryption webinars. Those will be next Tuesday at 2:00 ET and the following Tuesday at 2:00 ET. Next week, we're going to focus on Encryption at the Software Level and talk a little bit about approaches to encryption in Linux and Windows environments. The following week, we'll round out our June webinars by talking about Encryption at the Hardware and Storage Levels.
Please, do share any questions that you have for us. Of course, if you come up with any questions for Chris afterwards, feel free to reach out to him and we can also direct any questions you have onto him.
Chris, I did have a couple of questions here. One of them is what are the most common mistakes that you see companies make when they are deploying encryption options?
Chris: Depending on where the encryption is being deployed, the answer might vary. There are different mistakes that are made at different points in rolling out encryption. If you're looking at end user devices or you're thinking about USB keys or portable hard drives, one of the common mistakes there is to choose a solution that can't be monitored from a central location or to not have a decent inventory of the portable media that was handed out.
Therefore, if there is some sort of an event in the future, you're not sure exactly who that device was given to or what the asset tag information or serial numbers for it might be, and you're unable to correlate it back to any type of central console to firmly establish that you have safe harbor and don't need to report that data breach.
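The inventory-and-correlation step Chris describes can be sketched as a simple lookup: given a lost device's serial number, a record must both exist and show the device was issued in an encrypted state before a safe-harbor argument is even possible. Every name and serial number below is made up for illustration:

```python
# Hypothetical inventory mapping device serial numbers to issuance records.
inventory = {
    "USB-4F21": {"issued_to": "jdoe", "asset_tag": "A-1042", "encrypted": True},
    "USB-9C03": {"issued_to": "asmith", "asset_tag": "A-1077", "encrypted": False},
}

def safe_harbor_candidate(serial: str) -> bool:
    """A lost device supports a safe-harbor argument only if we can
    identify it AND show it was issued in an encrypted state."""
    record = inventory.get(serial)
    return bool(record and record["encrypted"])

print(safe_harbor_candidate("USB-4F21"))  # True: known device, encrypted
print(safe_harbor_candidate("USB-9C03"))  # False: known, but unencrypted
print(safe_harbor_candidate("USB-0000"))  # False: not in the inventory at all
```

The third case is the one Chris warns about: without the inventory entry, the encryption may well have been in place, but the organization cannot prove it.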
One of the other problems with portable media is the significant delay that members of the workforce unfortunately often have in reporting the loss of devices like that. The common flow is that they believe they have just misplaced it, or that it's in the seat cushion of their car, and will spend far too long looking for the device.
By the time they actually determine that the device has been permanently lost, or that it was lost on a plane and not in their car, a number of the days you have available to report or address that breach event are already gone. Remember that the clock starts ticking on reporting breaches when the breach was known or should have been known. You can unwittingly burn a lot of that time just looking for the device or thinking that it's not lost.
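The "clock starts ticking" point is easy to make concrete. Here is a hedged sketch, assuming a HITECH-style 60-day window measured from when the loss should have been known; the dates are invented to show how search delays eat into the window:

```python
from datetime import date, timedelta

def reporting_deadline(known_or_should_have_known: date, window_days: int = 60) -> date:
    """The window runs from when the breach was known or should have been
    known, not from when the employee finally gives up searching."""
    return known_or_should_have_known + timedelta(days=window_days)

lost_on = date(2013, 6, 3)         # the device actually went missing
reported_lost = date(2013, 6, 24)  # the employee finally reports it
deadline = reporting_deadline(lost_on)
print(deadline)                         # 2013-08-02
print((deadline - reported_lost).days)  # 39: three weeks of the window already burned
```

Jurisdiction-specific windows (like the five-day California example Chris mentions) just change `window_days`; the lesson about the start date is the same.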
When you talk about encryption mistakes made at other layers, if you think about implementing encryption at the database level or whole disk encryption on a server, common mistakes include keys that are insecure and keys that are used for other support mechanisms at the organization. They might choose the domain administrator password, or they'll choose something that is shared with a vendor or a manufacturer, or they might use the same key from platform to platform to platform. We see that quite a bit.
We also see a lack of a personnel survivability plan in the encryption implementation. If you placed the key into hardware-based encryption in, let's say, your data backup unit, your tape unit, then without that key you cannot recover those tapes. If the person who holds it moves on from the organization, or is gone for any period whatsoever, they might not have a legal obligation to tell you what that key was, and you may be unable to contact them when you do experience an emergency.
Documentation, keeping that documentation up-to-date, ensuring that documentation is part of your disaster recovery plan are challenges that I commonly see. Disaster recovery plans are tough enough to keep up-to-date. When you add these requirements to it, it's just another level that oftentimes doesn't get updated.
April: That makes a lot of sense, very helpful. A couple more questions here for you. Have you heard of man-in-the-middle attacks, and are you aware of specific techniques to avoid them?
Chris: Man-in-the-middle attacks attack the encryption of data while it's in motion. You want to make sure that you're putting in place technologies and tools that are resistant to that type of attack. You want to be thinking about encryption that has been approved and tested. If you go with technologies from vendors who can provide you with certification that they have adhered to all of the NIST requirements, that they have achieved those NIST-level certifications, and you are running the most current supported versions of software or technologies from those vendors, you're typically going to be okay.
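In practice, much of that man-in-the-middle resistance comes down to certificate and hostname verification in the TLS stack. As one example (a sketch, not specific to any product Chris mentions), Python's standard library default SSL context already enforces both checks; the commented lines show the misconfiguration to avoid:

```python
import ssl

# A context built this way verifies the server's certificate chain against
# trusted CAs and checks the hostname, which defeats a basic man-in-the-middle.
context = ssl.create_default_context()
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# What NOT to do: disabling verification silently re-opens the MITM window.
# context.check_hostname = False
# context.verify_mode = ssl.CERT_NONE   # never ship this

# Usage against a live server (network call, shown for illustration only):
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.version())  # the negotiated TLS protocol version
```

The same principle, verify the peer before trusting the channel, applies whatever language or appliance is doing the encryption.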
One of the areas where we see vulnerability to attacks like that is in not keeping those technologies or tools up to date. Take, and I'm not picking on a vendor here whatsoever, an old Cisco VPN concentrator: in a lot of our vulnerability assessments, there were some well-known vulnerabilities in those devices back in the day. Even though those devices have been end-of-lifed, and even though they've been upgraded, whether it's the IOS or the hardware, many times since then, we still run into organizations carrying vulnerabilities that are six, seven, eight years old.
That is not necessarily the responsibility of the vendor. They've done their job to patch it, to make the patch available, and to give ample time. We find those patches not put in place, and we do find those organizations susceptible to things like man-in-the-middle attacks.
April: It all comes back to policies, doesn't it?
Chris: Very much so.
April: One other question here for you: what happens when organizations do not report the loss or theft of a device such as a laptop? Who holds those organizations accountable?
Chris: Obviously, that's a huge risk. Few organizations where the executives, whether it's the board or executive management, are made aware of the fact that there was a breach event choose not to report that breach. If executive management has the data and has been provided with the details, their choice not to report that breach is highly risky and would probably make them personally liable, as well as the organization being liable, for choosing not to report it.
I have not run into a situation where executive management has been fully informed and has chosen not to report a breach. That's why it's key, in your policies and then in the supporting procedures, to make sure that when there is a suspected data breach, before determining whether there has or hasn't been one, the right individuals are part of the team that responds to that incident. They should be requesting information that allows them to make an informed decision. If the organization goes through that required four-step risk assessment process and determines that there hasn't been any harm, that there is a low likelihood of that data having been compromised, I highly recommend you keep that documentation and review it several times. Maybe even consider having outside counsel or a third party go over it with you. That is a lot of risk you're taking on by choosing not to report the breach.
April: Great. Chris, thanks so much. That was a wonderful download of information and a great way to kick-off our encryption webinar series. Thanks again.
Everyone, we look forward to seeing you again next week to learn more about Encryption at the Software Level. Thanks again everyone. Have a great day!
Chris: Thank you, April.
Christopher Heuman CHP, CHSS, CSCS, CISSP - Practice Leader, RISC Management & Consulting
Prior to consulting, Chris Heuman worked in healthcare organizations in an information systems and data security capacity for over 20 years. Chris held increasingly responsible positions in healthcare IT from systems and network administration to project management, infrastructure management and information security. Prior to founding RISC Management, Chris developed consulting programs focused on information security and compliance specifically for healthcare institutions as a Director of Engineering Services at mCurve, and Practice Leader for Compliance and Security at ecfirst. Through his practical experience and certifications as a Certified HIPAA Professional (CHP), Certified Security Compliance Specialist (CSCS) and Certified Information Systems Security Professional (CISSP), Chris is uniquely experienced to assist healthcare organizations in understanding and meeting the myriad compliance and security regulations and requirements they face.
As the Practice Leader at RISC Management, Chris helps healthcare providers and healthcare technology organizations by providing services in the areas of risk analysis, vulnerability assessment, business continuity management and planning, business impact analysis, disaster recovery planning, social engineering tests, data loss prevention, education and training, project management and consensus building at all organizational levels. In addition, Chris has presented training programs in the HIPAA, HITECH, compliance and security space, and has been a featured presenter for statewide healthcare organizations, for Health Information Exchanges, as a guest speaker for MBA programs, and has delivered tailored training to dozens of healthcare-related organizations and accreditation bodies.
For more information, Chris can be contacted at Chris.Heuman@RISCsecurity.com or through www.RISCsecurity.com.