Encryption of Cloud Data

Download the Encryption of Cloud Data White Paper (PDF)

View the full white paper below.

1.0. Executive Summary
2.0. Encryption for Compliance 
     2.1. HIPAA Encryption
     2.2. PCI DSS Encryption
     2.3. SOX Encryption
3.0. Encryption Guidelines
     3.1. Find, Classify and Determine What to Encrypt
     3.2. Encryption Guidelines
          3.2.1. Data at Rest
          3.2.2. Data in Transit
4.0. Choosing Encryption Techniques
      4.1. Storage-Level Encryption
          4.1.1. Full Disk Encryption (FDE) or Whole Disk Encryption (WDE) 
          4.1.2. Virtual Disk/Volume Encryption 
          4.1.3. File/Folder Encryption
      4.2. Database-Level Encryption
      4.3. Application-Level Encryption
5.0. Keys
6.0. Encryption in the Cloud
     6.1. Outsourcing the Encrypted Cloud
     6.2. Considerations
7.0. Conclusion
8.0. References
     8.1. Encryption Glossary
     8.2. Data Center Audits Cheat Sheet
9.0. Contact Us

1.0. Executive Summary

Organizations seeking to protect sensitive and mission-critical data quickly realize that there is no single answer to keep all systems completely secure. Online data security is a complex, rapidly evolving landscape, requiring robust and layered protections. Encryption is one tool in a comprehensive defense-in-depth strategy to mitigate the risk of accidental and intentional data breaches.1

Like every other technology tool, encryption must work within a broader digital ecosystem without disrupting the purpose the system was designed to fulfill. Evaluating the benefits of encryption against potential tradeoffs such as cost, performance, and ongoing maintenance is at the heart of determining the most effective and efficient means of using encryption.

First, a few basics:

What is encryption?
Encryption takes plaintext (your data) and encodes it into unreadable, scrambled text using mathematical algorithms, effectively rendering data unreadable unless a cryptographic key is applied to convert it back. Encryption ensures data security and integrity, even if the data is accessed by an unauthorized user, provided the encryption keys have not been compromised. Encryption can protect data in motion, referred to as encryption in transit or encryption in flight, as well as data at rest, meaning data in storage. Encryption often occurs at multiple levels of a system, appropriate to the context of use and other system components.
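
As a minimal, hedged illustration of the plaintext-to-ciphertext round trip described above, the sketch below uses the third-party Python cryptography library (an assumption; any vetted AES implementation would serve the same purpose):

  from cryptography.fernet import Fernet

  # Generate a random symmetric key; in practice the key must be stored and
  # managed securely (see section 5.0. Keys).
  key = Fernet.generate_key()
  cipher = Fernet(key)

  plaintext = b"Patient: Jane Doe, MRN 000000"   # illustrative data only
  ciphertext = cipher.encrypt(plaintext)         # unreadable without the key
  print(ciphertext)

  # Only a holder of the key can recover the original data.
  assert cipher.decrypt(ciphertext) == plaintext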

Why use encryption?
Encryption is considered a best practice for any security-conscious organization, including those that need to meet specific industry compliance requirements such as HIPAA for healthcare, PCI DSS for ecommerce and retail, and SOX for financial reporting. Data breaches are recurring and increasing, particularly in the healthcare industry, which reports an estimated $7 billion in losses due to data breaches. Even organizations that determine their risk of data loss is minimal often choose encryption to mitigate the risk of having to report a data breach, since the loss of encrypted data may not be considered a reportable event if the encryption keys remain safe.

The increased cyber threats of hackers and data theft present a strong case for employing encryption and infrastructure that secures data while delivering strong computing performance for optimal data availability and reliability. This white paper discusses different types of encryption, including using encryption in the cloud.

Although encryption is not a silver bullet for data or system security, it is one key tool that can be accompanied by a full arsenal of security services in a layered-defense approach to ensuring data is protected, even if accessed by unauthorized individuals. Additional security options to add to your IT solution are also covered.

2.0. Encryption for Compliance

2.1. HIPAA Encryption

Healthcare organizations (also called covered entities or CEs), business associates (BAs) and subcontractors that support the facilitation, processing or collection of protected health information (PHI) must comply with the Health Insurance Portability and Accountability Act (HIPAA) and the HITECH Act, enforced by the U.S. Department of Health & Human Services (HHS).

With addressable encryption implementation specifications, both covered entities and business associates must consider implementing encryption as a method for safeguarding electronic protected health information (ePHI). If encryption is not used by a covered entity or business associate, clear documentation of the risk analysis, the decision not to encrypt, and the specifics of an equivalent level of protection must be in place to prove due diligence in protecting ePHI.

Even when equivalent protection is possible, many organizations opt to encrypt to save on the costs of publicly announcing and remediating a data breach, as encrypted data is not considered compromised if the encryption keys remain safe.2 This means a stolen laptop with patient records is not a reportable event if the PHI is encrypted and the encryption keys remain safe. The costs of a data breach involving ePHI are one of many reasons data encryption is considered a best practice and a sound investment for IT security.

A Ponemon Institute study, Patient Privacy & Data Security, found that the average economic impact of a data breach has increased by $400,000 since 2010, to a total of $2.3 million.3

[Figure: Economic impact of a data breach]

Source: The Ponemon Institute, Patient Privacy & Data Security

The economic impact can include investigation and legal fees, federal penalty costs, business loss due to downtime, decreased credibility and remediation or free credit monitoring for affected individuals. Costs will only continue to increase as the healthcare industry increases its reliance on electronic medical record systems (EMRs) and electronic health records (EHRs).

Unsecured, or unencrypted, data is a recurring theme in healthcare data breaches. Consider these data breach cases, all of which involve unencrypted data and devices:

  • The Alaska Medicaid program was fined $1.7 million after a breach resulting from the theft of an unencrypted USB device that contained just 501 patient records.4
  • Massachusetts Eye and Ear Associates was fined $1.5 million after a breach resulting from the theft of an unencrypted laptop containing records of about 3,600 of its patients and research subjects.5
  • Advocate Health Care lost four million patient records when four unencrypted computers were stolen from its facility; the second-largest health data breach since 2009.6
  • TRICARE Management had unencrypted backup tapes stolen, affecting 4.9 million individuals; the largest health data breach to date.7

Only unencrypted data falls under the scope of the HIPAA Breach Notification Rule, which requires patient, media and Department of Health and Human Services notification. This means that if you encrypt your data, you do not have to report a data breach to the government unless you have reason to believe the encryption keys were compromised.

While encryption is “addressable” under HIPAA, it is highly recommended. OCR (Office for Civil Rights) Director Leon Rodriguez has been quoted as saying: “…Encryption is an easy method for making lost information unusable, unreadable and undecipherable.” We will not debate whether encryption is “easy” in this white paper, but it is key to recognize that the Department of Health and Human Services, which investigates HIPAA violations and enforces penalties through its Office for Civil Rights, considers encryption well within reach, with the burden of proof on any organization that chooses not to implement it.

The HIPAA Security Rule for healthcare organizations handling electronic protected health information (ePHI) dictates that organizations must:

In accordance with §164.306… Implement a mechanism to encrypt and decrypt electronic protected health information. (45 CFR § 164.312(a)(2)(iv))

HIPAA also mandates that organizations must:

§164.312(e)(2)(ii): Implement a mechanism to encrypt electronic protected health information whenever deemed appropriate. Protecting ePHI at rest and in transit means encrypting not only data being collected or processed, but also data stored or archived as backups.

HIPAA Encryption of Data at Rest
The HIPAA Breach Notification Rule8 provides guidance on encryption, stating that the proper standards for encrypting data at rest are aligned with the NIST (National Institute of Standards and Technology) Special Publication 800-111, Guide to Storage Encryption Technologies for End User Devices.9 The NIST standard approves of the AES algorithm (Advanced Encryption Standard).

Data at rest can include databases, file shares, workstations, laptops, tablets, iPads, phones, USB drives, flash drives, data backup tapes, CDs and DVDs, cameras and external hard drives.10

The SANS Institute, which specializes in Internet security training, recommends employing whole-disk or folder-level encryption for ePHI. For ePHI in databases, it recommends full-database or column-level encryption. Additionally, key management procedures and processes that enforce separation of duties and least privilege for users and applications are important for ePHI at rest. Lastly, it recommends hashing or digitally signing all stored ePHI.11
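
As a hedged illustration of that last recommendation, the sketch below computes and verifies a keyed hash (HMAC-SHA256) over a stored record using Python's standard hmac and hashlib modules; the record contents and key handling shown are hypothetical:

  import hashlib
  import hmac

  # Integrity key; in practice it would come from a key management system.
  integrity_key = b"replace-with-a-securely-managed-key"

  def sign_record(record: bytes) -> str:
      # Compute an HMAC-SHA256 tag over the stored record.
      return hmac.new(integrity_key, record, hashlib.sha256).hexdigest()

  def verify_record(record: bytes, tag: str) -> bool:
      # Constant-time comparison guards against timing attacks.
      return hmac.compare_digest(sign_record(record), tag)

  record = b"MRN 000000 | lab result | 2013-10-01"   # illustrative stand-in for ePHI
  tag = sign_record(record)
  print(verify_record(record, tag))                  # True
  print(verify_record(record + b" tampered", tag))   # False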

HIPAA Encryption of Data in Transit
For data in transit (also referred to as data “in flight”), complying with Federal Information Processing Standard (FIPS) 140-2 also means following the standards described in NIST Special Publications 800-52, 800-77 and 800-113.12 Data in transit crosses the Internet, wireless networks, application tiers, or wired or wireless connections without being stored. Data in transit remains in a non-persistent state; it is not being written to disk or other media or otherwise retained.

The SANS Institute recommends implementing a VPN (Virtual Private Network) using either IPsec or SSL for all remote systems that need to transmit ePHI, and implementing encryption for all systems and users that may need to email ePHI. Many organizations add two-factor authentication, such as Duo Security, to their VPN login process to further protect data and prevent unauthorized use.

HIPAA Encryption in the Cloud and Data Center
Encrypting data at rest and in transit is key to protecting ePHI. Encryption should be addressed both in transit, through SSL connections or VPN tunnels, and at rest, at the disk level. If IT infrastructure is outsourced, read the details of the provider's audit reports and confirm that the hosting provider will sign a Business Associate Agreement (BAA).

Reviewing the hosting provider’s data breach insurance policy can also be a key indicator of the level of attention and priority given to preventing data breaches. HIPAA compliant data centers should be able to turn over a complete HIPAA risk assessment performed against the OCR HIPAA Audit Protocol guidelines, or equivalent documentation of controls, to prove thorough due diligence in protecting sensitive data.

Particularly in the cloud, encryption is an important aspect of keeping data safe and in compliance with the HIPAA Security Rule. A HIPAA compliant cloud should provide encryption of data at rest, including data that is stored as backups and archived as part of an IT disaster recovery plan. The cloud should also provide encryption of data in transit for complete security.

Read our HIPAA Compliant Hosting White Paper for details on achieving compliance with health IT, including a diagram of a HIPAA compliant infrastructure and what to look for in a HIPAA hosting provider.

2.2. PCI DSS Encryption

The PCI DSS (Payment Card Industry Data Security Standard) requires companies that deal with credit cardholder data to employ encryption or other technology to render data unreadable:

3.4 Render PAN (Primary Account Number) unreadable anywhere it is stored (including on portable digital media, backup media, and in logs) by using any of the following approaches:

  • One-way hashes based on strong cryptography (hash must be of the entire PAN)
  • Truncation (hashing cannot be used to replace the truncated segment of PAN)
  • Index tokens and pads (pads must be securely stored)
  • Strong cryptography with associated key-management processes and procedures

3.4.1.c Verify that cardholder data on removable media is encrypted wherever stored.
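
To make the first two approaches listed above concrete, here is a hedged Python sketch of salted one-way hashing and truncation of a PAN; the example number is a well-known test value, and the salt handling is an illustrative assumption rather than something prescribed by the standard:

  import hashlib
  import secrets

  pan = "4111111111111111"        # well-known test card number, not real data

  # One-way hash of the entire PAN using strong cryptography (salted SHA-256).
  salt = secrets.token_bytes(16)  # store the salt separately and securely
  pan_hash = hashlib.sha256(salt + pan.encode()).hexdigest()

  # Truncation: retain at most the first six and last four digits.
  pan_truncated = pan[:6] + "*" * (len(pan) - 10) + pan[-4:]

  print(pan_hash)                 # stored in place of the PAN
  print(pan_truncated)            # 411111******1111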

When it comes to data storage, the PCI Security Standards Council (SSC) recommends that merchants render cardholder data unreadable by using strong cryptography, and to use other layered security technologies to minimize risk of criminal exploits. They also warn against locating servers or other payment card system storage devices outside of a locked, fully-secured and access-controlled room.13

One way to achieve this is by partnering with a PCI hosting provider that can meet the physical and technical requirements of the standard by maintaining a PCI compliant data center. Physical security should include locked server racks, suites and cages, and dual-identification control access to data centers. Environmental control can be achieved with 24x7 monitoring, logged surveillance and alarm systems.

The PCI SSC specifically recommends ‘strong cryptography,’ which it defines as cryptography based on industry-tested and accepted algorithms, key lengths and proper key-management practices. The Council calls out AES (128 bits and higher) as an example of strong cryptography14 and refers technical users to NIST Special Publication 800-57 (three parts) for recommendations on key management.15

P2PE (Point-to-Point Encryption)
Merchants that transmit cardholder data from a POS (Point of Sale) terminal to a payment processor should use point-to-point encryption (P2PE) to secure data and reduce the risk of unauthorized interception during transmission. The PCI SSC has released a very detailed document outlining infrastructure encryption requirements; at a high level, the six main areas that must be secured within SCDs (secure cryptographic devices, i.e., the hardware/hardware level) are:

  1. Encryption Device Management: Use secure encryption devices and protect devices from tampering. Requirements include building/using PCI-approved POI devices (Point of Interaction, or the device that accepts PINs), and securely managing equipment used to encrypt account data. The POI device should be managed by the solution provider and hardware encryption should be performed by the device.
  2. Application Security: Secure applications in the P2PE environment. Requirements include protecting PAN/SAD (Primary Account Number and Sensitive Authentication Data); developing and maintaining secure apps; and implementing secure app-management processes. Apps should be on PCI-approved POI devices.
  3. Encryption Environment: Secure environments where POI devices are present. Requirements include not storing cardholder data (CHD) after transactions are complete, securing POI devices throughout their lifecycle, implementing secure device management processes and maintaining a P2PE Instruction Manual (PIM) for merchants.
  4. Segmentation between Encryption and Decryption Environments: Segregate duties/functions between encryption and decryption environments. Requirements include having all decryption operations managed by the solution provider; the merchant has no access to the decryption environment and no involvement in decryption operations.
  5. Decryption Environment and Device Management: Secure decryption environments/devices. Requirements include using only approved decryption devices; securing all decryption systems and devices; implementing secure device-management processes; and maintaining a secure decryption environment. The merchant must also have no access to the decryption environment, and the decryption environment must be PCI DSS compliant.
  6. Cryptographic Key Operations: Use strong cryptographic keys and secure key management functions. Requirements include using secure encryption and key-generation methods, and distributing, loading, using and administering cryptographic keys in a secure manner.

See the PCI Point-to-Point Encryption: Solution Requirements and Testing Procedures v1.1.1 Encryption, Decryption and Key Management within SCDs (Hardware/Hardware) document for more details and testing procedures to help you build and verify a valid P2PE solution that meets the PCI SSC standards for hardware encryption.16

Read our PCI DSS Compliant Hosting White Paper for details on what a PCI DSS compliant IT infrastructure should entail, including technical services that can help fulfill the requirements of the standard for the ecommerce and retail industry.

2.3. SOX Encryption

SOX, or the Sarbanes-Oxley Act, was created to protect sensitive financial reporting data in public companies. While encryption is not explicitly required, it can help satisfy requirements such as the following:

DS5.11 Exchange of Sensitive Data: Exchange sensitive transaction data only over a trusted path or medium with controls to provide authenticity of content, proof of submission, proof of receipt, and non-repudiation of origin.

DS11.6 Security Requirements for Data Management: Define and implement policies and procedures to identify and apply security requirements applicable to the receipt, processing, storage and output of data to meet business objectives, organizational security policy, and regulatory requirements.

DS5.8 Cryptographic Key Management: Determine that policies and procedures are in place to organize the generation, change, revocation, destruction, distribution, certification, storage, entry, use and archiving of cryptographic keys to ensure the protection of keys against modification and unauthorized disclosure.

The SANS Institute recommends more general encryption best practices, such as performing remote management over secure encrypted channels (SSH, SSL and IPsec); encrypting security device log data at rest and in transit; using dedicated key storage devices and applications; role-based key access policies; key recovery procedures; and guidance on key handling in different environments.17

3.0. Encryption Guidelines

3.1. Find, Classify and Determine What to Encrypt

Determining what kind of data to encrypt is the first step toward protecting it. Your organization needs to determine where its data is located, how sensitive it is and what is in its data repositories in order to protect that data with encryption.

Classifying your data as regulated, meaning data that falls under data protection laws such as HIPAA or PCI DSS, can clarify what you should encrypt. A few examples of regulated data include personally identifiable information, patient data and credit cardholder data. It can be inefficient to attempt to encrypt everything; finding the sensitive data that is transmitted as network traffic or found in file shares, databases and endpoints helps you pinpoint where encryption is necessary.

Non-centralized and unstructured repositories such as Word documents, spreadsheets and text files are also at risk of being overlooked and left unsecured. A recent healthcare data breach at Oregon Health & Science University (OHSU) was the result of several health departments using unencrypted Google spreadsheets to maintain and exchange patient information.18 Data sharing outside of the centralized database can lead to insecure practices.

Confidential data is another category your data may fall under; this might include client contracts, purchasing agreements and the like. While a breach of this type of data may not get you fined by federal law or regulatory entities, your organization likely does not want these documents available to the public. Non-public data may be employee-related, such as salary history, payroll and leave records. Public information is data available on your website, such as marketing materials intended to be distributed publicly.

After classifying your data, you should be able to determine which categories of data you want to encrypt. Creating an asset inventory that correlates classified data to specific devices, disks, storage area networks and servers is the next step as you consider where to employ encryption.

3.2. Encryption Guidelines

3.2.1. Data at Rest

Data at rest, or data in storage, may reside on end user devices such as personal computers, mobile devices and removable storage media. Encryption for stored data can be applied to one individual file containing sensitive data or applied to all stored data, as previously mentioned.

NIST explains some of the high-level security controls required for secure and encrypted stored data:

Authentication
Encrypted stored data requires users to verify their identity and authenticate to gain access by means of an authentication method, such as passwords, personal identification numbers (PINs), biometrics or tokens.
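
As one hedged example of password-based authentication guarding access to encrypted storage, the sketch below derives and verifies a salted password hash with Python's standard hashlib; the iteration count and storage details are illustrative assumptions:

  import hashlib
  import hmac
  import os

  def hash_password(password: str, salt: bytes = None):
      # Derive a slow, salted hash that can be stored instead of the password.
      salt = salt or os.urandom(16)
      digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
      return salt, digest

  def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
      _, candidate = hash_password(password, salt)
      return hmac.compare_digest(candidate, stored)

  salt, stored = hash_password("correct horse battery staple")
  print(verify_password("correct horse battery staple", salt, stored))  # True
  print(verify_password("guess", salt, stored))                         # False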

Data Backups
While storing backups at an offsite location is good practice in case of a disaster, those backups must also be encrypted and secured. Traditional tape backup puts you at risk of physical loss of data. Using a cloud-based disaster recovery solution eliminates tape backup and virtualizes the entire operating system, applications, patches and data.

Encrypting sensitive data before it is sent to an offsite backup facility is another way to ensure data is secure. NIST recommends any local backups should also be encrypted and physically secured within the organization’s facilities.

End User Devices
Securing and maintaining device operating systems, apps and wired or wireless network traffic should also help reduce the risk of a data breach.

See section 4.0. Choosing Encryption Techniques for a more technical overview of encrypting data at rest.

3.2.2. Data in Transit

Although encryption of data at rest is the main focus for security, since it is more likely that a hacker will target systems with data on local drives or storage networks, encrypting data in transit is also important to avoid potential interception by unauthorized persons. Data in transit may include data traveling across the Internet, wireless networks, between tiers within an application, or across wired or wireless connections in a non-persistent state.

Complying with Federal Information Processing Standard (FIPS) 140-2, which includes the standards described in NIST SP 800-52, -77 and -113, can provide acceptable encryption for data in transit for the healthcare industry; the same standards also work for other industries.19

Virtual Private Network (VPN)
When using a VPN (Virtual Private Network), you are connecting one network to another in order to access a server remotely, which means you are also transmitting data that needs to be encrypted. VPNs can use a variety of secure protocols to transmit, or tunnel, traffic from one network to another securely. The data is encrypted while it is sent from one network to another; the VPN server decrypts the data and forwards it to the receiving server.

Two-Factor Authentication
Two-factor authentication for VPN (Virtual Private Network) access is an optimal security measure to protect against online fraud and unauthorized access for clients that connect to their networks from a remote location.

Two-factor authentication (also known as dual-factor or multi-factor authentication) requires one form of authorization (username/password) plus an additional form of authentication to gain remote access to a network. Two-factor authentication provides an extra layer of protection to ensure the user is truly the one allowed access to the network and to protect against unauthorized entry.
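
A common second factor is a time-based one-time password (TOTP). As a hedged sketch, the example below assumes the third-party pyotp library; the enrollment flow and secret handling are purely illustrative:

  import pyotp

  # Provisioned once per user and stored server-side; the user enrolls the
  # same secret in an authenticator app (assumed flow).
  secret = pyotp.random_base32()
  totp = pyotp.TOTP(secret)

  print("Current one-time code:", totp.now())

  # At VPN login, the server checks the submitted code in addition to the password.
  submitted_code = totp.now()         # stand-in for the user's input
  print(totp.verify(submitted_code))  # True within the validity window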

SSL Certificates
SSL (Secure Sockets Layer) is a cryptographic protocol that provides security as information is transmitted over the Internet. When a browser tries to connect to a website secured with SSL, the browser first requests that the web server identify itself. After the server sends a copy of its SSL certificate, the browser checks the certificate's credentials and approves it. The server then sends a digitally signed acknowledgement to start an encrypted SSL session, after which encrypted data is shared between the browser and server.20
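
Outside the browser, the same handshake and certificate check can be performed programmatically. The hedged sketch below uses Python's standard ssl and socket modules against a hypothetical host name; the certificate is verified before any application data is sent:

  import socket
  import ssl

  hostname = "secure.example.com"   # hypothetical server

  # The default context loads trusted CA certificates and enforces hostname checks.
  context = ssl.create_default_context()

  with socket.create_connection((hostname, 443)) as sock:
      with context.wrap_socket(sock, server_hostname=hostname) as tls:
          print("Negotiated protocol:", tls.version())
          print("Server certificate subject:", tls.getpeercert()["subject"])
          # From here on, anything written to tls is encrypted in transit.
          tls.sendall(b"GET / HTTP/1.0\r\nHost: " + hostname.encode() + b"\r\n\r\n")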

SFTP
Another method for securing data in transit is SFTP, the SSH File Transfer Protocol, which allows secure file transfers between hosts as well as access to and management of files on remote file systems. SFTP ensures that both the data being transferred and your login credentials are encrypted.
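
As a hedged illustration, the sketch below uploads a backup file over SFTP using the third-party paramiko library; the host, credentials and paths are hypothetical placeholders:

  import paramiko

  host = "sftp.example.com"    # hypothetical remote host
  username = "backup-user"     # hypothetical credentials
  password = "replace-me"

  # The SSH transport encrypts both the login and the file contents in transit.
  transport = paramiko.Transport((host, 22))
  transport.connect(username=username, password=password)

  sftp = paramiko.SFTPClient.from_transport(transport)
  sftp.put("nightly-backup.tar.gz", "/backups/nightly-backup.tar.gz")
  sftp.close()
  transport.close()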

To create a strong layered security solution, use both a VPN and an SSL connection. A VPN can transmit data securely even for protocols that don’t have encryption built in; while the file itself isn’t encrypted, the entire path the file traverses is encrypted.

4.0. Choosing Encryption Techniques

4.1. Storage-Level Encryption

4.1.1. Full Disk Encryption (FDE) or Whole Disk Encryption (WDE)

Full disk encryption (FDE) is the encryption of all data on the hard drive used to boot a computer. NIST describes software-based FDE as functioning as follows:

When FDE software is installed on a computer, the computer’s master boot record (MBR, the code that determines which software is executed when the computer boots from the media) is redirected from the computer’s primary operating system to a pre-boot environment (PBE). This PBE controls access to the computer and requires some type of authentication (username/password) before decrypting and booting the OS.

NIST notes that there may be a marginal delay in opening or saving files as the FDE software transparently decrypts and encrypts parts of the hard drive as needed. FDE software can also conflict with other disk-level software that stores code in the same space as the PBE.

For a hardware solution, FDE can be built into the hard drive's disk controller. Hardware-based FDE cannot be disabled or removed from the drive; the encryption code and authenticators, including passwords and keys, are also stored on the hard drive. According to NIST, since the OS plays no role in encryption and decryption, there is typically very little performance impact.21

Additionally, software-based FDE can be centrally managed, while hardware-based FDE can typically only be managed locally, which can make key management more resource-intensive for hardware-based FDE.

Windows Disk Encryption
For Microsoft disk encryption, EFS (Encrypting File System) is a feature of Windows that uses AES to encrypt data at rest. Another service, available in Windows Server 2008 and later, is Windows BitLocker, which allows you to encrypt your hard drives. BitLocker uses 128-bit or 256-bit AES encryption, which is recommended by NIST. BitLocker helps guard against data theft or exposure from lost, stolen or improperly decommissioned computers.23 Although BitLocker ships as part of the operating system, it is not enabled until it is initialized using the setup wizard.

Linux Disk Encryption
For Linux disk encryption, LUKS (Linux Unified Key Setup) is the standard platform; LUKS volumes can also be opened on Windows using FreeOTFE. LUKS uses dm-crypt, the disk encryption subsystem within the Linux kernel. An advantage of LUKS is that you can encrypt individual partitions, such as the partition on which the data lives. If the system were stolen and the hard drive swapped out, the data would still be secure.

4.1.2. Virtual Disk/Volume Encryption

With virtual disk encryption, a container that holds many files and folders is encrypted. After authentication, the container is typically mounted as a virtual disk. The container is a single file that resides within a logical volume; examples of logical volumes include the boot, system and data volumes on a personal computer.

Volume encryption is the process of encrypting an entire logical volume. Examples include volume-based removable media like USB flash drives and external hard drives.

4.1.3. File/Folder Encryption

With file encryption, individual files are encrypted on a storage medium; folder encryption works the same way at the folder level. The difference between virtual disk/volume encryption and file/folder encryption is that a container is completely encrypted and no data can be viewed until after decryption, whereas file/folder encryption allows anyone with access to the file system to read titles and other metadata for the encrypted files and folders. According to NIST, common options for customizing file/folder encryption include the following (a minimal sketch appears after the list):

  • Allowing the user to designate which files and folders should be encrypted
  • Automatically encrypting:
    • Administrator-designated folders
    • Certain file types, denoted by a particular file extension (e.g., .doc or .png)
    • All files written by particular applications
    • All data files for certain users
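
As a hedged sketch of the "certain file types" option above, the example below encrypts every file with a designated extension beneath a folder, again assuming the third-party cryptography library; the folder path and key handling are hypothetical:

  from pathlib import Path
  from cryptography.fernet import Fernet

  key = Fernet.generate_key()    # in practice, retrieved from key management
  cipher = Fernet(key)

  def encrypt_by_extension(folder: str, extension: str) -> None:
      # Encrypt each matching file, marking the result with a .enc suffix.
      for path in Path(folder).rglob(f"*{extension}"):
          ciphertext = cipher.encrypt(path.read_bytes())
          path.with_name(path.name + ".enc").write_bytes(ciphertext)
          path.unlink()          # remove the plaintext original

  encrypt_by_extension("/data/reports", ".doc")   # hypothetical folder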

4.2. Database-Level Encryption

Database-level encryption is the process of encrypting data as it is written to and read from a database. This level of encryption can protect against storage media theft and database-level attacks, but it does not encrypt data transmitted over networks. Because the encryption function is implemented within the database management system (DBMS), database-level encryption also does not protect against application-level attacks.22

Typically, database-level encryption is done at the column level within a database table. As an alternative to whole-database encryption, column encryption encrypts individual columns of data. Column-level encryption can hide the data in each cell or data field of a particular column from user groups that don’t have access to the entire data table.

Column-level encryption allows you to protect columns in databases that exist on different platforms. Other methods of database encryption offer more granular approaches, from row, cell and table space encryption to file-level encryption, each requiring authentication.

Encrypting the entire database is easier and faster to implement than encrypting select data; however, it is very resource-intensive. It may work well for a small database, but it could slow down a large database due to increased demands on system resources. In SQL Server 2008, when the entire database is encrypted, the entire database needs to be decrypted before you can access or query it.

4.3. Application-Level Encryption

Encrypting data at the application level allows for more granular and custom encryption of data: the application can identify where sensitive data resides and what to encrypt, and it has insight into which users have what kind of access.

How does it work? A user connects to their company’s system via a VPN and then sends data to an SSL-protected website. After the files are uploaded, they are encrypted on the system’s servers. The application server decrypts the contents of the files and determines how to store the data in the database.
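
As a hedged, simplified sketch of application-level encryption, the example below encrypts a sensitive field inside the application before writing it to a database, using Python's standard sqlite3 module and the third-party cryptography library; the schema, field and data are illustrative:

  import sqlite3
  from cryptography.fernet import Fernet

  key = Fernet.generate_key()    # in practice, held by the application's key manager
  cipher = Fernet(key)

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE patients (name TEXT, ssn_encrypted BLOB)")

  def store_patient(name: str, ssn: str) -> None:
      # The application encrypts the sensitive field before it reaches the database.
      conn.execute("INSERT INTO patients VALUES (?, ?)",
                   (name, cipher.encrypt(ssn.encode())))

  def load_ssn(name: str) -> str:
      # Only the application, which holds the key, can decrypt the stored value.
      row = conn.execute("SELECT ssn_encrypted FROM patients WHERE name = ?",
                         (name,)).fetchone()
      return cipher.decrypt(row[0]).decode()

  store_patient("Jane Doe", "000-00-0000")   # illustrative data only
  print(load_ssn("Jane Doe"))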

Application-level encryption is still at risk from hackers who may use development tools to gain access to encryption keys, decrypt or turn off encryption, and gain access to data within the application. Application-level encryption also does not protect against database attacks. Encryption at the application layer can also affect the performance of the application, particularly for applications that need to process large volumes of data at a fast rate.

5.0. Keys

Cryptographic keys are essential to data encryption security; your data security is only as strong as your key management. Keys need to be protected against modification and unauthorized disclosure, and preserved for the entire lifetime of the data they protect. NIST defines the different types of cryptographic keys as follows:

  1. Public Signature Key – This is one of the keys in the key pair used by an asymmetric (public) algorithm to verify digital signatures.
  2. Private Signature Key – The other key in the key pair used by an asymmetric (public) algorithm to generate digital signatures.
  3. Public Authentication Key – Used in an asymmetric (public) key pair, the public authentication key provides assurance of the identity of the originating entity.
  4. Private Authentication Key – Used in an asymmetric (public) key pair, the private authentication key provides assurance of the identity of the originating entity.
  5. Symmetric Authentication Key – Used with symmetric key algorithms to provide source authentication and protect data integrity.

For a full list of the different types of keys, refer to the NIST Special Publication 800-57, Recommendation for Key Management: Part 1: General (Revision 3).24

From a high-level perspective, key management, including policies and procedures for key use, is very important for encryption effectiveness. Some of the basic key policies that should be addressed by your organization are:25

  • How are keys stored? For Full or Whole Disk Encryption, cryptographic keys are stored securely on the hard drive
  • Ensure unique keys are provided by technology vendors – often the same keys are used across multiple organizations, which can present a major risk
  • Different keys should be generated for different cryptographic systems and different applications
  • Key distribution and how keys should be activated when received
  • Key storage, as well as how authorized users obtain access to the keys
  • Changing or updating keys, and rules on when keys should be changed
  • Dealing with keys that have been compromised – either accessed by unauthorized users or manipulated in any way
  • Key recovery in the event that keys are lost or corrupted, as part of your business continuity and IT disaster recovery plan
  • Archiving, destroying, logging of keys and key management activities

For secure key management systems, the following precautions and best practices should be followed (a minimal sketch of separating key-encrypting keys from data keys appears after the list):

  • Fully automated key management – cutting down on potential key exposure by personnel
  • No key should appear unencrypted
  • Keys should be randomly chosen from the entire key space and preferably by hardware
  • Key-encrypting keys are separate from data keys, and no data ever appears in plaintext that was encrypted using a key-encrypting key. (A key-encrypting key is used to encrypt other keys, securing them from disclosure)
  • Keys with a long life should be rarely used, as the more a key is used, the greater the opportunity for a hacker to discover the key
  • Keys should be changed frequently to increase the effective key length of an algorithm
  • Keys transmitted should be sent securely to authenticated users
  • Key generating equipment should be physically and logically secure during installation, operation and removal from service

6.0. Encryption in the Cloud

According to a Ponemon Institute study on Encryption in the Cloud, about half of the respondents transfer sensitive or confidential data to a cloud environment, including regulated data that falls under compliance laws.26 Yet, when it comes to security responsibilities, 44 percent believe the cloud provider has primary responsibility for protecting sensitive or confidential data in the cloud environment, while 30 percent believe it is up to the cloud consumer.

[Figure: Who is responsible for protecting data in the cloud?]

Source: The Ponemon Institute, Encryption in the Cloud

As the Center for Democracy and Technology stated in a FAQ on HIPAA and cloud computing:

Cloud computing outsources technical infrastructure to another entity that essentially focuses all its time on maintaining software, platforms or infrastructure. But a covered entity such as a health care provider still remains responsible for protecting PHI in accordance with the HIPAA Privacy and Security Rules, even in circumstances where the entity has outsourced the performance of core PHI functions.27

The point is that data security in the cloud is still the responsibility of both the organization and its cloud service provider, and under certain compliance standards the organization is held liable in the case of a data breach.

However, gaining transparency into your cloud provider’s environment can help you determine how secure your sensitive data will be in your cloud infrastructure. While some providers do not offer built-in encryption, others may provide encryption of both data at rest (in storage) and data in transit, so you don’t have to seek another vendor for a software-based cloud encryption solution to complete the data security package.

6.1. Outsourcing the Encrypted Cloud

This paper discusses outsourcing under the infrastructure-as-a-service (IaaS) cloud model, in which a cloud service provider supplies the hardware, networking and maintenance while the application is managed by the organization. As the CDT stated, the outsourced cloud is beneficial for many reasons:

  • The cloud offers faster computing performance, capacity, flexibility and security at lower costs.
  • Cloud providers allow organizations to focus resources on their core business, not IT.
  • For companies with limited IT staff and budget, outsourcing allows them to take advantage of a cloud provider’s investments in software and hardware upgrades.
  • For companies that require storage and resource-intensive support (i.e. medical imaging documents and applications), the cloud can quickly scale to meet unexpected demands.
  • An encrypted cloud offers protection of data in transit and at rest; depending on the technology used, it can provide security without affecting performance.

6.2. Considerations

What should you look for if you plan to outsource your cloud infrastructure to an encrypted cloud hosting provider? For complete assurance of your data’s security in the cloud, check for the following:

  1. Encryption. Ask if the provider offers encryption of data at rest and in transit, and how it is implemented, to avoid having to spend on added software-based encryption that may slow your cloud down.
  2. Audit Reports on Compliance. For general assurance of a cloud provider’s data center facility security, check for an SSAE 16 or SOC 2 report (see section 8.2. Data Center Audits Cheat Sheet for details). For industry-specific compliance, check for a HIPAA or PCI DSS Report on Compliance (ROC), as well as the dates of the last audit.
  3. Policies and Procedures. Ask about your cloud provider’s policies and procedures around data breach notification, data access, technical services for compliance, data termination after contract end and more.
  4. Private Clouds. With a private cloud, you can ensure your resources are dedicated to only your organization and always available for you when you need them. Some public clouds allocate resources to other tenants on a first-come, first-served basis.
  5. Disaster Recovery and Offsite Backup. For compliance and best practice, establishing a backup and complete disaster recovery (DR) plan can help recover systems in the event of a natural disaster or other unforeseen business disruption. Ask your cloud provider what kind of DR and backup services they can integrate with your service.

Our Encrypted Cloud Solution
Online Tech’s cloud encrypts data at rest at the drive level within a storage area network (SAN) provided by EMC, called the VMAX SAN. Our enterprise-class clouds are an example of how hardware-based encryption can provide data security and meet compliance requirements while having no impact on cloud performance.

Using built-in, hardware-based data encryption, the data is encrypted when written to drives and decrypted when read from drives. This type of back-end encryption ensures there is no risk of stored data exposure when drives are removed or arrays are replaced.28

For key management, EMC uses its RSA Embedded Key Manager, which provides self-managed, separate and unique DEKs (data encryption keys) for each drive in the array. The RSA key manager follows the key generation, distribution and management capabilities defined by the industry standards NIST 800-57 and ISO 11770. Audit logs keep track of key management activities, including key creation, key deletion and key recovery.

The keys are kept in a ‘key lockbox,’ a repository encrypted with 256-bit AES. For more about EMC's secure key management and data encryption, including detailed examples of how data stays encrypted during installation, drive replacement and system decommissioning, read their white paper, EMC Symmetrix Data at Rest Encryption: Detailed Review (PDF).

When paired with the client's use of VPNs (Virtual Private Networks) and SSL certificates when connecting remotely from a mobile device, data is transferred encrypted and then stored, encrypted, in the VMAX SAN. This creates a completely encrypted environment in which data can be shared and stored safely within compliance requirements.

Ensure the reliability of data in transit in the cloud with high availability (HA), dedicated firewalls, web application firewalls and servers. High availability solutions in the data center infrastructure allow organizations to increase their uptime and availability. When mission-critical data is at stake, an HA architecture can greatly reduce the risk of downtime due to a single point of failure. With HA protection in place, businesses can hedge against the loss of electrical power and network connectivity disruptions, with the peace of mind of knowing their data is protected, available and safe.

Data center infrastructure components should be designed to ensure no single points of failure exist for a successful cloud implementation. Those components include:

  • Electrical power connections
  • UPS (Uninterruptible Power Supply) systems
  • Generators
  • Air conditioning
  • Network connections, switches and firewalls
  • Server and storage devices

Read more about Online Tech’s Encrypted Cloud (PDF).

Online Tech’s Defense-in-Depth Stack
The following diagram shows each component of a solid defense-in-depth, encrypted hosting solution, including:

  • Industry-specific audit reports
  • Administrative safeguards, including change management and documentation; and audited staff and policies
  • Encryption of data in transit with VPN, SSL and high availability, redundant critical hardware
  • Encryption of data at rest with high availability EMC VMAX SANs and offsite backup

[Diagram: Online Tech's Defense-in-Depth Stack]

Additional Technical Security Tools
As mentioned before, encryption is but one tool that can be used in conjunction with other technical security tools to ensure a layered, defense-in-depth IT security solution. Find out more about each tool and how they work to protect your systems, networks, servers and data:

Daily Log Review and Log Monitoring

Some providers may only offer logging (tracking user activity, transporting and storing log events); seek a provider that offers the complete logging experience, with daily log review, analysis and monthly reporting.

File Integrity Monitoring (FIM)

Monitoring your files and systems provides valuable insight into your technical environment and provides an additional layer of data security. File integrity monitoring (FIM) is a service that can monitor any changes made to your files.

Web Application Firewall (WAF)

Protect your web servers and databases from malicious online attacks by investing in a web application firewall (WAF). A network firewall's open port allows Internet traffic to reach your websites, but it can also expose servers to application attacks (for example, database commands to delete or extract data sent through a web application to the backend database) and other malicious activity.

Two-Factor Authentication

Two-factor authentication for VPN (Virtual Private Network) access is an optimal security measure to protect against online fraud and unauthorized access for clients that connect to their networks from a remote location.

Vulnerability Scanning

Vulnerability scanning checks your firewalls and networks for open ports. It is a web application that can detect outdated versions of software, web applications that aren’t securely coded, and misconfigured networks. If you need to meet PCI compliance, you must run vulnerability scans and produce a report quarterly.

Patch Management

Why is patch management so important? If your servers aren’t updated and managed properly, your data and applications are left vulnerable to hackers, identity thieves and other malicious attacks against your systems.

Antivirus

Antivirus software can detect and remove malware in order to protect your data from malicious attacks. Significantly reduce your risks of data theft or unauthorized access by investing in a simple and effective solution for optimal server protection.

SSL Certificate

In order to safely transmit information online, an SSL (Secure Sockets Layer) certificate provides encryption of sensitive data, including financial and healthcare information. An SSL certificate also verifies the identity of a website, allowing web browsers to display it as a secure site.

7.0. Conclusion

Encryption is a key technical tool within a comprehensive security strategy designed to protect sensitive data that may be regulated by compliance standards such as HIPAA or PCI DSS. Determining the best method to use for your organization is a start to implementing encryption within the broader security framework of your infrastructure.

For data at rest, including stored and archived data and data found on electronic devices, hardware-based encryption at the disk level can provide seamless data protection without affecting the performance of the cloud. For data in transit, including data traveling across the Internet, wireless networks, within an application, or across wired or wireless connections, a combination of SSL certificates, VPNs (Virtual Private Networks) and two-factor authentication can ensure data is encrypted at all times along its path.

When outsourcing to the cloud, look for built-in encryption options, audit reports, additional technical security services to accompany encryption, disaster recovery and offsite backup options, and robust encryption key management. Remember, your encryption method is only as strong as your key management and your ability to keep keys secure.

8.0. References

8.1. Encryption Glossary

AES (Advanced Encryption Standard)
The Advanced Encryption Standard specifies a federally approved algorithm used to protect electronic data, considered strong enough to satisfy Federal Information Processing Standards (FIPS). The algorithm encrypts and decrypts information and is capable of using cryptographic keys of 128, 192 and 256 bits.

Asymmetric Key Cryptography (Public-Key Cryptography)
Encryption in which one key (the public key) is used to encrypt messages, while a private key is used to decrypt them. The private key must be kept secret, while the public key carries no risk even if it becomes known to others (the public key is meant to be shared).

Cryptography
The practice of protecting data with computationally secure algorithms designed for that purpose; storage encryption technologies use cryptographic keys to encrypt and decrypt data.

Cryptographic Hash
This is the algorithm that transforms a block of data of any size into a fixed-length string, the cryptographic hash. No two blocks of different data should produce the same hash. This is an example of one-way encryption: an attacker who obtains a password hash cannot run it back through an algorithm to recover the password.

Ciphertext
After plaintext has been passed through a cryptographic encryption algorithm, ciphertext is the result. Ciphertext is irreversible without the encryption key.

Data Encryption Key (DEK)
Used for the encryption of plaintext and for the computation of message integrity checks (signatures).

Encryption
Rendering plaintext into ciphertext, meaning to render original data unreadable or undecipherable.

File Encryption
Individual files are encrypted on a storage medium and are accessible only after proper authentication.

Full Disk Encryption (FDE)
All of the data on the hard drive used to boot a computer, including the computer’s operating system, is encrypted.

Key Size
The key size, or key length, is the size in bits of the key used in a cryptographic algorithm. AES has a fixed block size of 128 bits and a key size of 128, 192 or 256 bits.

Symmetric Key Cryptography
Encryption in which the sender and receiver of a message share a single key used both to encrypt and to decrypt the message. Symmetric encryption relies on the secrecy of the key.

Volume Encryption
An entire volume is encrypted and access to the data on the volume is allowed only after authentication.

8.2. Data Center Audits Cheat Sheet

SAS 70
The Statement on Auditing Standards No. 70 was the original audit used to measure a data center’s financial reporting and recordkeeping controls. Developed by the AICPA (American Institute of CPAs), there are two types:

  • Type 1 – Reports on a company's description of their operational controls
  • Type 2 – Reports on an auditor's opinion on how effective these controls are over a specified period of time (six months)

SSAE 16
The Statement on Standards for Attestation Engagements No. 16 replaced SAS 70 in June 2011. An SSAE 16 audit measures the controls relevant to financial reporting.

  • Type 1 – A data center’s description and assertion of controls, as reported by the company.
  • Type 2 – Auditors test the accuracy of the controls and the implementation and effectiveness of controls over a specified period of time.

SOC 1
The first of three new Service Organization Controls reports developed by the AICPA, this report measures the controls of a data center relevant to financial reporting. It is essentially the same as an SSAE 16 audit.

SOC 2
This report and audit is completely different from the previous ones. SOC 2 measures controls specifically relevant to IT and data center service providers. The five principles are security, availability, processing integrity (ensuring system accuracy, completeness and authorization), confidentiality and privacy. There are two types:

  • Type 1 – A data center’s system and suitability of its design of controls, as reported by the company.
  • Type 2 – Includes everything in Type 1, with the addition of verification of an auditor's opinion on the operating effectiveness of the controls.

SOC 3
This report includes the auditor’s opinion of SOC 2 components with an additional seal of approval to be used on websites and other documents. The report is less detailed and technical than a SOC 2 report.

HIPAA
Enforced by the U.S. Department of Health and Human Services, the Health Insurance Portability and Accountability Act of 1996 establishes requirements to secure protected health information (PHI), or patient health data (medical records). When it comes to data centers, a hosting provider needs to meet HIPAA compliance requirements in order to ensure sensitive patient information is protected.

A HIPAA audit using the testing guidelines provided by the OCR HIPAA Audit Protocol can provide a documented report to prove a data center operator has the proper policies and procedures in place to provide HIPAA hosting solutions.

PCI DSS
The Payment Card Industry Data Security Standard was created by the major credit card issuers and applies to companies that accept, store, process and transmit credit cardholder data. Data center operators should prove they have a PCI compliant environment with an independent audit. They should also know which services can help your company fulfill the 12 PCI requirements.

Contact Us

Contact our encrypted cloud and hosting experts at Online Tech for more information if you still have questions about secure and compliant hosting at our data centers.

Visit: www.onlinetech.com
Call: 734.213.2020


[1] Online Tech & The Ponemon Institute; Healthcare Industry Loses $7 Billion Due to HIPAA Data Breaches

[2] Dept. of Health & Human Services; Breach Notification for Unsecured Protected Health Information (PDF)

[3] The Ponemon Institute; Third Annual Benchmark Study on Patient Privacy & Data Security (PDF)

[4] Health Data Management; OCR Fines Alaska Medicaid $1.7 Million for HIPAA Violations

[5] Dept. of Health & Human Services; Massachusetts Provider Settles HIPAA Case for $1.5 Million

[6] Chicago Tribune Business; Regulators to Investigate Advocate Health Data Breach

[7] Modern Health Care; Breach Spurs Lawsuit Seeking $4.9 Billion

[8] U.S. Dept. of Health & Human Services; Breach Notification Rule

[9] National Institute of Standards and Technology (NIST); Guide to Storage Encryption Technologies for End User Devices (PDF)

[10] Chris Heuman, CISSP, CSCS, CHP; Encryption: Perspective on Privacy, Security & Compliance webinar/slides (PDF)

[11] The SANS Institute; Regulations and Standards: Where Encryption Applies (PDF)

[12] U.S. Dept. of Commerce; FIPS Publication: Security Requirements for Cryptographic Modules (PDF)

[13] PCI Security Standards Council; PCI Data Storage Do’s and Don’ts (PDF)

[14] PCI Security Standards Council; Glossary of Terms, Abbreviations and Acronyms (PDF)

[15] National Institute of Standards and Technology (NIST); SP 800-57 Part 1-3

[16] PCI Security Standards Council; PCI Point-to-Point Encryption: Solution Requirements and Testing Procedures v1.1.1 Encryption, Decryption and Key Management within SCDs (Hardware/Hardware) (PDF)

[17] The SANS Institute; Regulations and Standards: Where Encryption Applies (PDF)

[18] Online Tech; No Encryption or BAAs: Keep PHI off Unsecure Clouds

[19] National Institute of Standards and Technology; Security Requirements for Cryptographic Modules (PDF)

[20] Symantec; Secure Sockets Layer (SSL): How It Works

[21] National Institute of Standards and Technology (NIST); Guide to Storage Encryption Technologies for End User Devices (PDF)

[22] Protegrity Corp.; Database Encryption – How to Balance Security with Performance (PDF)

[23] Microsoft TechNet Library; BitLocker Drive Encryption Overview

[24] NIST; Special Publication 800-57, Recommendation for Key Management: Part 1: General (Revision 3) (PDF)

[25] Chris Heuman CHP, CHSS, CSCS, CISSP, Practice Leader, RISC Management & Consulting; Encryption – Perspective on Privacy, Security & Compliance (Webinar)

[26] Ponemon Institute; Encryption in the Cloud

[27] Center for Democracy and Technology (CDT); FAQ: HIPAA and “Cloud Computing” (v1.0) (PDF)

[28] EMC Corporation; EMC Symmetrix Data at Rest Encryption: Detailed Review (PDF)


Disaster Recovery White Paper

Download the Disaster Recovery White Paper (PDF)

View the full white paper below.

1.0. Executive Summary
2.0. Business Continuity and Disaster Recovery
     2.1. Business Drivers
     2.2. Compliance Concerns
            2.2.1. PCI DSS
            2.2.2. HIPAA
3.0. Business Continuity
4.0. Disaster Recovery
     4.1. Recovery Point and Time Objectives
     4.2. Designing for Recovery
5.0. Technical Implementation Considerations
     5.1. Virtualization/Cloud Computing Disaster Recovery
            5.1.1. Traditional Disaster Recovery 
            5.1.2. Active-Passive 
            5.1.3. Active-Active
            5.1.4. Cloud Case Study
     5.2. Location for Disaster Recovery
            5.2.1. Micro-Sufficiency vs. Macro-Efficiency
            5.2.2. Geography Matters  
            5.2.3. Selection of Second Sites
      5.3. Hardware Protection vs. Data Center Protection
            5.3.1. Offsite Backup Options
      5.4. SAN-to-SAN Replication
      5.5. Best Practices
            5.5.1. Encryption
            5.5.2. Network Replication
            5.5.3. Testing
            5.5.4. Communication Plan Testing
6.0. Conclusion
7.0. References
     7.1. Questions to Ask Your Disaster Recovery Provider
8.0. Contact Us

 

1.0. Executive Summary

Investing in risk management means investing in business sustainability. Designing a comprehensive business continuity and disaster recovery plan is about analyzing the impact of a business interruption on revenue.

Mapping out your business model, identifying the key components essential to operations, and developing and testing a strategy to efficiently recover and restore data and systems is an involved, long-term project that may take 12 to 18 months, depending on the complexity of your organization.

Addressing the high-level business drivers for designing, implementing and testing a business continuity and disaster recovery plan, this white paper makes a case for the investment while discussing the inherent challenges, benefits and drawbacks of different solutions from the perspective of experienced IT and data security professionals.

Speaking directly to different compliance requirements, this paper addresses how to protect sensitive backup data within the parameters of standards set for the healthcare and e-commerce industries.

From there, this paper delves into different disaster recovery and offsite backup technical solutions, from traditional to virtualization (cloud-based disaster recovery), as well as considerations in seeking a disaster recovery as a service solution (DRaaS) provider. A case study of the switch from physical servers and traditional disaster recovery to a private cloud environment details the differences in cost, uptime, performance and more.

This white paper is ideal for executives and IT decision-makers seeking a primer as well as up-to-date information regarding disaster recovery best practices and specific technology recommendations.

2.0. Business Continuity and Disaster Recovery

Business continuity is the process of analyzing the mission-critical components required to keep your business running in the event of a disaster. It is an overarching plan involving a few key steps (see section 3.0 Business Continuity for a detailed description of what each step entails):

  • Business Impact Analysis (BIA)
  • Recovery Strategies
  • Plan Development
  • Testing and Exercises

Creating an IT disaster recovery plan is part of the Plan Development step. As can be seen from the multiple steps within business continuity planning, disaster recovery is only a subset within a larger overarching plan to keep a business running. Disaster recovery requires creating a plan to recover and restore IT infrastructure, including servers, networks, devices, data and connectivity.

2.1. Business Drivers

Why allocate budget toward a business continuity and IT disaster recovery plan? According to a Forrester/Disaster Recovery Journal Business Continuity Preparedness Survey, the top reason is an increased reliance on technology.[1]

Increased Reliance on Technology
Increased reliance on technology can be seen across industries, from retail, which must support digital transactions and mobile payments, to healthcare, which depends on electronic patient data entry, information exchange and processing as it shifts from paper records to electronic health record systems (EHRs).[2] Ensuring network and power connectivity is essential to support the availability of the websites, data and applications critical to business operations and profitability, and this is where investing in an IT disaster recovery plan delivers the greatest benefit.

Increased Business Complexity
Another business driver is increasing organizational complexity, most relevant for larger businesses that juggle many vendors, processes and components, all of which are necessary to keep operations running.

With so many factors and individuals in play, a business continuity and IT disaster recovery plan tackles the challenge of coordinating efforts and navigating a complex communication and workflow model in the event of a disaster. The plan must identify and support the complex interdependencies typically found in a larger organization that all work to keep the business running.

Increasing Frequency and Intensity of Natural Disasters
The increasing frequency and intensity of natural disasters is also motivation for establishing a plan. Hurricane Sandy, for example, was a largely unanticipated and devastating natural disaster that caused delays, power outages and downed businesses and websites. Ideally, your disaster recovery data center should be located in a region with a low risk of natural disasters.

However, Gigaom.com reports that the greatest number of data centers are located in states that also experienced the greatest number of FEMA (Federal Emergency Management Agency) disaster declarations, suggesting a change in disaster recovery strategy is in order. Which states were hit the hardest while hosting the highest concentration of existing data centers? The top three are Texas, with 332 disasters and 120 data centers; California, with 211 disasters and more than 160 data centers; and New York, with 91 disasters and more than 120 data centers.[3]

Disaster Declarations and Data Centers

Source: Gigaom, FEMA, Data Center Map

For more on geography, data centers and disaster recovery, see section 5.2.2. Geography Matters.

Increased Reliance on Third-Parties
Another business driver is the increased business reliance on third-parties (i.e., outsourcing, suppliers, etc.). As one factor in the business complexity of an organization, vendors can also introduce potential new or increased risks, depending on their internal security policies and practices, as well as general security awareness. Read more about Administrative Security to find out what to look for in a security-conscious third-party vendor, from audits, reports and policies to staff training.

Increased Regulatory Requirements
Increased regulatory requirements have also shifted attention to the need for disaster recovery. For the e-commerce, retail and franchise industries, the Payment Card Industry Data Security Standards (PCI DSS) require the offsite backup and verification of the physical security of the facility in which cardholder data is found. Another requirement explicitly mandates the establishment and testing of an incident response plan in the event of a system breach.[4] (See section 2.2.1 PCI DSS for more).

The healthcare industry is regulated by the Health Insurance Portability and Accountability Act (HIPAA) and more specifically the Health Information Technology for Economic and Clinical Health (HITECH) Act that addresses privacy and security concerns related to the electronic transmission of health information. Within the Administrative Safeguards of the HIPAA Security Rule standards, a contingency plan is required, comprised of: a data backup plan, disaster recovery plan, emergency mode operation plan, testing and revision procedures and applications and data criticality analysis.[5]

Accordingly, failure to meet regulatory requirements can result in federal fines, legal fees, loss of business credibility and other significant consequences, motivating businesses of all sizes to implement a compliant disaster recovery and backup plan.

Increased Threat of Cyber Attacks
The last risk factor making a case for disaster recovery is the increased threat of cyber attacks. From attacks on federal agencies to corporate franchises to mobile malware, hackers are frequently developing new methods to gain unauthorized access to systems – or to take down entire systems.

A denial-of-service attack (DoS attack) is one method of sending an abnormally high volume of requests/traffic in an attempt to overload servers and bring down networks. While many other technical security tools can be used to prevent, detect and mitigate potential cyber attacks, a comprehensive disaster recovery plan is essential in order to properly recover and restore critical data and applications after an attack.

2.2. Compliance Concerns

As mentioned in the previous section, failure to meet industry compliance/regulatory requirements can result in federal fines, legal fees, loss of business credibility, and other significant consequences – with disaster recovery and backup as an integral part of the requirements, it’s important to review what’s at stake and why for each industry.

2.2.1. PCI DSS

For companies that deal with credit cardholder data, including e-commerce, retail, franchise, etc., the Payment Card Industry Data Security Standards (PCI DSS) are the official security guidelines set by the major credit card brands.

Of the 12 PCI DSS requirements and sub-requirements, 12.9.1 dictates:[6]

Create the incident response plan to be implemented in the event of system breach. Ensure the plan addresses the following, at a minimum:

    • Roles, responsibilities, and communication and contact strategies in the event of a compromise including notification of the payment brands, at a minimum
    • Specific incident response procedures
    • Business recovery and continuity procedures
    • Data back-up processes
    • Analysis of legal requirements for reporting compromises
    • Coverage and responses of all critical system components
    • Reference or inclusion of incident response procedures from the payment brands

In addition, PCI DSS requirement 9.5 addresses the secure storage of backup media:[7]

Store media back-ups in a secure location, preferably an off-site facility, such as an alternate or back-up site, or a commercial storage facility. Review the location’s security at least annually.

The auditor testing procedures call for observation of the storage location’s physical security. A PCI compliant data center should have proper physical security including limited access authorization, dual-identification control access to the facility and servers, and complete environmental control with monitoring, logged surveillance, alarm systems and an alert system.

Ideally, if outsourcing your disaster recovery solution, partner only with a disaster recovery provider that allows physical tours and walkthroughs of their facilities. What else should you look for in a PCI disaster recovery provider?

  • Policies and procedures, process documents, training records, incident response/data breach plans, etc.
  • Proof that all PCI requirements are in place and sufficiently compliant within the scope of their contracts

Read more about the required network and technical security, and high availability infrastructure in PCI Compliant Data Centers. For a complete guide to outsourcing data hosting and disaster recovery solutions, read our PCI Compliant Hosting white paper.

2.2.2. HIPAA

For companies that deal with protected health information (PHI), including healthcare providers, hospitals, physicians, hospital systems, etc., the Health Insurance Portability and Accountability Act (HIPAA) is the official legislation, with rules set forth by the U.S. Dept. of Health and Human Services (HHS).

This set of security standards works to protect the availability, confidentiality and integrity of PHI. The availability aspect becomes all the more dependent on the reliability of your IT infrastructure as hospitals and healthcare practices increase their reliance on electronic health record systems (EHRs). Healthcare applications and Software as a Service (SaaS) companies need offsite backup for their data in the event that a production data center experiences a disaster.

The Contingency Plan standard (§ 164.308(a)(7)) of the Administrative Safeguards of the HIPAA Security Rule requires covered entities to:

Establish (and implement as needed) policies and procedures for responding to an emergency or other occurrence (for example, fire, vandalism, system failure, and natural disaster) that damages systems that contain electronic protected health information.[8]

The specifications of the standard include a data backup plan, disaster recovery plan, emergency mode operation plan, testing and revision procedures, and applications and data criticality analysis.

Read Components of a HIPAA Compliant IT Contingency Plan for a detailed overview and a customizable IT Contingency Plan template provided by the Dept. of Health and Human Services.

Read more about the required physical, network and technical security, and high availability infrastructure in HIPAA Compliant Data Centers. For a complete guide to outsourcing data hosting and HIPAA disaster recovery solutions, read our HIPAA Compliant Hosting white paper.

3.0. Business Continuity

A business continuity plan involves a few key steps:[9]

Business Impact Analysis (BIA)
This involves determining the operational and financial impact of a potential disaster or disruption, including loss of sales, credibility, compliance fines, legal fees, PR management, etc.

It also includes measuring the amount of financial and operational damage depending on the time of year. A risk assessment should be conducted as part of the BIA to determine which assets are actually at risk (people, property, critical infrastructure, IT systems, etc.), as well as the probability and significance of possible hazards (natural disasters, fires, mechanical problems, supply failure, cyber attacks, etc.).[10]

Mapping out your business model and determining where the interdependencies lie between the different departments and vendors within your company is also part of the BIA. The larger the organization, the more challenging it will be to develop a successful business continuity and disaster recovery plan. Sometimes organizational restructuring and business process or workflow realignment is necessary not only to create a business continuity/disaster recovery plan, but also to maximize and drive operational efficiency.[11]

Ready.gov/business has a BIA worksheet available[12] (PDF) (seen below) to help you document and calculate the operational and financial impact of a potential disaster by matching the timing and duration of an interruption with the loss of sales and income, broken down by department, service and process.

Business Impact Analysis (BIA) worksheet (Source: Ready.gov)

Recovery Strategies
Analyzing your company's most valuable data (the data that directly generates revenue) is key when determining what you need to back up and restore as part of your information technology (IT) disaster recovery plan.

Create an inventory of documents, databases and systems that are used on a day-to-day basis to generate revenue, and then quantify and match income with those processes as part of your recovery strategy/business impact analysis.[13]

Aside from IT, a recovery strategy also involves personnel, equipment, facilities, a communication strategy and more in order to effectively recover and restore business operations.

Plan Development
Using information derived from the business impact analysis in conjunction with the recovery strategies, establish a plan framework. Documenting an IT disaster recovery plan is part of this stage.

As can be seen from the multiple steps within business continuity planning, disaster recovery is a subset within a larger overarching plan to keep a business running. It involves restoring and recovering IT infrastructure, including servers, networks, devices, data and connectivity (see section 4.0. Disaster Recovery for more).

A data backup plan involves choosing the right hardware and software backup procedures for your company, scheduling and implementing backups as well as checking/testing for accuracy (see section 5.3.1. Offsite Backup Options for more).

Testing & Exercises
Develop a testing process to measure the efficiency and effectiveness of your plans, as well as how often to conduct tests. Part of this step involves establishing a training program and conducting training for your company/business continuity team.

Testing allows you to clearly define roles and responsibilities and improve communication within the team, as well as identify any weaknesses in the plans that require attention. This allows you to allocate resources as needed to fill the gaps and build up a stronger, more resilient plan. Read section 5.5.3 Testing for more information.

4.0. Disaster Recovery

As an integral part of business continuity plan development, creating an IT disaster recovery plan is essential to keep businesses running as they increasingly rely on IT infrastructure (networks, servers, systems, databases, devices, connectivity, power, etc.) to collect, process and store mission-critical data.

A disaster recovery plan is designed to restore IT operations at an alternate site after a major system disruption with long-term effects. After systems have been transferred to the alternate site, the goal is to restore, recover and test the affected systems and put them back into operation.

Your IT infrastructure is, in most cases, the lifeblood of your organization. When websites are down or patient data is unavailable due to hacking, natural disasters, hardware failure or human error, businesses cannot survive.

According to FEMA, a recovery strategy should be developed for each component:

  • Physical environment in which data/servers are stored – data centers equipped with climate control, fire suppression systems, alarm systems, authorization and access security, etc.
  • Hardware – Networks, servers, devices and peripherals.
  • Connectivity – Fiber, cable, wireless, etc.
  • Software applications – Email, data exchange, project management, electronic healthcare record systems, etc.
  • Data and restoration

Identify the critical software applications and data, as well as the hardware required to run them. Additionally, determining your company’s custom recovery point and time objectives can prepare you for recovery success by creating guidelines around when data must be recovered.

4.1. Recovery Point and Time Objectives

Recovery Point Objective (RPO)
A recovery point objective (RPO) specifies the point in time to which data must be recovered and backed up in order for business operations to resume; in other words, it defines how much data loss, measured in time, is acceptable. The RPO determines how frequently backups need to occur, whether every hour or every five minutes.[14]

Recovery Time Objective (RTO)
The recovery time objective (RTO) refers to the maximum length of time a system (or computer, network or application) can be down after a failure or disaster before the company is negatively impacted by the downtime. Determining the amount of lost revenue per amount of lost time can help determine which applications and systems are critical to business sustainability.

For example, if your email server was down for only an hour, yet a large portion of your database was wiped out and you lost 12 hours’ worth of email, how would that impact your business?
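To make these objectives concrete, here is a minimal Python sketch that checks a backup schedule against an RPO and estimates the revenue impact of an outage at the RTO limit. Every figure in it is a hypothetical placeholder for illustration, not a recommendation from this paper.

# Hypothetical RPO/RTO sanity check; every figure below is an illustrative assumption.
RPO_MINUTES = 60               # tolerate at most one hour of data loss
RTO_MINUTES = 240              # systems must be restored within four hours
BACKUP_INTERVAL_MINUTES = 15   # how often backups actually run
REVENUE_PER_HOUR = 2500.0      # estimated revenue lost per hour of downtime

def meets_rpo(backup_interval_minutes: int, rpo_minutes: int) -> bool:
    """Backups must run at least as often as the RPO allows data to be lost."""
    return backup_interval_minutes <= rpo_minutes

def downtime_cost(outage_minutes: int, revenue_per_hour: float) -> float:
    """Rough revenue impact of an outage, used to justify the RTO target."""
    return (outage_minutes / 60.0) * revenue_per_hour

if __name__ == "__main__":
    print("RPO met:", meets_rpo(BACKUP_INTERVAL_MINUTES, RPO_MINUTES))
    print("Cost of an outage at the RTO limit: $%.2f"
          % downtime_cost(RTO_MINUTES, REVENUE_PER_HOUR))

Running the same check against each critical system gives a quick, if rough, way to compare the cost of tighter RPO/RTO targets against the cost of the infrastructure needed to meet them.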

4.2. Designing for Recovery

High Availability Infrastructure
Strategic data center design involving high availability and redundancy can help support larger companies that rely on mission-critical (high-impact) applications. High availability is a design approach that takes into account the sum of all the parts including the application, all the hardware it is running on, power infrastructure, and the networking behind the hardware.[15]

Using high availability architecture can reduce the risks of lost revenue and customers in the event of Internet connectivity or power loss. With high availability, you can perform maintenance without downtime, and the failure of a single firewall, switch or PDU will not affect your availability. With this type of IT design, you can achieve 99.999% availability, meaning less than 5.26 minutes of downtime per year.
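As a quick check on that figure, the short sketch below converts an availability percentage into allowable downtime per year (it assumes a 365.25-day year; the exact result shifts slightly with the year length used).

def downtime_minutes_per_year(availability: float, days_per_year: float = 365.25) -> float:
    """Convert an availability fraction (e.g. 0.99999) into minutes of downtime per year."""
    minutes_per_year = days_per_year * 24 * 60
    return (1.0 - availability) * minutes_per_year

if __name__ == "__main__":
    for availability in (0.999, 0.9999, 0.99999):
        print("%.3f%% availability -> %.2f minutes of downtime per year"
              % (availability * 100, downtime_minutes_per_year(availability)))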

High availability power means the primary power circuit should be provided by the primary UPS (Uninterruptible Power Supply) and backed up by the primary generator. A secondary circuit should be provided by the secondary UPS, which is backed up by the secondary generator. This redundant design ensures that a single UPS or generator failure will not interrupt power in your environment.

For a high availability data center, you should seek not only primary and secondary power feeds, but also primary and secondary Internet uplinks if you are purchasing Internet connectivity from the provider. Additionally, ensure any hardware you use, including firewalls and switches, is deployed redundantly.

If using managed services and purchasing a server from a data center, ensure all of the hardware is configured for high availability, including dual power supplies and dual NIC (network interface controller) cards. Ensure the server is also wired back to different switches, and that the switches are dual-homed to different access-layer routers, so there is no single point of failure anywhere in the environment.

Offsite backup and disaster recovery are still important, as high availability cannot help you recover from a natural disaster such as a flood or hurricane. Disaster recovery comes into play after high availability has completely failed and you must recover to a different geographic location.

Redundant Infrastructure
Redundancy is another factor to consider when it comes to disaster recovery data center design. With a fully redundant data center design, automatic failover can ensure server uptime in the event that one provider experiences any connectivity issues.

This includes multiple Internet Service Providers (ISPs) and fully redundant Cisco networks with automatic failover. Pooled UPS (Uninterruptible Power Supply) units, batteries and generators can ensure a backup source of power in the event one power source fails. View an example of Online Tech's redundant network and data centers below:

Redundant Connectivity and Network

Cold Site Disaster Recovery
A cold site is little more than an appropriately configured space in a building. Everything required to restore service to your users must be retrieved and delivered to the site before the process of recovery can begin. As you can imagine, the delay going from a cold backup site to full operation can be substantial.

Warm Site Disaster Recovery
A warm site is leased space from a data center provider or disaster recovery provider that already has the power, cooling and network installed. It is also already stocked with hardware similar to that found in your data center, or primary site. To restore service, the latest backups from an offsite storage facility are required.

Hot Site Disaster Recovery
A hot site is the most expensive yet fastest way to get your servers back online in the event of an interruption. Hardware and operating systems are kept in sync and in place at a data center provider's facility in order to quickly restore operations. Real time synchronization between the two sites may be used to completely mirror the data environment of the original site using wide area network links and specialized software. Following a disruption to the original site, the hot site exists so that the organization can relocate with minimal losses to normal operations. Ideally, a hot site will be up and running within a matter of hours or even less.

When you partner with a data center/disaster recovery provider, you're sharing the cost of the infrastructure, so it's not as expensive as building and maintaining an entirely separate secondary data center of your own.

5.0. Technical Implementation Considerations

5.1. Virtualization/Cloud Computing Disaster Recovery

With virtualization, the entire server, including the operating system, applications, patches and data are encapsulated into a single software bundle or server – this virtual server can be copied or backed up to an offsite data center, and spun up on a virtual host in minutes in the event of a disaster.

Since the virtual server is hardware independent, the operating system, applications, patches and data can be safely and accurately transferred from one data center to a second site without reloading each component of the server.

This can reduce recovery times compared to traditional disaster recovery approaches where servers need to be loaded with the OS and application software, as well as patched to the last configuration used in production before the data can be restored.

Virtual machines (VMs) can be mirrored, or kept running in sync, at a remote site to enable failover in the event that the original site fails, ensuring complete data accuracy when recovering and restoring after an interruption.

Another aspect of cloud-based disaster recovery that improves recovery times drastically is full network replication. Replicating the entire network and security configuration between the production and disaster recovery site as configuration changes are made saves you the time and trouble of configuring VLAN, firewall rules and VPNs before the disaster recovery site can go live.

In order to achieve full replication, your cloud-based disaster recovery provider should manage both the production cloud servers and disaster recovery cloud servers at both sites.

For warm site disaster recovery, backups of critical servers can be spun up on a shared or private cloud host platform.

With SAN-to-SAN replication, hot site disaster recovery becomes more affordable. SAN replication allows not only rapid failover to the secondary site, but also the ability to return to the production site when the disaster is over.

For a case study of a real physical-to-cloud switch scenario from a business enterprise perspective, read section 5.1.4. Cloud Case Study for a detailed comparison of managing physical servers vs. a private cloud environment, including differences in costs, energy use, uptime, performance and development.

5.1.1. Traditional Disaster Recovery

With traditional disaster recovery outsourced to a vendor with a shared infrastructure, after a disaster is declared, the hardware, software and operating system must be configured to match the original affected site.

Data is stored on offsite tape backups; after a disaster, the data must be retrieved and restored at the remote site that has been configured to match the original. This can take hours or even a few days to recover and restore completely. If not outsourcing, the traditional disaster recovery method of using a cold site can be very time-consuming and very costly.

If you have a disaster recovery infrastructure with preconfigured hardware and software ready at a secondary site (a warm site), this can cut down on the time it takes to recover. However, even with a secondary site, your organization is still dependent on retrieving physical backup tapes for complete restoration. There is no data synchronization and no failback option available with traditional disaster recovery.

The missing step in many traditional disaster recovery plans is how to return to the production site once it has been re-established. Traditional plans are also rarely tested through a full failover scenario because doing so is time-consuming.

5.1.2. Active-Passive

In an active-passive disaster recovery setup, the primary site is designed so that the network fails over to an alternative, or secondary, site with delayed resiliency. Applications and configurations are replicated with a delay of anywhere from five minutes to 24 hours. The secondary site typically runs reduced-capacity hardware, and failback requires a maintenance window.

5.1.3. Active-Active

In an active-active disaster recovery setup, there is synchronous data replication between the primary and secondary sites, with no delayed resiliency. The database spans the two data centers, and the application layer writes to both sites. Equivalent-capacity hardware at the secondary data center ensures full capacity redundancy.

5.1.4. Cloud Case Study

Online Tech is one example of a company that made the switch from traditional physical servers to a cloud environment, resulting in savings in hardware, disaster recovery and more. Back in 2011, we found our growth was becoming difficult to manage internally.

Mission Critical Hardware, Facilities and Employees
We had two data centers, hundreds of circuits, network devices, racks, cages and private suites to manage and maintain. We also had thousands of servers and support tickets due to a rapidly growing client-base, as well as certification and auditing processes to keep up annually (SSAE 16, SOC 2, HIPAA, PCI DSS, SOX) in order to maintain compliance and data security for our clients.

With employees at five different locations and in two different countries, we needed a scalable and efficient solution to support our mission critical business components.

Mission Critical Systems
Within our administrative department, Exchange, SharePoint, a file server and a domain controller support everyday processes. Our marketing department uses production and development websites to test and implement updates, as well as a load-balanced website to optimize resources.

For OTPortal, our client and intranet portal, we use Microsoft .NET applications and a MS SQL database. For OTMobile (which provides mobile access for our engineers), we use a PHP application. Within our operations department, we use a custom CentOS program to manage the data and create a MySQL database for our bandwidth management and billing processes.

Operations has thousands of patches to apply each month, as well as firewall, IDS management consoles, antivirus management, server and cloud backup managers, SAN and NAS management, and uptime/performance monitoring to maintain. We also have a sandbox for testing in our lab.

From Physical Servers to a Private Cloud
We consolidated from 23 physical servers (18 Windows, 5 CentOS servers, 4 database servers; each at about 10 percent utilization) to a private cloud. The private cloud consisted of 2 redundant hardware servers (N+1) and an 8 terabyte SAN. Our high availability (HA) configuration includes automatic load-balancing across hosts and automatic failover to a single server.

The private cloud also includes continuous offsite backup, allowing for real-time data synchronization. We employ a disaster recovery warm site located in Ann Arbor, Michigan, which gives us a fully tested, four-hour recovery time.

Leveraging Our Cloud
When we switched over, we realized several benefits: faster client-support development, a lower total cost of ownership, improved uptime and performance, and significantly decreased energy usage and carbon footprint.

Pace of Development
With the switch to our private cloud, we’ve increased the pace of development. A project that would typically take two weeks can now be completed within an hour, as we can create new servers and test concepts using production data.

As a result, our development team can update the client portal, OTPortal, with new releases every two weeks; implementing new time-saving features much sooner than before.

Total Cost of Ownership (TCO)
We also reduced the total cost of managing our infrastructure. Our old infrastructure required managing 26 physical Dell servers with a variety of specifications, versions, BIOS, CPU and memory configurations, and the need for several different spares. In addition, we had to manage 26 backups and antivirus installations, and 26 machines to network and patch.

We also had four Cisco network switches, two racks in the data center, more than a hundred network cables and half a dozen power strips. It took hours to upgrade disks, and downtime, which was required to upgrade memory, also contributed to costs.

The cloud TCO consolidated everything into two servers, one SAN, two network switches and two power strips, going from two racks down to a quarter of a rack. Overall, we saved 50 percent on hardware and 90 percent on management costs.

Improved Uptime
Another benefit is improved uptime, always a major benefit when it comes to hosting critical data for our clients. With N+1 (redundant) hosts, every virtual server we create is protected from a failed hardware server. Achieving the same redundancy with physical servers would have required an additional 26 servers, adding to cost, management time and energy expenditure.

To guard against SAN failure, we have redundant controllers in our SAN, with RAID array drives and spare drives on hand. With our high availability power configuration, we were further protected against downtime.

Initially we considered using a separate server for the database, resulting in a hybrid cloud configuration, which would have required a cluster of database servers for the same protection. Instead, we upgraded our entire cloud for less than the cost of a single new database server, gaining protection against server failure at a significantly lower cost than a cluster.

Improved Performance
We also improved our ability to respond to performance issues. Previously with our physical server setup, it took a few days to get the right RAM/disk/CPU, and we had to schedule downtime with anywhere from two days to one week of notice.

The actual process included shutting down and removing the server from the rack, opening the server, installing the additional resources and then booting up the server. Then we would have to test the performance, turn the server back off in order to re-rack it, and then restart the server, resulting in about two hours of downtime.

When we switched to the cloud, the steps were reduced to: schedule downtime, click to add more RAM/disk/CPU, then reboot the server and test performance, for a total of five minutes of downtime. The entire cloud's hardware was upgraded one host at a time with nearly zero downtime.

Decreasing Energy, Carbon Footprint & Costs
We significantly reduced our energy use and our carbon footprint. When it came to power consumption, running 100 percent of the time at 300 watts per server with a PUE of 1.8 and an emission factor of 1.58 lbs. of CO2 per kWh, our 26 physical servers and network amounted to about 200,000 lbs. of CO2 per year, and twice as much annually with redundancy.

The cloud required two physical servers, a network and a SAN. With a 35-server capacity, we are burning 31,000 lbs. of CO2 per year, a savings of nearly 170,000 lbs. of CO2 annually.[16]
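The back-of-the-envelope arithmetic behind those figures can be reproduced with the sketch below. The server wattage, PUE and emission factor come from the text above; the cloud-side IT load is an assumption inferred to roughly match the reported 31,000 lbs., and the text rounds its totals.

HOURS_PER_YEAR = 8760
PUE = 1.8                 # power usage effectiveness, from the text above
LBS_CO2_PER_KWH = 1.58    # emission factor, from the text above

def annual_co2_lbs(it_load_watts: float) -> float:
    """Annual CO2 (lbs.) for a given IT load running 100 percent of the time."""
    kwh = it_load_watts * PUE * HOURS_PER_YEAR / 1000.0
    return kwh * LBS_CO2_PER_KWH

if __name__ == "__main__":
    physical = annual_co2_lbs(26 * 300)   # 26 servers at 300 watts each
    cloud = annual_co2_lbs(1250)          # assumed ~1.25 kW total for two hosts, SAN and network
    print(f"Physical servers: ~{physical:,.0f} lbs CO2/year")   # roughly 194,000
    print(f"Private cloud:    ~{cloud:,.0f} lbs CO2/year")      # roughly 31,000
    print(f"Annual savings:   ~{physical - cloud:,.0f} lbs CO2/year")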

Faster Disaster Recovery
With every server we create, we're reassured it is automatically protected, as we have a single backup process for each host and backups for every virtual server on all hosts. We have a 4-hour RTO from catastrophic failure, meaning we can fail over to a secondary data center site in less than 4 hours. With the cloud, we are able to test twice a year to ensure the process runs smoothly.

In a virtualized environment, the entire server, including the operating system (OS), applications, patches and data, is captured on a virtual server that can be backed up to another one of our data centers and spun up in a matter of minutes.[17] This makes both testing and full recovery and failover much faster and more efficient than if we still used physical servers.

5.2. Location for Disaster Recovery

Strategic distance between the primary and secondary sites for disaster recovery is important in order to avoid natural disasters, ensure data synchronicity, allow for business scalability and maximize operational efficiency.

5.2.1. Micro-Sufficiency vs. Macro-Efficiency

Micro-sufficiency is the concept in which your core functions, or the critical pieces of your infrastructure, are not only centrally located but also replicated regionally. In an example of business model scaling, core departments such as human resources, IT and legal may be located centrally at a headquarters.

However, each branch of the business in different regions has its own local core departments. The idea behind this model is that risk is mitigated by dispersing core functions close to each region and their local customers, and not solely in one location (headquarters). Each branch also has different strategies to better serve their unique customers as their needs vary from region to region.

Similarly, with disaster recovery planning, once you identify your critical business processes, you can distribute those core functions into your various operational units in the event of a disaster or interruption. Designing with this concept of redundancy and resiliency in your infrastructure can result in a graceful and safe failover with efficient recovery.

Partnering with a disaster recovery data center provider allows your organization to take advantage of the risk-mitigating benefits of the micro-sufficiency concept, while avoiding the costs of building and maintaining your own data center. Instead of installing your own redundant set of equipment in an alternate facility, colocation or disaster recovery with a partner allows you to pay for space in a fully staffed, redundant environment.

With macro-efficiency, the concept is economy of scale: the bigger your company, the more buying power you have and the larger the equipment you can buy. In an example of business model scaling, the core departments make blanket corporate decisions across their regional branches, regardless of differences in customers and needs. Without recognizing those differences and identifying the workflow of interdependencies between departments, the model suffers from a lack of organization and an inability to identify and recover critical functions.[18]

Micro-sufficiency is the ideal model for disaster recovery and business continuity planning, as it effectively mitigates risk and presents a better strategy for protecting data through redundant and resilient design.

5.2.2. Geography Matters

The geographic selection of low natural disaster zones is essential for lowering the risk of critical IT infrastructure destruction. A large enough distance between your primary and secondary sites ensures that your secondary site isn’t affected by a potential natural disaster. Read on for more about specific parameters of secondary disaster recovery sites.

5.2.3. Selection of Second Sites

If your organization or primary site is located in a disaster-prone zone, consider a secondary site in a landlocked and more temperate region. Compared to coastal regions, the Midwest has low national averages for significant natural disasters such as floods, tornadoes, hurricanes and fires that cause mass destruction and may be a threat to your business.

Ideal Location for Data Centers

If your organization or primary site is located on the coast or in an earthquake zone, your secondary site should be located at least 100 miles away.[19] Ideally, your secondary site should be located far enough away to mitigate the risk of it being affected by the same disaster affecting your primary site.

The design of your secondary site should also be strategic – never locate generators in a basement or other location that may be difficult to service, or prone to destruction.

Additionally, ensure your secondary data center is located close enough to your primary for optimal bandwidth and response time, as well as the ability to mirror data in real time. Facilities should also be easily reached by your IT team in the event of a disaster for faster service and recovery.

However, your disaster recovery data center should always be located on a separate utility power grid from your primary data center. In the event of a power outage at your primary site, separate power grids ensure that your secondary site will still be up and running.

5.3. Hardware Protection vs. Data Center Protection

5.3.1. Offsite Backup Options

Sending data offsite ensures a copy of your critical data is available in the event of a disaster at your primary site, and it is considered a best practice in disaster recovery planning. There are several offsite data backup media options available, including the traditional tape backup method that involves periodic copying of data to tape drives that can be done manually or with software.

However, physical tape backup has its drawbacks, including read or write errors, slow data retrieval times, and required maintenance windows. With critical business data from medical records to customer credit card data, your organization can’t afford to risk losing archives or the ability to completely recover after a disaster.


According to NIST, the different types of data backups include:[20]

  • Full backup – All files on the disk or within the folder are backed up. This can be time-consuming due to the sheer size of the files, and maintaining duplicates of files that rarely change, such as system files, can lead to excessive and costly storage requirements.
  • Incremental – Only files that were created or changed since the last backup (full or incremental) are captured. Backup times are shorter and more efficient, but restoring might require compiling backups from multiple days and media, depending on when files were changed.
  • Differential – All files that were created or modified since the last full backup are captured; if a file changes after the last full backup, it will be saved in each differential backup until the next full backup is completed. Backup times are shorter than a full backup, and restores require less media than with incremental backups (see the sketch following this list for how the three differ).
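As a rough illustration of how the three types differ, the sketch below selects files for each backup type by comparing file modification times against the last full backup and the last backup of any kind. The directory path and timestamps are hypothetical, and real backup software handles this selection for you.

import time
from pathlib import Path

def files_to_back_up(root, backup_type, last_full_ts, last_backup_ts):
    """Pick files by modification time: everything for a full backup, files changed
    since the last backup of any kind for an incremental, and files changed since
    the last full backup for a differential."""
    cutoff = {"full": 0.0,
              "incremental": last_backup_ts,
              "differential": last_full_ts}[backup_type]
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_mtime > cutoff]

if __name__ == "__main__":
    now = time.time()
    last_full = now - 7 * 24 * 3600    # hypothetical weekly full backup
    last_backup = now - 24 * 3600      # hypothetical nightly incremental
    for kind in ("full", "incremental", "differential"):
        selected = files_to_back_up(".", kind, last_full, last_backup)
        print("%s: %d file(s) would be backed up" % (kind, len(selected)))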

For more about specific offsite backup technology, read section 5.4 SAN-to-SAN Replication and SAN Snapshots.

Outsource vs. In-Source
Outsourcing your offsite backup to a managed services provider can give your organization continuous data protection and full file-level restoration, and offload the burden of installing, managing and monitoring the backup systems, as well as performing a complete restoration after a disaster.

With a vendor, your encrypted server files are sent to an onsite backup manager at the primary site and then to a secondary, offsite backup manager, with the two sites ideally far enough apart to reduce the chances of the secondary being affected by the same disaster or interruption.

While offsite backup managed in-house can be costly due to building out, maintaining and upgrading both primary and secondary sites, outsourcing your offsite backup to professionals means you can take advantage of their investments in capital, technology and expertise.

As NIST (the National Institute of Standards and Technology) states, backup media should be stored offsite or at an alternate site in a secure, environmentally controlled facility.[21] An offsite backup data center should have physical, network and environmental controls to maintain a high level of security and safety from possible backup damage.

Physical security at a data center means only authorized personnel have limited access to client servers, and the facility itself should require dual-identification control access (through the use of a secondary identification device, such as biometric authentication requiring a fingerprint scan). Environmental controls should include 24x7 monitoring, logged surveillance cameras and multiple alarm systems.

Any sensitive infrastructure should be protected by restricted access, and redundancy in routers, switches and paired universal threat management devices should provide network security for your offsite backup data.

Vendor Selection Criteria
When vetting offsite backup and disaster recovery vendors (also known as disaster recovery as a service, or DRaaS), check certain criteria to ensure your data is protected. Look for specific security certifications, compliance, communication styles and technology when comparing offsite backup providers, as well as the basic disaster recovery criteria of geographic area, accessibility, security, environment and costs discussed in section 5.2 Location for Disaster Recovery.

Compliance
One way to gain assurance of an offsite backup/data center provider’s security practices is to inquire about their industry security and compliance reports.

Vendors that have invested the significant time and resources toward building out and meeting regulatory requirements for operating excellence and security practices will have undergone independent audits. They should also be able to provide a copy of their audit report under NDA (non-disclosure agreements).

Look for these data center audit compliance reports:

  • SSAE 16 (Statement on Standards for Attestation Engagements), which replaced SAS 70 (Statement on Auditing Standards), measures controls and processes related to financial recordkeeping and reporting. A SOC 1 (service organization controls) report measures and reports on the same controls as an SSAE 16 report.
  • A SOC 2 audit is actually most closely related to reporting on the security, availability and privacy of the data in your offsite backup and data hosting environment. A SOC 2 report is highly recommended for companies that host or store large amounts of data, particularly data centers. A SOC 3 report measures the same controls as a SOC 2, yet has less technical detail, and can be used publicly.
  • For specific industries that deal with certain types of data, there exist more stringent sets of compliance regulations. For the healthcare industry, or any company that touches protected health information (PHI), HIPAA compliance (Health Insurance Portability and Accountability Act) is federally mandated to protect health data. If your disaster recovery/offsite backup data center provider has undergone an independent HIPAA audit of its facilities and processes, you can be assured your data is secure.
  • For e-commerce, retail, franchise and any other company that touches credit cardholder data (CHD), PCI DSS compliance (Payment Card Industry Data Security Standard) is the set of regulatory requirements designed to protect CHD.

Communication
When there’s an interruption in your service or issue at the data center, you should be able to count on your disaster recovery provider to promptly communicate with you in order to give your IT staff or clients proper notification. An updated contact list and tested communication plan should be key aspects of your disaster recovery and business continuity plan.

The lack of communication can put a company out of business and leave coworkers and customers in the dark. Designate a primary contact and backup contacts from your company to be the first to know in the event of a disaster, as well as assemble a technical team that can work with your provider, if outsourcing your disaster recovery solution.

When searching for an offsite backup/data center provider, ask about their communication policies and processes. Good communication can also give you insight into their level of transparency into their business operations. See section 5.5.4. Communication Plan Testing for more about establishing a realistic and effective communication plan between your company and vendors.

Fully Reserved or First-Come, First-Served?
Does your provider offer fully reserved servers for disaster recovery? Or do they lease a number of physical servers and resources to be used on a first-come, first-served basis, shared with other companies?

Providers that offer the first-come, first-served model allow companies to load applications and attempt to recover operations on “cold” servers: bare metal servers with no operating system (OS), applications, patches or data. Recovery takes longer due to the time spent retrieving tape backups and traveling to the secondary site.

Ask your provider if they offer fully reserved servers for complete assurance that your company will be able to recover your data as quickly as possible, without the chance of being second in line. In addition, virtualization can eliminate the need of restoring from tape or disk, thus reducing recovery times compared to traditional disaster recovery in which physical servers need to first be loaded with the OS and application software, as well as patched to the last configuration used in production before data restoration.

5.4. SAN-to-SAN Replication

SAN (Storage Area Network)
For compliance reasons or due diligence, many companies not only want a local backup that they can recover from very quickly, but also need to get that data offsite in the event that they experience a site failure. A SAN can help with these backup and recovery needs.

SAN Snapshots
A snapshot is a point-in-time reference of data that you can schedule after your database dumps and transaction logs have finished running. A SAN snapshot gives you a virtual copy/image of what your database volumes, devices or systems look like at a given time. If you have an entire server failure, you can very quickly spin up a server, install SQL or do a bare metal restore, then import all of your data and get your database server back online.

SAN-to-SAN Replication
The counterpart to SAN snapshots is SAN-to-SAN replication (or synchronization). With replication, if you have a SAN in one data center, you can send data to another SAN in a different data center location. You can back up very large volumes of data very quickly using SAN technology, and you can also transfer that data to a secondary location independently of your snapshot schedule.

This is more efficient because traditional backup windows can take a very long time and impact the performance of your system. Keeping it all on the SAN allows backups to be completed very quickly, and the data copy can run in the background so it does not impact the performance of your systems.

You can configure and maintain snapshots on both your primary and disaster recovery sites; for example, you can keep seven days’ worth of snapshots on your primary site and seven days of replicated snapshots on your disaster recovery site.
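Snapshot scheduling and retention are configured through each SAN vendor's own management tools, so the vendor-neutral sketch below only illustrates the retention logic itself: keep the most recent seven days of snapshots and flag the rest for pruning. The timestamps are hypothetical.

from datetime import datetime, timedelta
from typing import List, Optional

def snapshots_to_prune(snapshot_times: List[datetime],
                       retention_days: int = 7,
                       now: Optional[datetime] = None) -> List[datetime]:
    """Return the snapshot timestamps that fall outside the retention window."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=retention_days)
    return [t for t in snapshot_times if t < cutoff]

if __name__ == "__main__":
    now = datetime.now()
    snaps = [now - timedelta(days=d) for d in range(10)]   # hypothetical daily snapshots
    expired = snapshots_to_prune(snaps, retention_days=7, now=now)
    print("%d snapshot(s) would be pruned" % len(expired))  # the two oldest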

However, SANs are fairly expensive, and snapshots and replication can use a lot of space. You will also need specialized staff to configure and manage SAN operations.

SAN-based recovery focuses on large volumes of data, and it is more difficult to recover individual files. Traditional recovery focuses on critical business files for more granular recovery, but that comes at the cost of speed. With a large volume of data, traditional recovery can be much slower than SAN-based snapshots.

SAN-to-SAN replication can support a private cloud environment and provide fast recovery times (RTO of 1 hour and RPO of minutes). After a disaster is mitigated, SAN-to-SAN replication provides a smooth failback from the secondary site to the production site by reversing the replication process.

SAN vs. Traditional Backup and Disaster Recovery
Traditionally, 10 or 15 years ago, people had email servers, FTP/document servers, unstructured data and database servers. The backup and recovery of these systems must be viewed differently as they each present their own unique challenges.

Email servers are mission critical, highly transactional and essential to a business. They may have SQL or custom databases, and they can take a long time to rebuild after a disaster. The actual installation and configuration of the application that sits on top of the database can be very intensive, and rebuilding that system may put you over your recovery time objective (RTO).

For a smaller company, an Exchange server may be 100 to 200 GB in size. FTP/file servers can be terabytes in size and contain large volumes of unstructured data. They are less transactional than email servers, and server configuration may be minimal, but each individual file must be backed up. When looking at systems of that size, you should move beyond traditional backups and start leveraging SAN (Storage Area Network) technology, which is essentially a large group of disks.

Instead of having a backup window that runs for an entire day and can slow operations, you can use SAN snapshot technology, which allows you to back up more efficiently. If you need a backup of your FTP/file servers every night, you can take a snapshot during off-hours very quickly, in a matter of seconds to a minute. SAN snapshots can back up a large amount of data with very little impact on your production environment.

The tradeoff is that it can be slightly harder to restore the data, because you would need to bring the file volume online and present it to the server. However, it can be faster than having to restore terabytes of data from a tape backup.

For standalone database servers with a large volume of structured data that are highly transactional, consider using SAN snapshot technologies with specified volumes for database dumps and transaction logs.

5.5. Best Practices

5.5.1. Encryption

What is encryption? Encryption takes plaintext (your data) and encodes it into unreadable, scrambled text using algorithms that render it unreadable unless a cryptographic key is used to convert it. Encryption ensures data security and integrity even if accessed by an unauthorized user.

According to NIST (the National Institute of Standards and Technology), encryption is most effective when applied to both the primary data storage device and to backup media going to an offsite location, in case data is lost or stolen on its way or at the site; in other words, to data in transit and at rest.[22] NIST also recommends maintaining a solid cryptographic key management process so that encrypted data can be read and made available as needed (decryption).
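As a minimal sketch of encrypting backup data before it leaves the primary site, the example below uses the Python cryptography library's Fernet recipe (authenticated symmetric encryption). The library choice and file names are assumptions for illustration; the key itself must be stored and managed separately from the backups, per the key management guidance above.

from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_backup(plain_path, encrypted_path, key):
    """Encrypt a backup file so only key holders can read it at the offsite location."""
    with open(plain_path, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(encrypted_path, "wb") as f:
        f.write(ciphertext)

def decrypt_backup(encrypted_path, key):
    """Recover the original backup contents during a restore or disaster recovery test."""
    with open(encrypted_path, "rb") as f:
        return Fernet(key).decrypt(f.read())

if __name__ == "__main__":
    key = Fernet.generate_key()  # must live in a separate, managed key store
    with open("nightly_backup.tar", "wb") as f:   # stand-in for a real backup file
        f.write(b"example backup contents")
    encrypt_backup("nightly_backup.tar", "nightly_backup.tar.enc", key)
    print(decrypt_backup("nightly_backup.tar.enc", key))

Note that losing the key makes the encrypted backups unrecoverable, which is why the recovery test described below should exercise both the backup and the key.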

According to data security expert Chris Heuman, Certified Information Systems Security Professional (CISSP), performing a disaster recovery test of encrypted data should be an important part of your business continuity strategy. Forcing recovery from an encrypted backup source and forcing a recovery of the encryption key to the recovery device allows organizations to find out if encryption is effective before a real disaster or breach occurs.

Encryption for HIPAA and PCI Compliance
Encryption is considered a best practice for data security and is recommended for organizations with sensitive data, such as healthcare or credit card data. It is highly recommended for the healthcare industry, which must report to the federal Dept. of Health and Human Services (HHS) if unencrypted data is exposed, lost, stolen or misused.

The federally mandated HIPAA Security Rule for healthcare organizations handling electronic protected health information (ePHI) dictates that organizations must:

In accordance with §164.306… Implement a mechanism to encrypt and decrypt electronic protected health information. (45 CFR § 164.312(a)(2)(iv))

HIPAA also mandates that organizations must:

§164.312(e)(2)(ii): Implement a mechanism to encrypt electronic protected health information whenever deemed appropriate.

Protecting ePHI at rest and in transit means encrypting not only data collected or processed, but also data stored or archived as backups.

Organizations that deal with credit cardholder data must adhere to PCI DSS standards, which require encryption only if cardholder data is stored.[23] PCI explicitly states:[24]

3.4 Render PAN (Primary Account Number) unreadable anywhere it is stored (including on portable digital media, backup media, and in logs) by using any of the following approaches:

    • One-way hashes based on strong cryptography (hash must be of the entire PAN)
    • Truncation (hashing cannot be used to replace the truncated segment of PAN)
    • Index tokens and pads (pads must be securely stored)
    • Strong cryptography with associated key-management processes and procedures

3.4.1.c Verify that cardholder data on removable media is encrypted wherever stored.
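The sketch below illustrates two of the approaches listed above, truncation and a keyed one-way hash of the entire PAN. It is an illustration only, not a PCI-validated implementation; the secret key is a placeholder and would itself have to come from a securely managed key store.

import hashlib
import hmac

HMAC_KEY = b"replace-with-a-securely-managed-key"   # placeholder; never hard-code keys in practice

def truncate_pan(pan: str) -> str:
    """Truncation: keep at most the first six and last four digits."""
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]

def hash_pan(pan: str) -> str:
    """Keyed one-way hash (strong cryptography) of the entire PAN."""
    return hmac.new(HMAC_KEY, pan.encode(), hashlib.sha256).hexdigest()

if __name__ == "__main__":
    pan = "4111111111111111"   # standard test card number, not a real account
    print(truncate_pan(pan))   # 411111******1111
    print(hash_pan(pan))       # 64-character hex digest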

While addressable under HIPAA and explicitly required under PCI DSS, encryption is also considered an industry best practice: no longer just an option, but necessary to protect backup data at rest and in transit to your disaster recovery/offsite backup site.

For more on encryption from both a technical and a compliance perspective, check our White Paper section for our Encryption white paper, to be released Fall 2013, or watch our recorded encryption webinar series featuring IT and data security professional guest speakers as well as experts from Online Tech.

5.5.2. Network Replication

With a single stand-alone server, cloud-based disaster recovery allows you to ship a copy of your virtual server image offsite to run on a cloud server in the event of a disaster. However, for enterprise or more complex server configurations, more than just a server image is required for recovery. Firewall rules, VLANs, VPNs and the rest of the network configuration must be fully replicated at the disaster recovery site before the site can go live.

In order to achieve rapid recovery time objectives (RTOs), the server and network must be fully replicated at the secondary site in synchronicity with the production site as changes are made. Ideally, a cloud-based disaster recovery provider should have control of both the production and disaster recovery sites to ensure network replication.

5.5.3. Testing

Testing your disaster recovery plan at least annually is a best practice for numerous reasons, including verifying that the plan actually works and training your team in the process. Testing also allows you to figure out where weaknesses lie, or gaps in the process that need to be addressed. According to NIST, the following areas should be tested:

  • Notification procedures
  • System recovery on secondary site
  • Internal and external connectivity
  • System performance with secondary equipment
  • Restoration of normal operations

Testing with a traditional disaster recovery plan can be time-consuming and costly due to the retrieval, restoration and system re-configuration required; as a result, conventional plans are rarely tested through a full failover scenario. With cloud-based disaster recovery, testing is easier, faster and less disruptive to your production environment and business operations than traditional disaster recovery.

Since the cloud offers offsite backup of the entire virtual server in sync with the production site, there is no need to retrieve tapes to test full recovery.

5.5.4. Communication Plan Testing

Part of your overall disaster recovery and business continuity planning should involve a well-documented communication plan based on your BIA (Business Impact Analysis).

Mapping out the interdependencies and complexity of your organization can help you identify who is the proper point of contact for any given critical function. Testing your communication plan is key to getting everyone on board and working together to achieve a smooth and realistic recovery.

Determine who is responsible for officially declaring a disaster – from IT to executives, a communication plan should be in place for business interruption or disaster notification, and then a formal declaration. After declaration, a process should be established for notifying shareholders, employees, customers, vendors and the general public, if necessary.

Aside from notification, a trained disaster recovery IT team should be identified for the secondary site, as well as for production. If working with a disaster recovery provider, ensure your contracts and agreements reflect notification and communication policies to clarify their roles and responsibilities involved in facilitating recovery.

Someone should be tasked with keeping a well-organized and up-to-date contact list for those involved in the communication plan, with cell phone and home phone numbers as well as an alternative email address in the event that corporate email/phone systems are down during a disaster.

6.0. Conclusion

Disaster recovery technology advancements have streamlined the process to offer a faster, more accurate and complete recovery solution. Leveraging the capabilities of a disaster recovery as a service (DRaaS) provider allows organizations to realize these benefits, including cost-effective and efficient testing to ensure plan viability.

The time and resource-intensive challenge of managing a secondary disaster recovery site that both meets stringent industry compliance requirements and protects mission critical data and applications can be relieved with the right disaster recovery partner.

Here is a high-level overview of what to look for in an offsite backup and disaster recovery provider and plan (see section 7.1, Questions to Ask Your Disaster Recovery Provider, for more details):

  • Strategic location
  • Risk of natural disaster
  • Recovery time objective (RTO)
  • Recovery point objective (RPO)
  • Cloud-based disaster recovery
  • High availability/redundancy
  • Annual testing
  • Compliance audits and reports

Contact our disaster recovery and offsite backup experts at Online Tech for more information if you still have questions about IT disaster recovery planning or our disaster recovery data centers.

Visit: www.onlinetech.com
Email: This email address is being protected from spambots. You need JavaScript enabled to view it.
Call: 734.213.2020

7.0. References

7.1. Questions to Ask Your Disaster Recovery Provider

When you look to a third party disaster recovery provider, what kind of questions should you ask to ensure your critical data and applications are safe? Read on for tips on what to look for in a disaster recovery as a service (DRaaS) solution from your hosting provider.

1. Do you have the following data center certifications: SSAE 16, SOC 1, 2 and 3?

Data center certifications should be up-to-date, backed up by an auditor’s report, and comprehensive of all security-related controls. Here’s a brief snippet of what each one measures:

SSAE 16 - The Statement on Standards for Attestation Engagements (SSAE) No. 16 replaced SAS 70 in June 2011 – if your current disaster recovery provider only has a SAS 70 certification, keep looking! SSAE 16 has made SAS 70 extinct.

A SSAE 16 audit measures the controls, design and operating effectiveness of data centers, as relevant to financial reporting. (Note: SSAE 16 does not provide assurance of controls directly related to data centers/disaster recovery providers).

SOC 1 - The first of three new Service Organization Controls reports developed by the AICPA, this report measures the controls of a data center as relevant to financial reporting. SOC 1 is essentially the same as SSAE 16 – the purpose of the report is to meet financial reporting needs of companies that use data hosting services, including disaster recovery.

SOC 2 - SOC 2 measures controls specifically related to IT and data center service providers, unlike SOC 1 or SSAE 16. The five trust principles are security, availability, processing integrity (ensuring system accuracy, completeness and authorization), confidentiality and privacy.

SOC 3 - SOC 3 delivers an auditor’s opinion of SOC 2 components with the additional seal of approval needed to ensure you are hosting with an audited and compliant data center. A SOC 3 report is less detailed and technical than a SOC 2 report.

2. What is your recovery time objective and recovery point objective SLA?

Recovery Time Objective (RTO): This refers to the maximum length of time a system can be down after a failure or disaster before the company is negatively impacted by the downtime.

Recovery Point Objective (RPO): This specifies the point in time to which data must be recoverable – in other words, how much recent data could be lost. The RPO determines the minimum frequency at which backups need to occur, from every hour to every 5 minutes.

Clarifying the time objectives with your disaster recovery provider can help your organization plan for the worst and know what to expect, when.
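As a small worked example of how the two objectives translate into planning numbers, the sketch below treats the RPO as an upper bound on the backup interval and checks an outage against the RTO. The figures are illustrative, not an SLA.

```python
from datetime import timedelta

def max_backup_interval(rpo: timedelta) -> timedelta:
    """Backups must run at least as often as the RPO to bound potential data loss."""
    return rpo

def meets_rto(outage_duration: timedelta, rto: timedelta) -> bool:
    """True if recovery completed within the agreed recovery time objective."""
    return outage_duration <= rto

if __name__ == "__main__":
    rpo = timedelta(minutes=5)          # at most five minutes of data may be lost
    rto = timedelta(hours=4)            # systems must be restored within four hours
    print("Back up at least every:", max_backup_interval(rpo))
    print("3-hour outage meets RTO:", meets_rto(timedelta(hours=3), rto))
```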

3. Where are your disaster recovery data centers located?

Natural disasters happen at any time, almost anywhere – but you can decrease your odds of experiencing them by choosing to partner with a disaster recovery provider that has data center facilities located in a disaster-free zone. The Midwest is one region that is relatively free from major disasters. Read more in High Density of Data Centers Correlate with Disaster Zones; Michigan Provides Safe Haven.

4. Do you offer cloud-based disaster recovery?

As VMware.com states, “traditional disaster recovery solutions are complex to set up. They require a secondary site, dedicated infrastructure, and hardware-based replication to move data to the secondary site.”

With cloud-based disaster recovery, you could achieve a 4 hour RTO and 24 hour RPO. Cloud-based disaster recovery replicates the entire hosted cloud (servers, software, network and security) to an offsite data center, allowing for far faster recovery times than traditional disaster recovery solutions can offer.

5. How often do you test your disaster recovery systems?

Disaster recovery providers should test at least annually to ensure systems are prepared for an emergency response whenever a disaster is declared. Testing also allows for a valuable learning experience – if anything goes wrong, professionals can investigate and remediate before an actual disaster occurs. It’s also a test run for the personnel involved in managing the event to ensure the documented communication plan actually works as anticipated.

8.0. Contact Us

Contact our disaster recovery and offsite backup experts at Online Tech for more information if you still have questions about IT disaster recovery planning or our disaster recovery data centers.

Visit: www.onlinetech.com
Email: This email address is being protected from spambots. You need JavaScript enabled to view it.
Call: 734.213.2020


[1] Forrester Research and Disaster Recovery Journal, The State of Business Continuity Preparedness; http://www.drj.com/images/surveys_pdf/forrester/2011_Forrester_SOBC.pdf

[2] Online Tech, Risks on the Rise: Making a Case for IT Disaster Recovery; http://resource.onlinetech.com/risks-on-the-rise-making-a-case-for-it-disaster-recovery/

[3] Gigaom, The States with the Most Data Centers Are Also the Most Disaster-Prone [Maps]; http://gigaom.com/2013/01/10/the-states-with-the-most-data-centers-are-also-the-most-disaster-prone-maps/

[4] PCI Security Standards Council, PCI DSS Requirements and Security Assessment Procedures, Version 2.0; https://www.pcisecuritystandards.org/documents/pci_dss_v2.pdf (PDF)

[5] U.S. Dept. of Health and Human Services (HHS), HIPAA Security Series: Security Standards: Organizational, Policies and Procedures and Documentation Requirements; http://www.hhs.gov/ocr/privacy/hipaa/administrative/securityrule/pprequirements.pdf (PDF)

[6] PCI Security Standards Council, PCI DSS Requirements and Security Assessment Procedures, Version 2.0; https://www.pcisecuritystandards.org/documents/pci_dss_v2.pdf (PDF)

[7] PCI Security Standards Council, PCI DSS Requirements and Security Assessment Procedures, Version 2.0; https://www.pcisecuritystandards.org/documents/pci_dss_v2.pdf (PDF)

[8] U.S. Dept. of Health and Human Services, Administrative Safeguards; http://www.gpo.gov/fdsys/pkg/CFR-2009-title45-vol1/pdf/CFR-2009-title45-vol1-sec164-308.pdf (PDF)

[9] FEMA (Federal Emergency Management Agency), Business Continuity Plan; http://www.ready.gov/business/implementation/continuity

[10] FEMA (Federal Emergency Management Agency), Risk Assessment; http://www.ready.gov/risk-assessment

[11] Online Tech, Business Continuity in Lean Times (Webinar); http://www.onlinetech.com/events/business-continuity-in-lean-times

[12] Ready.gov, Business Impact Analysis Worksheet; http://www.ready.gov/sites/default/files/documents/files/BusinessImpactAnalysis_Worksheet.pdf

[13] Online Tech, Business Continuity in Lean Times (Webinar); http://www.onlinetech.com/events/business-continuity-in-lean-times

[14] Online Tech, Seeking a Disaster Recovery Solution? Five Questions to Ask Your DR Provider; http://resource.onlinetech.com/five-questions-to-ask-your-disaster-recovery-provider/

[15] Online Tech, Online Tech Expert Interview: What is High Availability?; http://resource.onlinetech.com/michigan-data-center-operator-online-tech-expert-interview-what-is-high-availability/

[16] Online Tech, How the Cloud is Changing the Data Center’s Bad Reputation for Energy Inefficiency; http://resource.onlinetech.com/how-the-cloud-is-changing-the-data-centers-bad-reputation-for-energy-inefficiency/

[17] Data Center Knowledge/Online Tech, How the Cloud Changes Disaster Recovery; http://www.datacenterknowledge.com/archives/2011/07/26/how-the-cloud-changes-disaster-recovery/

[18] Online Tech, Disaster Recovery in Depth; http://www.onlinetech.com/events/disaster-recovery-in-depth/

[19] CIOUpdate.com, Disaster Recovery Planning; http://www.cioupdate.com/trends/article.php/3872926/Disaster-Recovery-Planning---How-Far-is-Far-Enough.htm

[20] NIST (National Institute of Standards and Technology), Special Publication 800-34 Rev. 1 – Contingency Planning Guide for Federal Information Systems; http://csrc.nist.gov/publications/nistpubs/800-34-rev1/sp800-34-rev1_errata-Nov11-2010.pdf (PDF)

[21] NIST (National Institute of Standards and Technology), Special Publication 800-34 Rev. 1 – Contingency Planning Guide for Federal Information Systems; http://csrc.nist.gov/publications/nistpubs/800-34-rev1/sp800-34-rev1_errata-Nov11-2010.pdf (PDF)

[22] NIST (National Institute of Standards and Technology), Special Publication 800-34 Rev. 1 – Contingency Planning Guide for Federal Information Systems; http://csrc.nist.gov/publications/nistpubs/800-34-rev1/sp800-34-rev1_errata-Nov11-2010.pdf (PDF)

[23] Chris Heuman CHP, CHSS, CSCS, CISSP, Practice Leader for RISC Management and Consulting, Encryption – Perspective on Privacy, Security & Compliance; http://www.onlinetech.com/events/encryption-perspective-on-privacy-security-a-compliance (Webinar)

[24] PCI Security Standards Council, PCI DSS Requirements and Security Assessment Procedures, Version 2.0; https://www.pcisecuritystandards.org/documents/pci_dss_v2.pdf (PDF)

…(continue reading)

Compliant Hosting for Mission Critical Data

Protect Your Company with Online Tech’s Security, Compliance & Availability

Online Tech is your strategic compliant data center partner, dedicated to exceptional hosting experiences today, and as your business evolves tomorrow. With the complete range of hosting options from fully managed private clouds to dedicated servers and private colocation, we customize server, storage, and network environments to meet your needs.

  • Cloud Hosting Overview
  • Managed Server Hosting Overview
  • Colocation Hosting Overview
  • Disaster Recovery Hosting Overview

We embrace our responsibilities to protect sensitive data, and invest in independent annual audits across all current SOX, HIPAA and PCI DSS compliance requirements. We’ll ease your compliance burdens by sharing our independent SSAE 16/SOC 1, SOC 2, HIPAA and PCI DSS audit reports under NDA.

Have questions? Call us at 734.213.2020, email  This email address is being protected from spambots. You need JavaScript enabled to view it. , or use our handy Contact Form. Or, Chat with our team now.

Compliant Hosting Resources
White Papers
Compliant Hosting Articles
Webinars

…(continue reading)

Meet Health IT Business Development Expert: Peggy McShane



Peggy McShane, Managing Director, Segue Health

Peggy leads business development and client account management for the Federal, Commercial, and Non-Profit health market segments to define innovative services and solutions, with the ultimate goal of helping emergent health systems transform health data into practical and useful information. Peggy's background in health and health IT includes: a ten-year career in the hospital setting as a Director of Medical Records; a ten-plus-year career as a health and health IT consultant with Booz Allen Hamilton; and a five-year career as the business owner of Net New Growth, LLC.

 
Meet Health IT Business Development Expert, Peggy McShane at HIMSS 13!

Attending the national healthcare conference HIMSS 13 in March?

Sign up to schedule a free one-on-one consultation with health IT business development expert, Peggy McShane!

Meet us at booth #1369!

Sign up today as time slots are limited!


(Bio cont.)

  • Federal, Commercial, and Non-Profit Health Business Development
  • Health and Health Information Technology (IT) Consulting Expertise
  • Past President, Health Information Management and Systems Society (HIMSS) National Capital Area (NCA); Co-Chair, HIMSS National Small Business and Diversity Roundtable

As the Managing Director of Segue Health, Peggy leads business development and client account management for the Federal, Commercial, and Non-Profit health market segments. As part of this leadership, she works with Segue’s ownership and technology teams to define innovative services and solutions, with the ultimate goal of helping emergent health systems transform health data into practical and useful information.

Peggy has an extensive professional background in health and health IT, including: a ten-year career in the hospital setting as a Director of Medical Records; a ten+ year career as a health and health IT consultant with Booz Allen Hamilton; and a five-year career as the business owner of Net New Growth, LLC. In November 2012, Net New Growth, LLC merged with Segue Technologies, Inc. to form “Segue Health.”

Peggy holds a Bachelor of Science in Health Records Administration from York College of Pennsylvania, a Master of Science in Information Management from Marymount University, and a Master’s Certificate in Strategic Marketing from Tulane University. As a health IT focused professional, Peggy was one of the first in the nation to receive the Certified Professional in Healthcare Information and Management Systems (CPHIMS) credential from HIMSS.


Health IT Webinars:

Implications of Recent Medicare Announcements on Trends in Physician Payment Methods

Recent announcements by Medicare regarding pilot programs and 2013 payment changes represent developments in methodologies and policy changes that will impact Medicare physician payment in the future. This webinar explores these recent announcements and the underlying trends that will likely have a dramatic impact on physician payment in the future.

Watch the video recording and view the slides.

Overview of the SCOTUS Decision and Its Impact on Healthcare IT

This webinar discusses the broader implications of the recent Supreme Court decision on healthcare and how it will affect meaningful use for covered entities and business associates alike.

Watch the video recording and view the slides.

Healthcare Security Vulnerabilities

This webinar reviews several real healthcare-related security engagements, provides an overview of the IT security world today, provides insight into the hacking community, discusses several proactive methodologies for mitigating security vulnerabilities, and explains the shortcomings of some security testing methodologies.

Watch the video recording and view the slides.


Health IT Articles

Technical Solutions to Meet the OCR HIPAA Audit Protocol

This summer, the Office for Civil Rights (OCR) announced its own set of guidelines for auditing covered entities pursuant to the HITECH Act audit mandate. As the governing entity of HIPAA law, the OCR determines if an organization is … Continue reading →

Addressing the Top IT Security Issues of 2012

Trustwave’s 2012 Global Security Report produced several key findings on data breaches and security issues across many industries. Here are a few of the findings, with resources to help remedy them: Customer records made up 89 percent of all breached … Continue reading →

2012 State of Mobile Health IT

The 2nd Annual HIMSS Mobile Technology Survey, sponsored by Qualcomm Life, found that over 90 percent of respondents reported physicians using mobile technology in their everyday operations. Eighty percent of physicians use mobile technology to provide patient care, and nearly … Continue reading →

…(continue reading)

Meet Mobile Healthcare Applications Expert: Dave Bennett

Meet Mobile Healthcare Applications Expert: Dave Bennett!


Dave Bennett, National Sales Director, Healthcare

Dave directs AnyPresence’s healthcare sales. He has worked in mobility with a focus on healthcare since 2010, and was involved in growing mobility programs with companies such as Aetna and Independence Blue Cross. Prior to getting into mobility, Dave was with Axeda Corporation, an M2M enterprise software company providing remote connectivity to medical device and life sciences companies such as Siemens, GE and Philips.

 
Meet Mobile Healthcare Applications Expert, Dave Bennett at HIMSS 13!

Attending the national healthcare conference HIMSS 13 in March?

Sign up to schedule a free one-on-one consultation with mobile healthcare applications expert, Dave Bennett!

Meet us at booth #1369!

Sign up today as time slots are limited!


(Bio cont.)

Dave’s earlier career was spent in medical devices, first selling clean rooms for immune-suppressed patients, then spending 5 years in acute hemodialysis with HemoCleanse, where he established clinical trials in Japan, the US and UK, achieved FDA clearance and subsequent sales to hospitals worldwide for an innovative therapy for multi-organ failure. Later, Dave created channel sales and pre-FDA revenue for Immunetics, a company with a new approach to HIV confirmatory testing.

Dave graduated from Duke University and received his Masters from Purdue University.


Mobile Healthcare (mHealth) Application Webinars:

FDA Regulation of Mobile Health Devices

This recorded webinar discusses how software has increasingly become an integral part of healthcare, whether through incorporation into medical devices, as a stand-alone system that practitioners use to make clinical decisions, or as a means for transmitting and storing medical records.

Watch the video recording and view the slides.

BYOD: From Concept to Reality

During this presentation, Kirk Larson, Vice President and Chief Information Officer at Children’s Hospital Central California, explains how the hospital uses a virtual environment to securely manage a BYOD (Bring Your Own Device) environment without jeopardizing sensitive data.

Watch the video recording and view the slides.


Mobile Healthcare Application Articles

Mobile Security White Paper: Policies, Technology & BYOD

The integration of diverse mobile devices throughout the work environment is both inevitable and enabling. Workflows previously tied to less portable devices can now enjoy free access wherever a wireless signal allows.

But enabling access also presents security, privacy, and confidentiality concerns. Industries that rely on sensitive data such as healthcare, financial, and insurance have heightened risks and concerns. Addressing security concerns is nothing new for these industries, but mobile technologies present a dizzying array of uniquely configured, user-selected hardware and software … Continue reading →

Global Mobile Trends See Rise in BYOD; Policies Lag

A recent global survey conducted by Cisco Internet Business Solutions Group (CIBSG) found that 89 percent of IT leaders from enterprise and mid-sized companies supported BYOD (Bring Your Own Device) in some form – supporting the movement toward an increase … Continue reading →

2012 State of Mobile Health IT

The 2nd Annual HIMSS Mobile Technology Survey, sponsored by Qualcomm Life, found that over 90 percent of respondents reported physicians using mobile technology in their everyday operations. Eighty percent of physicians use mobile technology to provide patient care, and nearly … Continue reading →

…(continue reading)

Meet Healthcare Disaster Recovery Expert: Chris Heuman



Christopher Heuman CHP, CHSS, CSCS, CISSP, Practice Leader, RISC Management & Consulting

Prior to consulting, Chris Heuman  worked in healthcare organizations in an information systems and data security capacity for over 20 years. Chris held increasingly responsible positions in healthcare IT from systems and network administration to project management, infrastructure management and information security. Prior to founding RISC Management, Chris developed consulting programs focused on information security and compliance specifically for healthcare institutions as a Director of Engineering Services at mCurve, and Practice Leader for Compliance and Security at ecfirst.

 
Meet Healthcare Disaster Recovery Expert, Chris Heuman at HIMSS 13!

Attending the national healthcare conference HIMSS 13 in March?

Sign up to schedule a free one-on-one consultation with healthcare business continuity & disaster recovery expert, Chris Heuman, Practice Leader at RISC Management.

Learn about business continuity management and planning, risk analysis, data loss prevention and disaster recovery for healthcare companies in need of HIPAA compliance.

Sign up today as time slots are limited!

Meet us at booth #1369!


(Bio cont.)

Through his practical experience and certifications as a Certified HIPAA Professional (CHP), Certified Security Compliance Specialist (CSCS) and Certified Information Systems Security Professional (CISSP), Chris is uniquely experienced to assist healthcare organizations in understanding and meeting the myriad compliance and security regulations and requirements they face.

As the Practice Leader at RISC Management, Chris helps providers and healthcare technology organizations by providing services in the areas of risk analysis, vulnerability assessment, business continuity management and planning, business impact analysis, disaster recovery planning, social engineering tests, data loss prevention, education and training, project management and consensus building at all organizational levels. In addition, Chris has presented training programs in the HIPAA, HITECH, compliance and security space, and has been a featured presenter for statewide healthcare organizations, for Health Information Exchanges, as a guest speaker for MBA programs, and has delivered tailored training to dozens of healthcare-related organizations and accreditation bodies.

Chris can be contacted at This email address is being protected from spambots. You need JavaScript enabled to view it.


Disaster Recovery Webinar Series:

Online Tech's Systems Support Manager Steve Aiello led a three-part webinar series on the topic of disaster recovery, from case studies to technical implementation.

Business Continuity in Lean Times

Businesses have a responsibility to their stakeholders to think about their long-term viability - Steve provides an overview of disaster recovery and business continuity with real company examples and the benefits of developing a business continuity plan.

Watch the video recording and view the slides.

Disaster Recovery in Depth

This webinar transitions from theory and thought processes into practical application of disaster recovery. It covers various disaster response options including:

  • Hidden Benefits of BC / DR planning
  • Disaster Case Studies
  • Staffing / Facilities Recovery Strategies
  • Process and organizational design
  • Facilities design that facilitates DR
  • IT tools available to increase availability

Watch the video recording and view the slides.

Technical Disaster Recovery Implementation

This webinar transitions from theory and thought processes into the implementation of technical disaster recovery. We identify the necessary technical requirements, utilizing business continuity concepts and technical strategies.

Watch the video recording and view the slides.


Disaster Recovery Articles

Disaster Recovery for HIPAA Applications - It's All About Availability of PHI

HIPAA – The Health Insurance Portability and Accountability Act focuses on three key criteria for handling Protected Health Information (PHI):  availability, confidentiality and integrity. This blog post focuses on availability as it applies to HIPAA applications and HIPAA data. Availability … Continue reading →

Benefits of Disaster Recovery in Cloud Computing

There are a lot of benefits with cloud computing – cost-effective resource use, rapid provisioning, scalability and elasticity. One of the most significant advantages to cloud computing is how it changes disaster recovery, making it more cost-effective and lowering the bar for enterprises to deploy comprehensive DR plans for their entire IT infrastructure.  … Continue reading →

Risks on the Rise: Making a Case For IT Disaster Recovery

According to the Forrester/Disaster Recovery Journal Business Continuity Preparedness Survey from 2011 Q4, the top increasing risks cited by a survey of decision-makers or influencers when it comes to IT planning and purchasing for business continuity were as follows: (48%) … Continue reading →

…(continue reading)

Meet the Experts Thank You

Thank You for Registering!

An Online Tech representative will contact you shortly.

Have questions? Call us at 734.213.2020, email This email address is being protected from spambots. You need JavaScript enabled to view it. , or use our handy Contact Form. Or, Chat with our team now.


Meet Health IT Experts at HIMSS 13:

Legal Implications of the Final HIPAA Omnibus Rule


Brian Balow, Attorney, Dickinson Wright

Brian Balow is a member of the law firm Dickinson Wright PLLC, where he concentrates his practice in the areas of information technology, healthcare law, and intellectual property. Brian has worked with Fortune 100 clients over the last fifteen years on Information Technology-related matters, including the drafting and negotiation of agreements, formulation and implementation of policies and procedures for the management of IT (including outsourcing-related issues), counseling and advising on privacy and data security issues, and assisting clients in favorably resolving disputes with IT vendors (including disputes with the BSA and SIIA).

Find out more about Brian Balow.


Healthcare Business Continuity & Disaster Recovery

Christopher Heuman CHP, CHSS, CSCS, CISSP, Practice Leader, RISC Management & Consulting

As the Practice Leader at RISC Management, Chris helps providers and healthcare technology organizations by providing services in the areas of risk analysis, vulnerability assessment, business continuity management and planning, business impact analysis, disaster recovery planning, social engineering tests, data loss prevention, education and training, project management and consensus building at all organizational levels.


Mobile Healthcare Applications


Dave Bennett, National Sales Director, Healthcare

Dave directs AnyPresence’s healthcare sales. He has worked in mobility with a focus on healthcare since 2010, and was involved in growing mobility programs with companies such as Aetna and Independence Blue Cross. Prior to getting into mobility, Dave was with Axeda Corporation, an M2M enterprise software company providing remote connectivity to medical device and life sciences companies such as Siemens, GE and Philips.


Health IT Business Development


Peggy McShane, Managing Director, Segue Health

Peggy leads business development and client account management for the Federal, Commercial, and Non-Profit health market segments to define innovative services and solutions, with the ultimate goal of helping emergent health systems transform health data into practical and useful information. Peggy's background in health and health IT includes: a ten-year career in the hospital setting as a Director of Medical Records; a ten-plus-year career as a health and health IT consultant with Booz Allen Hamilton; and a five-year career as the business owner of Net New Growth, LLC. Peggy holds a Bachelor of Science in Health Records Administration from York College of Pennsylvania, a Master of Science in Information Management from Marymount University, and a Master’s Certificate in Strategic Marketing from Tulane University.


HIPAA Compliant Cloud Computing

April Sage, CPHIMS, Director Healthcare Vertical, Online Tech

April Sage has focused on the IT industry for over two decades, initially founding a technology vocational program. In 2000, April founded a bioinformatics company that supported biotech, pharma, and bioinformatic companies in the development of research portals, drug discovery search engines, and other software systems. Since then, April has been involved in the development and implementation of online business plans and integrated marketing strategies across insurance, legal, entertainment, and retail industries. In her current position as Director Healthcare Vertical of Online Tech, April focuses on cloud computing and data center technologies that enable the healthcare space. April is a member of the inaugural cohort of the University of Michigan’s Masters Health Informatics program, a program fully and jointly sponsored by the School of Public Health and School of Information.


Online Tech's HIPAA Compliant Hosting Solutions and Data Centers

Online Tech is the only hosting provider independently HIPAA audited against the OCR HIPAA Audit Protocol and found to be 100% compliant. Our HIPAA security trained staff support a complete range of hosting options: colocation, managed dedicated servers, hybrid and private clouds, and disaster recovery.

Health IT Resources
Health Technology Topics
Presentations
White Paper
Webinars

Learn More About Online Tech's Health IT Hosting Services

We embrace our responsibilities to protect ePHI (electronic protected health information), and sign business associate agreements (BAAs) with every health care client. We'll share our documented HIPAA risk assessment and any of our independent HIPAA, PCI, and SOC audit reports upon request.

  • Cloud Hosting Overview
  • Managed Server Hosting Overview
  • Colocation Hosting Overview
  • Disaster Recovery Hosting Overview

…(continue reading)

Meet our HIPAA Law Expert: Brian Balow

Meet HIPAA Law Expert: Brian Balow


Brian Balow, Attorney, Dickinson Wright

Brian Balow concentrates his practice in the areas of IT, healthcare law, and intellectual property. Brian has worked with Fortune 100 clients over the last 15 years on IT-related matters, including the drafting & negotiation of agreements, formulation & implementation of policies & procedures for the management of IT (including outsourcing-related issues), counseling & advising on privacy & data security issues, and assisting clients in favorably resolving disputes with IT vendors (including disputes with the BSA and SIIA).

 
Meet HIPAA Law Expert, Brian Balow at HIMSS 13!

Attending the national healthcare conference HIMSS 13 in March?

Sign up to schedule a free one-on-one consultation with our HIPAA law expert, Attorney Brian Balow with Dickinson Wright.

Learn about the legal implications of the final HIPAA omnibus rule on covered entities, business associates and subcontractors.

Sign up today as time slots are limited!

Meet us at booth #1369!


(Bio cont.)

More recently, Brian has spoken and written extensively on healthcare IT and telemedicine issues (including HIPAA/HITECH issues). In 2012, Brian presented on social media issues in healthcare at HIMSS12 in Las Vegas and to the National Council of State Boards of Nursing in Idaho, on regulation of mHealth technology at the SoCal HIMSS Health IT Innovation Summit in Yorba Linda, California, and on BYOD issues at the HIMSS mHealth Summit in Washington, DC. In December of 2011, Brian contributed the chapter entitled “Allocation and Mitigation of Liability” to the ABA Health Law Section’s “E-Health, Privacy, and Security Law” treatise.

Brian is a 1988 cum laude graduate of the University of Georgia School of Law, where he was twice a scholarship recipient and was Managing Editor of the Georgia Journal of International and Comparative Law. Following graduation, Brian served as a judicial law clerk to the Hon. James Harvey in the United States District Court, Eastern District of Michigan.


Webinar featuring Brian Balow:

No More Excuses: HHS Releases Tough Final HIPAA Privacy and Security Rules

On January 17, 2013, the Department of Health and Human Services released its long-anticipated modifications to the Privacy, Security, Enforcement, and Breach Notification Rules under HIPAA/HITECH.

These modifications leave no doubt that covered entities, business associates, and their subcontractors must understand the application of these Rules to their operations, and must take steps to ensure compliance with these Rules in order to avoid liability.

The webinar discusses the modifications, their impact on covered entities, business associates, and subcontractors, and mechanisms for minimizing the risk of HIPAA liability.

Watch the video recording and view the slides.


Articles on the Final HIPAA Omnibus Rule

How the Final Omnibus Rule Affects HIPAA Cloud Computing Providers

The long-awaited final modifications to the HIPAA Privacy, Security, Enforcement and Breach Rules were introduced Thursday. The 563-page document outlines the changes that were initially slated for implementation last summer (remember the omnibus rule?). So how do these modifications affect … Continue reading →

HIPAA Omnibus Rule Narrows the HIPAA Hosting Market

The final HIPAA omnibus rule released late last week holds business associates (BAs) and subcontractors (the BA of a business associate) directly liable for compliance with the HIPAA rules, and sets a deadline for compliance with the new modifications. There’s … Continue reading →

…(continue reading)

Meet the Health IT Experts: HIMSS 13

Meet Health IT Experts at HIMSS 13!

Attending the national healthcare conference HIMSS 13 in March? Sign up to schedule a free one-on-one consultation with our health IT panel of experts.

Featuring a health IT attorney, Certified HIPAA/Information Systems Security Professional, mobile health specialist, health IT business consultant and compliant cloud computing specialist.

Sign up today as time slots are limited!

Meet Health IT Experts at HIMSS 13


More about health IT experts attending HIMSS 13 - meet us at booth #1369:

Legal Implications of the Final HIPAA Omnibus Rule


Brian Balow, Attorney, Dickinson Wright

Brian Balow is a member of the law firm Dickinson Wright PLLC, where he concentrates his practice in the areas of information technology, healthcare law, and intellectual property. Brian has worked with Fortune 100 clients over the last fifteen years on Information Technology-related matters, including the drafting and negotiation of agreements, formulation and implementation of policies and procedures for the management of IT (including outsourcing-related issues), counseling and advising on privacy and data security issues, and assisting clients in favorably resolving disputes with IT vendors (including disputes with the BSA and SIIA).

Find out more about Brian Balow.


Healthcare Business Continuity & Disaster Recovery

Christopher Heuman CHP, CHSS, CSCS, CISSP, Practice Leader, RISC Management & Consulting

As the Practice Leader at RISC Management, Chris helps providers and healthcare technology organizations by providing services in the areas of risk analysis, vulnerability assessment, business continuity management and planning, business impact analysis, disaster recovery planning, social engineering tests, data loss prevention, education and training, project management and consensus building at all organizational levels.

Find out more about Chris Heuman.


Mobile Healthcare Applications


Dave Bennett, National Sales Director, Healthcare, AnyPresence

Dave directs AnyPresence’s healthcare sales. He has worked in mobility with a focus on healthcare since 2010, and was involved in growing mobility programs with companies such as Aetna and Independence Blue Cross. Prior to getting into mobility, Dave was with Axeda Corporation, an M2M enterprise software company providing remote connectivity to medical device and life sciences companies such as Siemens, GE and Philips.

Find out more about Dave Bennett.


Health IT Business Development


Peggy McShane, Managing Director, Segue Health

Peggy leads business development and client account management for the Federal, Commercial, and Non-Profit health market segments to define innovative services and solutions, with the ultimate goal of helping emergent health systems transform health data into practical and useful information. Peggy's background in health and health IT includes: a ten-year career in the hospital setting as a Director of Medical Records; a ten-plus-year career as a health and health IT consultant with Booz Allen Hamilton; and a five-year career as the business owner of Net New Growth, LLC. Peggy holds a Bachelor of Science in Health Records Administration from York College of Pennsylvania, a Master of Science in Information Management from Marymount University, and a Master’s Certificate in Strategic Marketing from Tulane University.

Find out more about Peggy McShane.


HIPAA Compliant Cloud Computing

April Sage, CPHIMS, Director Healthcare Vertical, Online Tech

April Sage has focused on the IT industry for over two decades, initially founding a technology vocational program. In 2000, April founded a bioinformatics company that supported biotech, pharma, and bioinformatic companies in the development of research portals, drug discovery search engines, and other software systems. Since then, April has been involved in the development and implementation of online business plans and integrated marketing strategies across insurance, legal, entertainment, and retail industries. In her current position as Director Healthcare Vertical of Online Tech, April focuses on cloud computing and data center technologies that enable the healthcare space. April is a member of the inaugural cohort of the University of Michigan’s Masters Health Informatics program, a program fully and jointly sponsored by the School of Public Health and School of Information.

 

…(continue reading)

Mobile Security White Paper

Download the Mobile Security white paper (PDF)

View the full white paper below.

1.0. Executive Summary
     1.1. Mobile Growth in National Market
     1.2. Mobile Use in the Workplace
2.0. Mobile Security Issues
     2.1. Mobile Device Security Risks
     2.2. Types of Mobile Security Risks
3.0. Compliance and Mobile Devices
     3.1. Compliant Environments
     3.2. PCI DSS Recommendations for Mobile Devices
     3.3. HIPAA Recommendations for Mobile Devices
4.0. Data Security Tools
     4.1. Technical Security
     4.2. Physical Security
     4.3. Administrative Security
           4.3.1. Mobile Use Policies
5.0. Outsource vs. In-House Hosting
     5.1. Benefits of Outsourcing Hosting
     5.2. Risks of Outsourcing
6.0. Vendor Selection Criteria
     6.1. Audited Data Centers and Secure Hosting Solutions
           Reports on Compliance
           Key Data Center Audits
           Business Associate Agreement
           Staff Security Training
     6.2. Other Key Data Center Considerations
           Ownership
           Geographical Location
           Disaster Recovery
           High Availability
           Cloud Computing
           Server and storage devices
           Room to Grow
           Managed Services
7.0. Conclusion
8.0. References
     8.1. Questions to Ask Your Secure Hosting Provider
     8.2. Data Center Standards Cheat Sheet 
     8.3. Mobile Security Checklist
     8.4. BYOD Case Study
9.0. Contact Us

 

1.0. Executive Summary

The integration of diverse mobile devices throughout the work environment is both inevitable and enabling. Workflows previously tied to less portable devices can now enjoy free access wherever a wireless signal allows.

But enabling access also presents security, privacy, and confidentiality concerns. Industries that rely on sensitive data such as healthcare, financial, and insurance have heightened risks and concerns. Addressing security concerns is nothing new for these industries, but mobile technologies present a dizzying array of uniquely configured, user-selected hardware and software.

It’s a good bet that the selection of phone, carrier, and apps is driven more by usability than security. Information and security officers have a thinner tightrope to walk when enabling and protecting customers.

So what to do? This white paper explores approaches to mobile security from risk assessment (what data are truly at risk), enterprise architecture (protect the data before the devices), policies and technologies, and concludes with an example of a mobile security architecture designed and implemented within a hospital environment in which both enabling caregivers and protecting privacy, integrity, and confidentiality are paramount.

1.1. Mobile Growth in National Market

Demand for mobile devices and applications has skyrocketed over the past few years, with users investing primarily in smartphones – the worldwide smartphone market grew 42.5 percent year over year in the first quarter of 2012. [1]

With mobile device use increasing, applications have also grown in number. Gartner predicts more than 80 billion mobile apps will be downloaded in 2013, growing to more than 300 billion in 2016. [2]

In step with the demand, the worldwide application development software market is predicted to reach more than $9 billion by the end of 2012. [3] Gartner reports cloud technology, mobility (specifically smartphones and tablets) and open source software tools will continue to drive app development.

1.2. Mobile Use in the Workplace

Mobile device use has shaped the communication landscape and subsequent workflow of certain industries, specifically, healthcare. Forty percent of consumers reported they would pay for mobile remote patient monitoring, using smartphones as hubs that would monitor chronic diseases. [4] Remote patient monitoring via a mobile device reduces the need for more costly custom medical devices, and cuts down on diagnosis error and medication overuse.

Further streamlining patient care, 40 percent of physicians report they could eliminate up to 30 percent of office visits by using mobile health strategies. [5] Mobile devices also extend the reach of care beyond geographical limitations – rural health care in remote locations has improved due to imaging and monitoring capabilities afforded by applications that would otherwise not be available in these areas.

Every industry can realize the benefits of mobile device use:

  • Increased productivity – Remote access allows employees to work from anywhere, outside of normal business hours or locations. Preparing in advance for the workday is easier with mobile devices and remote connections.
  • Personal – Highly customizable and often carried everywhere, mobile devices have a higher engagement rate due to ease of use and convenience, making them well suited for work purposes.
  • Streamlined workflow – By virtualizing the desktop environment and creating a secure connection between the device and the network systems, mobile devices can access critical applications from any location. A more efficient, paperless workflow can be realized with virtualization and device support.
  • Better client experience – Employees that provide a support function to clients may be able to respond to requests faster, particularly outside of regular office hours, allowing for improved client satisfaction.

2.0. Mobile Security Issues

With the use of personal mobile devices in the workplace come security issues around device use, data exchange and storage, connectivity and more. The security risks raise the need for IT staff to standardize device use and establish, implement and train users on mobile device policies.

According to Symantec’s Internet Security Threat Report, there has been a 93 percent increase in mobile malware development since 2010. [6]

Half of the total attacks were directed at small and medium-sized businesses (SMBs) with fewer than 2,500 employees, and 17 percent hit companies with fewer than 250 employees.

To hackers, smaller organizations are seen as an open door to larger organizations due to partnerships. In the case of franchises, franchisees can be gateways into the larger corporate network. With limited resources to invest in proper security, or due to a lack of security knowledge and culture, they are often left vulnerable.

Industries with the greatest percentage of stolen identities included the healthcare, computer software and IT sectors, with healthcare at 43 percent.

2.1. Mobile Device Security Risks

Mobile devices introduce several risks and points of entry, including insider threats, malware, spear phishing and more as described in section 2.2.

A recent report by Trend Micro, a content security software provider, found that mobile malware targeting Android escalated from 40,000 to 175,000 malicious apps between July and September 2012. [7] Most of this is attributed to adware, which collects user information without user consent.

Mobile malware development has risen in step with the increase in mobile device sales. Unfortunately, with the advent of easily downloadable apps and the frequent use of devices, users may not realize that certain apps can be malicious. Mobile users should apply the same precautions and security services used for personal computers and networks to protect against mobile malware.

2.2. Types of Mobile Security Risks

The U.S. Department of Homeland Security has posted a bulletin warning the public about mobile device security risks, and the several points of entry that could leave an organization or individual vulnerable to an attack. [8]

  • Insider – Employees can introduce a threat by transferring information via portable media or the cloud. The most common method of data exfiltration involves network transfer by email, remote access channel or file transfer.
  • Malware – Many different types of malware are designed to steal user information, including keystroke loggers that can record passwords and other mobile activity and remote access Trojans that allow hackers to access your phone by masquerading as a credible program or file. They gain access to smartphones via website links or as a text message sent to appear as a system update.
  • Spear Phishing – Malicious attachments are sent via email or links and targeted at management, administrators and other key personnel, bypassing email filters and antivirus software in attempts to penetrate a network.
  • Web – Silent redirection, obfuscated JavaScript and even search engine optimization are a few web behaviors used to gain access to a network. Web servers with injection flaws or broken authentication may also lead to a data breach.
  • Equipment Loss – Mobile devices are easily lost due to their size and portability – as more and more sensitive data is stored directly on the device, theft and loss lead to data breaches where physical security mechanisms and hardware encryption are weak or absent.

3.0. Compliance and Mobile Devices

The prolific increase in endpoints inherent in mobile device use brings new security risks and compliance concerns in industries such as e-commerce and healthcare that transmit, store, or process sensitive digital data. Each industry has different regulating entities and expectations for proving due diligence and achieving compliance. Balancing the potential benefits of innovations in the mobile space against the increasing stakes of sensitive digital information adds new dimensions to risk analysis.

In e-commerce and e-retail applications, for example, businesses will need to comply with the standards of PCI DSS (Payment Card Industry Data Security Standard) established by the large credit institutions (Visa, Mastercard, Discover) in order to enjoy the use of merchant accounts for online transactions. The PCI DSS compliance audit is a point-in-time audit, assessing the security of a business at that moment in time.

In the healthcare industry, the Department of Health and Human Services has established the HIPAA (Health Insurance Portability and Accountability Act) Security Rule, strengthened under the HITECH Act provisions of the ARRA, to describe how health information should be protected. The Office for Civil Rights is the entity that audits healthcare providers (covered entities) and applies penalties for violations. The FCC regulates communications, and, where mobile devices are used as medical devices, the FDA has asserted regulatory authority to oversee the safety of the patients they interact with.

Publicly held companies, or those that touch the financial and insurance industries, must meet Sarbanes-Oxley (SOX) compliance requirements. The AICPA’s SSAE 16 Type II audit, also called SOC 1, is a period-of-time assessment by an independent auditor of how well a company meets its self-described financial controls. The AICPA’s SOC 2 audit is also a period-of-time audit, but sets a common standard of controls specific to the security, availability, confidentiality, processing integrity, and privacy of sensitive data.

Businesses that serve clients across multiple verticals and industries are facing a complex tapestry of compliance and regulatory requirements which require significant investment in understanding, implementing, auditing and maintaining these environments. Imagine a business that wants to develop a mobile application that accepts patient payments. This business needs to fulfill the compliance requirements for PCI DSS, HIPAA, FCC, and possibly Sarbanes-Oxley. How about a business writing mobile apps that monitor patient health and make recommendations based on patient behavior? This would definitely be subject to HIPAA and FCC compliance, and possibly FDA compliance, depending on the type of recommendations made.

Companies that are developing mobile apps will likely have enough intimate knowledge of their software and systems to make assessments and take adequate precautions to meet whatever compliance regulations they are subject to. However, as they partner with other vendors for usability design, data storage, and hosting services, they are also responsible for ensuring that their entire network of associates is cognizant and capable of meeting the same requirements. A good place to start for any company in a regulated space is to ask their partners for written documentation of third-party compliance audits for the respective industries they serve. Those that understand what’s at stake will be able to provide the relevant audit reports.

3.1. Compliant Environments

Where does your data live? If you deal with sensitive data such as patient health information or credit cardholder data, avoid keeping it on your mobile device whenever possible. Even temporary storage of sensitive data is risky. If you lose your smartphone or laptop, you could be held liable for a data breach and compliance violations that may cost your organization litigation, remediation, and other fees. If there is absolutely no way to avoid sensitive information on the device, ensure that appropriately strong encryption is used.

In May 2012, 63,000 patients were affected by stolen portable media devices. [9] Howard University Hospital’s contractor had a password-protected laptop stolen from a vehicle in late January, affecting 34,503 patients; the hospital now requires encryption on all employee laptops. In another case, a laptop stolen in March from a local physician office at Our Lady of the Lake Regional Medical Center contained records for 17,000 former patients. [10]

The portability of mobile devices allows for easy theft or loss, and many organizations that don’t implement mobile security policies also do not provide proper security training for their employees. User behavior and acceptable use policies, such as never storing sensitive data locally, can significantly cut down on the risk of a data breach.

In an age when health information is stored and transported on portable devices such as laptops, tablets and mobile phones, special attention must be paid to safeguarding the information held on these devices. [11] - Office for Civil Rights Director Leon Rodriguez

David S. Holtzman of the OCR (Office for Civil Rights) also recommends reducing the risk of a data breach by using network or enterprise storage as an alternative to local devices, as well as encrypting data at rest on any device or desktop that stores sensitive information. [12]

With a secure remote connection to a secure network (and encryption), users can access sensitive data with their mobile devices without compliance concerns. (For a diagram of a HIPAA and PCI compliant data center environment, please see Compliant Data Centers, pg. 19).
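As a hedged sketch of that pattern, the code below pulls a record over a TLS-encrypted connection and keeps it in memory rather than writing it to the device. The URL, endpoint and token are placeholders, not a real API.

```python
import requests  # third-party; pip install requests

def fetch_record(record_id: str, token: str) -> dict:
    """Retrieve a sensitive record over TLS and hold it in memory only."""
    response = requests.get(
        f"https://portal.example.com/api/records/{record_id}",  # placeholder URL
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
        verify=True,              # enforce certificate validation on the TLS connection
    )
    response.raise_for_status()
    return response.json()        # parsed in memory; never written to local storage

if __name__ == "__main__":
    record = fetch_record("12345", token="placeholder-token")
    print("Fields received:", sorted(record))
```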

3.2. PCI DSS Recommendations for Mobile Devices

PCI DSS (Payment Card Industry Data Security Standard) is a detailed list of technical, physical and administrative security requirements for merchants – organizations that handle credit cardholder data.

Created and enforced by the major card issuers, the standard is now evolving with the development of new technology and mobile device capabilities. The PCI SSC (Payment Card Industry Security Standards Council) has released new security recommendations for both merchants and developers covering mobile device payments, specifically on smartphones, tablets and PDAs.

The PCI SSC recommends that data be encrypted before it reaches a mobile device, which can be achieved with a validated PCI P2PE (Point-to-Point Encryption) solution, as seen below. [13] In this case, encrypted data would flow from either an approved PED (PIN entry device) or an approved secure card reader to the mobile device, and then to a P2PE solution provider.

[Diagram: P2PE encrypted data flow from card reader to mobile device to P2PE solution provider]

The PCI SSC also recommends that data be stored only temporarily, in a secured storage environment, before processing and authorization. A PCI compliant data center can provide such an environment (See PCI Compliant Hosting Stack, pg. 20).

Data should be encrypted or rendered unreadable if it is ever stored on the mobile device after authentication. Encryption should meet PCI DSS requirement 3.5 to limit application, personnel and process access to the keys. [14]
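
As an illustration only, the sketch below shows one way stored account data could be encrypted with AES-256-GCM using the third-party Python cryptography package. The package choice and the in-memory key handling are assumptions made for brevity; PCI DSS requirement 3.5 expects keys to be stored and access-controlled separately from the encrypted data.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Illustrative only: in practice the key comes from a key store or HSM and
    # never lives alongside the encrypted data (PCI DSS requirement 3.5).
    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                       # must be unique per encryption
    ciphertext = aesgcm.encrypt(nonce, b"sensitive account data", None)

    # Store nonce + ciphertext on the device; decrypt only after authentication.
    plaintext = aesgcm.decrypt(nonce, ciphertext, None)
    assert plaintext == b"sensitive account data"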

Lastly, account data can be protected against interception as it leaves the mobile device by preventing unauthorized logical access to the device through design features such as secure lock screens and time-sensitive sessions that require logins. Server-side controls can also help prevent interception, including:

  • An updated access control list.
  • Ability to monitor system events.
  • Ability to track and monitor patterns of events to determine normal from abnormal events.
  • Ability to report abnormal events that could indicate a system breach or data leak; including encryption key changes, invalid login attempts, app updates and more.
  • Enable the ability to remotely disable payment applications.
  • Use GPS or other location apps/technology to detect theft or loss, and require re-authentication of the user/device.
  • Ensure any supporting systems are compliant with PCI DSS.
  • Prefer online transactions; if the host for the mobile payment-acceptance app is inaccessible, prevent offline transactions and the local storage of transaction data.
  • All mobile payment apps should conform to secure coding, engineering and testing as required by the Payment Application Data Security Standard (PA-DSS).
  • Protect against known vulnerabilities by evaluating updates, checking the source, and applying updates in a timely manner.
  • Protect against unauthorized applications on the mobile device.
  • Protect devices from malware.
  • Protect devices from unauthorized attachments.
  • Document device implementation and use.
  • Support secure merchant receipts – mask the PAN (Payment Account Number) and never use email or SMS to send PAN or SAD (Sensitive Account Data).
  • Provide an indication of secure state, similar to an active SSL session in a browser.

3.3. HIPAA Recommendations for Mobile Devices

HIPAA (Health Insurance Portability and Accountability Act) sets the national standards for the security of electronic protected health information (ePHI), with security and privacy safeguards to be implemented by healthcare organizations and business associates. The Office for Civil Rights (OCR) has released the OCR Audit Protocols to guide covered entities (healthcare providers) and their business associates in their risk assessment and management plans. As mobile environments cross the hospital threshold, HIPAA audits will need to adapt to incorporate ePHI protection on these new mobile endpoints.

While the healthcare industry works toward HIPAA compliance recommendations for mobile devices, enforced by the Office for Civil Rights, the FCC is another federal agency that regulates the mobile industry. The FCC has created an mHealth Task Force drawn from the nation’s leading mobile healthcare IT companies, along with federal and academic experts. Its preliminary recommendations are not technical in nature but focus primarily on agency initiatives to increase collaboration and outreach.

The FDA is also entering the mHealth space as an enforcing authority. As mobile devices become tools to monitor, report, or suggest actions based on a patient’s health, they become subject to FDA regulation like other medical devices. For more information on FDA Regulation of Mobile Health Devices, listen to http://www.onlinetech.com/events/fda-regulation-of-mobile-health-devices.

See the Technical Security, Compliant Data Centers and HIPAA Compliant Hosting Stack below for details on how to create a secure HIPAA compliant environment.

4.0. Data Security Tools

Mobile security involves much more than the specifics of securing the devices themselves. It begins with a qualified risk assessment of the entire architecture and infrastructure with a careful evaluation of the appropriate levels of security that should be applied. Companies that begin at the device level are often quickly overwhelmed trying to manage the security of hundreds of different hardware and software combinations.

By the time a mobile application can be appropriately protected and functional across every hardware and software combination, a whole new set of devices and platforms will be on the market. Smart mobile security is about keeping sensitive information off of the devices wherever possible. Sophisticated virtual desktop environments and a Software-as-a-Service (SaaS) model can support a huge variety of disparate platforms with only one or a small handful of security profiles to manage across all of them. Best practices in these approaches are emerging from leaders in every industry.

This isn’t to say that technical security tools shouldn’t be employed to provide mobile security. After a secure and compliant architecture for a mobile application has been determined, it’s time to apply the appropriate technical security protections. Some examples follow.

4.1. Technical Security

Secure hosting solutions require a multi-layered approach with the use of several different security tools. Not only do these tools help your company meet various compliance standards, but they also strengthen the security framework of your systems and minimize your overall risk of data loss.

 

Daily Log Review and Log Monitoring

Some providers may only offer logging (tracking user activity, transporting and storing log events) - seek a provider that offers the complete logging experience with daily log review, analysis, and monthly reporting.
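
As a minimal sketch of what automated daily log review can look like, the snippet below tallies failed login attempts per source address. The log path and the "Failed password" phrase are assumptions based on a typical Linux auth log, not a requirement of any particular provider or service.

    from collections import Counter
    from datetime import date

    def review_auth_log(path="/var/log/auth.log"):
        """Summarize failed logins per source address for today's review."""
        failures = Counter()
        with open(path, errors="ignore") as log:
            for line in log:
                if "Failed password" in line:
                    parts = line.split()
                    # The source address usually follows the word "from".
                    source = parts[parts.index("from") + 1] if "from" in parts[:-1] else "unknown"
                    failures[source] += 1
        print(f"Daily log review for {date.today()}:")
        for source, count in failures.most_common(10):
            print(f"  {count:4d} failed logins from {source}")

    if __name__ == "__main__":
        review_auth_log()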

File Integrity Monitoring (FIM)

Monitoring your files and systems provides valuable insight into your technical environment and adds another layer of data security. File integrity monitoring (FIM) is a service that can detect any changes made to your files.
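
A minimal sketch of the idea behind FIM follows: hash the monitored files once to build a baseline, then re-hash them later and report anything missing or changed. The baseline filename and JSON format are assumptions for illustration; commercial FIM services add scheduling, alerting and tamper-resistant storage of the baseline.

    import hashlib
    import json
    import os

    def sha256_of(path):
        """Return the SHA-256 hash of a file, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def build_baseline(paths, baseline_file="fim_baseline.json"):
        baseline = {p: sha256_of(p) for p in paths if os.path.isfile(p)}
        with open(baseline_file, "w") as f:
            json.dump(baseline, f, indent=2)

    def check_baseline(baseline_file="fim_baseline.json"):
        with open(baseline_file) as f:
            baseline = json.load(f)
        for path, known_hash in baseline.items():
            if not os.path.isfile(path):
                print(f"MISSING  {path}")
            elif sha256_of(path) != known_hash:
                print(f"CHANGED  {path}")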

Web Application Firewall (WAF)

Protect your web servers and databases from malicious online attacks by investing in a web application firewall (WAF). A network firewall’s open port allows Internet traffic to reach your websites, but it can also expose servers to application-level attacks such as SQL injection, in which database commands to delete or extract data are sent through a web application to the backend database.
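
To make the idea concrete, here is a deliberately naive sketch of the kind of pattern matching a WAF rule might apply to an incoming query string before it reaches the application. The patterns and the inspect_request() helper are hypothetical; real WAFs use far richer rule sets (for example, the OWASP Core Rule Set) with full protocol parsing.

    import re

    SQLI_PATTERNS = [
        re.compile(r"(?i)\bunion\b.+\bselect\b"),
        re.compile(r"(?i)\b(drop|delete|insert|update)\b.+\b(table|from|into)\b"),
        re.compile(r"(?i)('|%27)\s*or\s*'?\d+'?\s*=\s*'?\d+"),
    ]

    def inspect_request(query_string):
        """Return True if the query string looks like a SQL injection attempt."""
        return any(p.search(query_string) for p in SQLI_PATTERNS)

    print(inspect_request("id=42"))                         # False
    print(inspect_request("id=1' OR '1'='1"))               # True
    print(inspect_request("q=x UNION SELECT card FROM t"))  # True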

Two-Factor Authentication

Two-factor authentication for VPN (Virtual Private Network) access is an optimal security measure to protect against online fraud and unauthorized access for clients that connect to their networks from a remote location.

Vulnerability Scanning

Vulnerability scanning checks your firewalls and networks for open ports. Scans can also detect outdated versions of software, web applications that aren’t securely coded, and misconfigured networks. If you need to meet PCI compliance, you must run vulnerability scans and produce a report quarterly.
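
The heart of a port check can be sketched in a few lines with standard sockets, as below. The port list is illustrative, the scan should only be run against hosts you are authorized to test, and a PCI-approved scanning vendor goes much further by fingerprinting software versions and checking them against known vulnerabilities.

    import socket

    COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http",
                    443: "https", 3306: "mysql", 3389: "rdp"}

    def scan(host, ports=COMMON_PORTS, timeout=1.0):
        """Return the (port, name) pairs that accept a TCP connection."""
        open_ports = []
        for port, name in ports.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                    open_ports.append((port, name))
        return open_ports

    if __name__ == "__main__":
        for port, name in scan("127.0.0.1"):
            print(f"open: {port}/{name}")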

Patch Management

Why is patch management so important? If your servers aren’t updated and managed properly, your data and applications are left vulnerable to hackers, identity thieves and other malicious attacks against your systems.

Antivirus

Antivirus software can detect and remove malware in order to protect your data from malicious attacks. Significantly reduce your risks of data theft or unauthorized access by investing in a simple and effective solution for optimal server protection.

SSL Certificate

In order to safely transmit information online, an SSL (Secure Sockets Layer) certificate provides encryption of sensitive data, including financial and healthcare data. An SSL certificate also verifies the identity of a website, allowing web browsers to display the site as secure.
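
As a small illustration, the sketch below uses Python's standard ssl module to open a TLS connection, verify the server's certificate chain and hostname, and print the certificate details. The hostname example.com is a placeholder.

    import socket
    import ssl

    def check_certificate(hostname, port=443):
        """Connect over TLS, verify the certificate and report its details."""
        context = ssl.create_default_context()      # verifies the chain and hostname
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
                print("TLS version:", tls.version())
                print("Issued to:  ", dict(x[0] for x in cert["subject"]))
                print("Expires:    ", cert["notAfter"])

    if __name__ == "__main__":
        check_certificate("example.com")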

4.2. Physical Security

Implementing strong access controls to protect the physical servers that contain sensitive data (accessed from mobile devices only through a secure remote access service) is another layer of security that ensures only authorized users access your data.

Within a data center, strong facility controls can translate to implementing the following:

  • Two-factor authentication - If not personally escorted, anyone in the data center should be wearing a badge that identifies them and should need at least two forms of identification for access, such as a badge and access code, or a biometric fingerprint scan and badge. If you go for a data center visit and are not asked to sign in and wear a badge, security should be considered less than adequate.
  • Prolific use of video surveillance - Ask to see the video logs and how long they are kept (should be at least 90 days).
  • Visitor logging - The entries in the logbook should directly match the video surveillance tapes. Ask when the last independent auditor confirmed the match of visitor logs with the video archives. Ask who the auditor was and investigate the auditor's company to confirm their credibility.
  • Procedure Documentation - Ask to review the documentation for the procedure to allow access by unannounced visit, phone call, or email. Don't just ask the security or compliance officer - ask anyone. If there is a consistent policy and procedure in place, you should get a consistent and reassuring answer.

4.3. Administrative Security

Administrative security includes the audits, policies, staff training, and, for HIPAA-specific requirements, business associate training. Equally important as ensuring the physical and technical security of your data environment, administrative security addresses the business-facing concerns of partnering with a third-party hosting provider. Within your organization, administrative security can also include educating and managing employee behavior and mobile device use in order to keep security intact.

4.3.1. Mobile Use Policies

Devising a set of mobile device/BYOD (Bring Your Own Device) use policies is one way of establishing standards for security and uniform use company-wide. Based on best practices, your policies should address the following:

Activate remote management and tracking settings and applications. By using a remote wipe feature on your phone or downloading an application with the same capability, you can significantly reduce the risk of sensitive data (which should really be located on a secure network, not your device) falling into the wrong hands. Limiting the scope of risk is part of an incident response plan. With the capability to remotely delete data from a lost or stolen device, you can reduce the risk of data misuse.

With an iPhone, you can enable Find My iPhone, an app that attempts to locate your lost or stolen iPhone and pinpoint its location on Google Maps. Choose to remotely send a message to your phone or activate a sound to help locate or draw attention to your device.

For Android phones, third-party apps can be used for remote wipe, such as the free Mobile Defense app. [15] This app can also locate your device with the exact address displayed on an embedded interactive map. Mobile Defense will also email you if someone tries to swap out your SIM card as part of their security measures.

Update software frequently.

As new mobile malware develops, updating mobile software to protect against the latest malicious code is important for any user. Make frequent updates a part of company policy as a simple precaution against running old operating systems that are left open to known vulnerabilities.

One example is a short piece of code that targeted a Samsung Galaxy S3 smartphone - if a user visited a web page with the embedded code while browsing on their phone, their phone would be wiped without permission. [16] This code could be sent via a text message as well. Samsung released a software update and urged customers to download it as soon as possible in September 2012.

Similar to your computer software or Internet browser, taking a minute to download the latest update can save your organization from a potential data breach and the subsequent headaches. Using the services of a managed hosting provider can also help with keeping servers up-to-date with timely patch management.

Use two-factor authentication for remote access to networks.


As noted under Technical Security above, two-factor authentication for VPN (Virtual Private Network) access protects against online fraud and unauthorized access when users connect to their networks from a remote location.

Instead of accessing critical data or applications locally on your mobile device, it is significantly more secure to restrict access to a remote network that provides the same services but is housed in a secure environment. That way, if you lose your smartphone, you will not lose any data, and two-factor authentication adds an extra layer of verification before a connection to the network is authorized.

The first factor involves a username and password login; the second factor requires the use of your phone. Depending on the software used for authentication, you may have the following choices for your second factor (a minimal passcode-generation sketch follows the list): [17]

  1. Push authentication - login and transaction details are sent to your smartphone, and with one tap of the ‘Approve’ button, you will have completed the second authentication factor to achieve network access.
  2. Smartphone passcode - a generated login passcode works on all smartphone platforms.
  3. Text message - login passcodes are sent via text message - enter this passcode online to authenticate the second factor.
  4. Phone call - answer a phone call and press a key to authenticate.
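
For example, a generated smartphone passcode (option 2 above) is commonly produced with the time-based one-time password algorithm (TOTP, RFC 6238). The sketch below is a minimal standard-library implementation for illustration only; it is not any vendor's actual code, and real deployments add secret provisioning, clock-drift windows and rate limiting.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, interval=30, digits=6):
        """Generate a time-based one-time passcode (RFC 6238) from a base32 secret."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval               # 30-second time step
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    # The same shared secret on the phone and the server yields matching codes.
    print(totp("JBSWY3DPEHPK3PXP"))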

Use audited applications.

Particularly important for mobile use in the healthcare industry, any application that creates, stores, accesses, sends or receives electronic protected health information (ePHI) must meet HIPAA security standards. Application service providers must also meet SSAE 16/SOC 1 standards of compliance.

Create secure passwords and update them frequently.

Many a data breach has occurred due to the use of default or easy-to-guess passwords. Establishing a password policy in your organization can help protect mobile devices and networks against attackers who guess or brute-force credentials. Ensure your passwords are at least eight characters long, and increase their complexity by including symbols and numbers. A good password policy also requires frequent updating of passwords, for example every 90 days. Microsoft’s password strategy and password checker are useful resources to help you create an effective password policy. [18]
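
A minimal sketch of enforcing such a policy at account creation follows. The specific rules mirror the guidance above, and the check_password() helper is purely illustrative rather than an official standard.

    import string

    def check_password(password):
        """Return a list of policy problems; an empty list means the password passes."""
        problems = []
        if len(password) < 8:
            problems.append("must be at least eight characters")
        if not any(c.isdigit() for c in password):
            problems.append("must include at least one number")
        if not any(c in string.punctuation for c in password):
            problems.append("must include at least one symbol")
        if not (any(c.islower() for c in password) and any(c.isupper() for c in password)):
            problems.append("must mix upper- and lower-case letters")
        return problems

    print(check_password("password"))      # several problems
    print(check_password("S3cure!Pass"))   # [] - passes this policy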

Implement lockouts and set reentry times.

Set devices to lock the account after a certain number of failed login attempts, and set an automatic lockout after a certain period of inactivity in order to keep data protected while the device is idle. Lockout should require re-entry of your password for access.
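
A minimal sketch of the lockout logic follows. The thresholds of five failed attempts and 15 idle minutes are assumptions for illustration, not values mandated by any standard cited here.

    import time

    MAX_FAILURES = 5
    IDLE_TIMEOUT = 15 * 60  # seconds of inactivity before auto-lock

    class Session:
        def __init__(self):
            self.failures = 0
            self.locked = False
            self.last_activity = time.time()

        def record_failure(self):
            self.failures += 1
            if self.failures >= MAX_FAILURES:
                self.locked = True   # require password re-entry (and ideally 2FA) to unlock

        def record_success(self):
            self.failures = 0
            self.last_activity = time.time()

        def is_locked(self):
            if time.time() - self.last_activity > IDLE_TIMEOUT:
                self.locked = True   # auto-lock after the idle period
            return self.locked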


5.0. Outsource vs. In-House Hosting

For mobile app developers wanting to create apps that mission critical industries can rely on, investing in and maintaining high-availability hardware, power, network, and server infrastructures is essential, but capital intensive. Couple this with the resources that auditing, security, and server administration adds, and it’s easy to see why mobile app developers often turn to hosting partners.

The benefits of outsourced hosting must be weighed against any risk added by a hosting partner. Ideally, the hosting partner will improve security, availability, and compliance in addition to managing the maintenance of the server and underlying infrastructure.

5.1. Benefits of Outsourcing Hosting

Choosing where to host a mobile app that needs to meet security and compliance requirements is not an easy task. The right decision should help you sleep better at night - not stay awake longer. Look for cost savings over the capital and maintenance costs of managing your own hardware and infrastructure in addition to a true partner in compliance and security.

Save on Costs

Managed hosting allows your IT team to focus on the mobile applications directly related to your business, not on the day-to-day details of server updates, data center infrastructure, network management and security, which can more readily be outsourced to a trusted provider.

If you’re developing or providing services for the healthcare or e-commerce industry, you also need to ensure you can meet compliance requirements for securing data. Compliance can be a costly and time-consuming process to invest in. Outsourcing your hosting solution to a third party can save resources if the provider has undergone an independent audit confirming its ability to meet HIPAA or PCI DSS requirements. While this does not release you from the obligation and responsibility of meeting compliance, it helps you mitigate the risk of a data breach.

Security

A managed hosting provider can provide the latest tested and audited technology to secure your applications and data. With a variety of required and recommended security methods, you can trust experienced, certified professionals to maintain, monitor and accurately generate logs of activity on your servers.

Outsourcing allows you to benefit from the various levels of security that a quality hosting provider should have in place. These advantages include physical security, environmental controls, logged access and video surveillance, and multiple alarm systems to detect unauthorized access.

Network security includes protection of sensitive infrastructure, including managed servers, cloud, power and network infrastructure built with redundant routers, switches and paired universal threat management devices to protect sensitive information.

Availability

The use of high-availability (HA) solutions in a fully redundant and compliant data center can allow clients to increase their uptime and application availability. Using an HA infrastructure can reduce the risk of business downtime due to a single point of failure. Outsourcing to a managed hosting provider means your business can take advantage of your data center operator’s design of power connections, UPS (Uninterruptible Power Supplies) systems, generators, air conditioning and networks.

Flexibility

Outsourcing allows you to benefit from the latest virtualization technologies, such as fifth-generation VMware that dominates the market for applications that require a high degree of scalability. Choosing a high-performance managed cloud allows for the ability to scale servers up and down as needed to respond to the demands of end-users with fast deployment time.

Compliant Data Centers

If you’re developing or providing services for the healthcare or e-commerce industry, you also need to ensure you can meet compliance requirements for securing data. By outsourcing, you can take advantage of your managed hosting provider’s investment in independent audits and reports relevant to your target industry. For e-commerce and retail, look for a managed hosting provider that has passed an independent PCI audit against the PCI DSS standard. For healthcare, look for a managed hosting provider that has invested in an independent HIPAA audit against the OCR HIPAA Audit Protocol.

A managed hosting provider also invests in maintaining and upgrading their data centers and hosting environments, allowing you to focus on your core business objectives. What does a compliant data center look like? Review the HIPAA and PCI compliant diagrams below for a complete overview of the technical, administrative and physical security requirements your hosting provider should offer:

PCI Compliant Hosting Stack

[Diagram: PCI compliant hosting stack]

HIPAA Compliant Hosting Stack

[Diagram: HIPAA compliant hosting stack]

Trained Staff

The technical and physical security of your data environment is only as secure as the people that run it. Staff training cuts down on human error, promotes security awareness and may prevent or allow for early detection of a data breach.

Documented policies and procedures are only effective if employees are made aware of them and trained on them regularly. Check the last dates of employee training as well as the scope of training across the entire hosting company, and inquire about hiring policies to ensure that your data is in safe hands.

5.2. Risks of Outsourcing

The risk of outsourcing managed hosting to a service provider is that it extends your circle of trust to include a third-party vendor. A service provider may open your company up to the potential risk of a data breach or compromised application, and depending on your industry’s compliance standards, different financial and business consequences may follow.

Healthcare Data Breach Fines and Penalties

For the healthcare industry, the fines and penalties for a HIPAA violation (a data breach, whether data is lost or stolen) range from $100 per violation, with an annual maximum of $25,000 for repeat violations, to $50,000 per violation, with an annual maximum of $1.5 million. [19]

The fine amount varies by classification level, depending on the violation criteria, with minimum and maximum penalties per violation and annual caps for repeat violations:

HIPAA Violation Types and Penalties [20]

  • Individual did not know they violated HIPAA - Minimum penalty: $100 per violation, with an annual maximum of $25,000 for repeat violations. Maximum penalty: $50,000 per violation, with an annual maximum of $1.5 million.
  • Reasonable cause and not willful neglect - Minimum penalty: $1,000 per violation, with an annual maximum of $100,000 for repeat violations. Maximum penalty: $50,000 per violation, with an annual maximum of $1.5 million.
  • Willful neglect, but corrected within the required time - Minimum penalty: $10,000 per violation, with an annual maximum of $250,000 for repeat violations. Maximum penalty: $50,000 per violation, with an annual maximum of $1.5 million.
  • Willful neglect that is not corrected - Minimum penalty: $50,000 per violation, with an annual maximum of $1.5 million. Maximum penalty: $50,000 per violation, with an annual maximum of $1.5 million.

Another category of HIPAA violation applies to covered entities and individuals that knowingly breach the HIPAA regulations; for these, criminal penalties apply.

The maximum offense is a HIPAA breach committed with intent to sell, transfer or use individually identifiable health information for personal/financial gain or malicious harm, resulting in fines of $250,000 and imprisonment for up to ten years.

Ultimately, covered entities are held responsible for the monetary and reputational consequences, although responsibility extends to business associates under recently proposed revisions to the HIPAA rules.

PCI DSS Data Breach Penalties

According to the PCIComplianceGuide.org:

The payment brands may, at their discretion, fine an acquiring bank $5,000 to $100,000 per month for PCI compliance violations. The banks will most likely pass this fine on downstream till it eventually hits the merchant. Furthermore, the bank will also most likely either terminate your relationship or increase transaction fees. Penalties are not openly discussed nor widely publicized, but they can be catastrophic to a small business.

According to the PCI Security Standards Council, if you are not compliant with PCI DSS, you could damage your reputation and ability to conduct business. Data breaches also could lead to loss of sales, relationships, good standing in your community, and depressed share prices if you are a public company. Other consequences include lawsuits, insurance claims, cancelled accounts, payment card issuer fines and government fines. [21]

6.0. Vendor Selection Criteria

6.1. Audited Data Centers and Secure Hosting Solutions

Reports on Compliance

As the number of reported data breaches and the cost of these data breaches rise, it becomes imperative for companies to choose a third-party managed hosting provider that has invested in a number of independent audits and can provide a copy of their audit report to ensure they are following compliant policies and procedures.

See below for more on each audit standard and what it means.

Key Data Center Audits

These key data center audits can give you guidance and insight into a vendor’s ongoing compliance and level of operating standards, as well as the quality of service you can expect to receive.

SAS 70 [22] - Now expired, the Statement on Auditing Standard No. 70 was originally used to measure a service provider’s controls related to financial reporting and recordkeeping. Two types are recognized by the AICPA (American Institute of CPAs) - Type 1 reports on a company’s description of their operational controls, while Type 2 includes an auditor’s opinion on how effective these controls are over a specified period of time. In both cases, keep in mind that the audited company gets to specify the controls that they will be audited against. Some specify only a handful of weak controls. Others specify dozens of strong controls. Make sure you read the details of the controls.

SSAE 16 - The Statement on Standards for Attestation Engagements No. 16 replaced SAS 70 in June 2011. An SSAE 16 audit measures the controls relevant to financial reporting. Type 1 reports on a data center’s description and assertion of controls, as reported by the company. Type 2 adds an auditor’s tests of the accuracy of the controls and of their implementation and effectiveness over a specified period of time. No two SSAE 16 audit reports are the same, as there is no standard set of controls. Make sure you read the details of the controls.

SOC 1 [23] - One of the three new Service Organization Controls (SOC) reports developed by the AICPA, this report measures the controls of a data center as relevant to financial reporting. It measures the same controls as an SSAE 16 audit.

SOC 2 [24] - Most beneficial for clients partnering with a managed hosting provider, this report is a very detailed account of the technical controls specifically concerning IT and data center operators. The five control areas are security, availability, processing integrity (ensuring system accuracy, completion and authorization), confidentiality and privacy. There are two types: Type 1 reports on a data center’s system and the suitability of its design of controls, as reported by the company. Type 2 includes everything in Type 1, with the addition of an auditor's opinion on the operating effectiveness of the controls. This is the first AICPA audit to begin standardizing controls, so there is less variety between reports. However, since every audit, auditor, and company is different, it is wise to read the details of the report - don’t take it for granted.

SOC 3 [25] - This report includes the auditor’s opinion of SOC 2 components with an additional seal of approval to be used on websites and other documents. The report is less detailed and technical than a SOC 2 report.

PCI DSS [26] - The Payment Card Industry Data Security Standard was created and implemented by the major credit card issuers and applies to companies that collect, store, process and transmit cardholder data. Data center operators that host cardholder data need to have undergone a PCI audit to achieve an attestation of compliance report (the latest version of the standard is 2.0), and they should have a full understanding of which technical components can help your company meet the PCI requirements.

HIPAA - Mandated by the U.S. Health and Human Services Dept., the Health Insurance Portability and Accountability Act of 1996 specifies laws to secure protected health information (PHI), or patient health data (medical records). When it comes to data centers, a hosting provider needs to meet HIPAA compliance in order to ensure sensitive patient information is protected. A HIPAA audit conducted by an independent auditor against the OCR HIPAA Audit Protocol can provide a documented report to prove a data center operator has the proper policies and procedures in place to provide HIPAA hosting solutions. No other audit or report can provide evidence of full HIPAA compliance.

As with any type of audit, covered entities must review each individual compliance report to determine its full scope, depth and applicability. Each SSAE 16 or HIPAA audit is unique to the hosting provider.

Business Associate Agreement

Key to the healthcare industry, a business associate agreement (BAA) is a contract that defines roles and responsibilities between a healthcare organization (covered entity) and a third-party service provider. The lack of a BAA implies negligence and may fall under the HIPAA violation category of Willful Neglect. Check to make sure your business associate has a thorough BAA with documented policies that discuss how they handle PHI, from breach notification to contract termination and data ownership.

Part of your due diligence as a covered entity is to understand your hosting provider’s documented policies and procedures for securing your data and handling a data breach. Check the timeline to notify covered entities in their breach notification policy - they are required by law to do so in a timely manner, and covered entities must in turn notify affected individuals without unreasonable delay, and no later than 60 days after discovery of the breach. [27]

Another key clause of a BAA should cover terms and effective dates, with language around how PHI will be handled after termination, including the return or destruction of data. Data ownership, access and rights should also be addressed in the agreement.

Staff Security Training

Your secure hosting provider should have documented internal processes and policies that are considered best practice. Within their organization, they should have an appointed Risk Management Officer who ensures that these policies and procedures are being followed and comply with the HIPAA regulations for healthcare clients, as well as with PCI DSS, SOX and other applicable standards.

The Risk Management Officer also conducts employee training to educate staff on, and implement, the security policies and procedures that affect the day-to-day operations of the organization. Employee training is important for any third-party service provider, as many data breaches are the result of human error or an employee mishandling sensitive data rather than hacking. During the vendor selection process, ask your managed hosting provider for the most recent date of their security policy training and the percentage of employees who have completed it.

6.2. Other Key Data Center Considerations

Ownership

As stated earlier, data ownership is especially important to review in your hosting contract. Some providers reserve the right to access, allow access, and claim ownership of your sensitive information while it is hosted on their servers or in their environment. This is an issue that can occur especially in the cloud, as some cloud vendors may claim legal ownership of the data once in their possession.

Another consideration is ownership and operation of the data center(s). Some hosting providers will provide a service that is run in data centers owned and operated by different companies - this further extends the “chain of trust” to include potentially unknown third-parties. If you have no way of knowing who has access to or controls the environment that houses your servers, let alone their level of compliance, you are putting your data and business at risk.

Geographical Location

Hosting facility location is another important consideration, as data centers located in certain regions are more susceptible to natural disasters, risking the complete destruction of your data. Choosing a data center located in a neutral, low-risk region such as the Midwest is one step closer to complete data safety.

Another factor is climate - a region that allows a data center operator to take advantage of natural cooling for most of the year also allows you, as the client, to take advantage of their operating cost-savings. It also reduces the risk of overheating and potential hardware failure that could affect your data availability.

Knowing where your data lives is a key consideration - if your data leaves the country, do you still have control of it? Data centers operating outside of the country do not have to meet certain compliance expectations, as many are set and enforced within the United States. Once your data travels overseas, it is possible you will be put at risk of a data breach, since international vendors are not required to observe U.S. federal security regulations.

Disaster Recovery

Preserving the integrity of information means putting formal data backup and recovery plans in place to ensure data can be accurately and quickly accessed in the event of a disaster or failure. Location is important when it comes to offsite backup and disaster recovery - a copy of your data in a separate location can preserve the integrity of your information.

On-demand data access requires high availability hosting and infrastructure. Choosing a data center operator with a well-designed geographical separation between their data centers helps availability, as well as having multiple power grids to further boost utility resiliency should one power provider experience a prolonged outage.

High Availability

A high availability (HA) hosting infrastructure is imperative to ensuring data is always accessible. HA solutions increase uptime and availability and lower risks. It’s not a matter of “if” something fails, it’s planning for “when” failures happen - and they will. In your evaluation of any data center - yours or a third-party – you should endeavor to identify all of the single points of failure. It’s worth an outside opinion if reviewing your own data center (nothing beats an independent pair of eyes) and when visiting a potential data center hosting provider - ask the hard questions whenever you suspect complete redundancy is not in place.

With HA protection in place, providers can hedge against the loss of electrical power, network connectivity disruptions, router failures, firewall attacks, cooling problems, and have peace of mind knowing data is protected, available, and safe.

A managed hosting solution takes into account several design factors to ensure no single points of failure exist. This is true for the data center infrastructure layer components, as well as the individual servers and components in the rack.

The major design points for a successful secure and reliable hosting implementation include building in redundancies in critical equipment and infrastructure, including:

  • Power connections - Dual independent power feeds are run from disparate circuit breakers, to two separate power supplies in the server. Each power supply on a server is plugged into separate power strips in the rack. Power strips with digital amp load readouts aid in monitoring power levels and help avoid tripping a circuit breaker, which would shut down the entire power strip.
  • UPS systems - Uninterruptible Power Supplies (UPS) clean and distribute power and provide backup power through a bank of batteries in the event of a power outage. The clean power from the UPS is stable; therefore, any fluctuation in power, both power surge and brown-out, is regulated by the UPS.
  • Generators - Each UPS is fed with one or more power feeds from the utility company. The utility power feed is tied to multiple generators that run on either diesel or natural gas. If utility power is lost, the UPS maintains stable power to the racks while the generators start and provide backup power. Fuel supply contracts must be in place with several vendors, and fuel delivery SLAs must be in place.
  • Air conditioning – N+1 redundant cooling is in place with environmental monitoring, and scheduled maintenance plans to ensure the data center climate remains in the safe zone.
  • Network connections, switch and firewalls - The network connectivity in a managed cloud is designed to replicate the same redundancy as the power distribution so the network and Internet connectivity offer no single source of failure. Each server in the cloud should have at least two separate Network Interface Cards (NICs) that allow the server to connect to the redundant HA network infrastructure. Each NIC in the server is connected to different network switches, which disperse the network connectivity to all servers contained within the cloud.

Each network connection is connected to a pair of redundant firewalls, which protects traffic on each segment of the network from intruders and security threats. Additionally, each firewall connection is connected to separate routers and network access switches. These routers are then connected to multiple Internet Service Providers (ISPs) to provide diverse network paths to and from the Internet.

Cloud Computing

Server and storage devices

A high performance managed cloud relies on top notch technology for server hosts and SAN storage. Virtualization technologies like VMware (in its fifth generation) dominate the market for applications that require a high degree of resiliency, security, and scalability. The ability to scale up and down servers as needed also introduces flexibility into the managed cloud architecture, so that clients can be responsive to the needs of their end-users.

VMware backed by name-brand SAN and server technology creates the server and storage platforms necessary to deliver highly available cloud solutions. Regardless of which brand of hardware is chosen, using multiple server hosts allows VMware to fail over to secondary hosts in the event of a hardware failure, keeping critical systems online in the cloud.

And finally, a SAN with multiple redundant controllers and high-speed RAID disk systems is designed to meet the performance and availability needs of virtualization environments for today’s demanding applications. Today’s SANs combine intelligence and automation with fault tolerance to provide simplified administration, rapid deployment, enterprise performance and reliability, as well as seamless scalability.

Room to Grow

When choosing a managed hosting company, you want to partner with a business that can give you room to grow. On-demand resources can be deployed rapidly with a managed cloud solution, meaning you can easily scale servers up and down as needed.

Managed Services

With a managed hosting provider, you can take advantage of their managed services to ease the burden on your own IT staff and resources. An investment in managed hosting services means a trained and professional IT team can perform maintenance and updates, freeing up your IT staff to focus on developing your core business and applications. Some of the managed services available when you outsource include:

  • Patch Management - Ask your potential vendor if they provide OS patch management as a managed service. Why is patch management important? If your servers aren’t updated and managed properly, your data and applications are vulnerable to hackers and all types of malicious attacks against your systems. Your hosting provider should provide notification of outstanding updates, patch installation assistance and different levels of patch management for optimal security.
  • 24/7 Emergency Response - In the event of unauthorized access or a disaster/failure, your hosting provider should have a responsive, trained support team ready to report and remediate the issue.
  • Proactive Server Monitoring - With a remote server monitoring service, you should be able to check the status of your servers even if you’re not located at the data centers. Your hosting provider should have a monitoring service that allows you to check your current disk space or bandwidth usage, and your application, web and database performance, all through a single-pane-of-glass portal.

If you choose to keep your hosting in-house, you may not have the resources or budget to accommodate all of the features listed above, including the investment in capital and hardware. Keeping operations in-house may require training or hiring new staff to manage server hardware, storage, virtual servers or data center infrastructure as you work to implement different technologies to achieve data and application security. One example is building an offsite disaster recovery solution - some cloud hosting providers can provide a disaster recovery solution at a significantly lower cost compared to the cost of building it internally.

7.0. Conclusion

Mobile devices and applications have bolstered the productivity and efficiency of workflows across diverse industries while introducing new security risks and integration challenges.

Designing, building and maintaining multiple layers of safeguards with attention to industry security compliance standards such as HIPAA and PCI is essential to making a BYOD environment work in any organization.

Implementing appropriate physical security controls, and selecting key hardware and software for technical security are the first steps toward protecting your data and applications. Investing in administrative security (often overlooked when addressing technology integration) is equally important to address the business-facing concerns of mobile device use, including required and recommended audits, reports, policies, and staff training.

Initially storing data and applications on servers in a secure environment instead of locally on a device can significantly limit risks, while customizing and establishing mobile device use policies and procedures can also reduce the risk introduced by human behavior.

Keeping data privacy, integrity and confidentiality intact while reaping the benefits of mobile device use is possible with the right combination of the proper security tools. Using an audited third-party vendor can also achieve the same results with fewer burdens on your resources. Ultimately, careful planning and informed decisions can strike a balance between security and leveraging the benefits of a mobile device workplace.

8.0. References

8.1. Questions to Ask Your Secure Hosting Provider

1. What specific technical, physical and administrative security controls are used to protect my applications and data? Are they considered best practice in the industry for mobile security?

____________________________________________________________________________

____________________________________________________________________________

____________________________________________________________________________

2. Who performed your independent audits (SSAE 16, SOC, HIPAA and PCI) and do you provide copies of your audit reports?

____________________________________________________________________________

____________________________________________________________________________

____________________________________________________________________________

3. If disaster strikes, how long will it take before my data is available again?

____________________________________________________________________________

____________________________________________________________________________

____________________________________________________________________________

4. Do you have documented policies and procedures?

____________________________________________________________________________

____________________________________________________________________________

____________________________________________________________________________

5. Do your employees undergo security training and when were they last trained?

____________________________________________________________________________

____________________________________________________________________________

____________________________________________________________________________

6. For healthcare companies - do you sign a BAA (business associate agreement) with documented and communicated policies?

____________________________________________________________________________

____________________________________________________________________________

____________________________________________________________________________

8.2. Data Center Standards Cheat Sheet

SAS 70

The Statement on Auditing Standard No. 70 was the original audit used to measure a data center’s financial reporting and recordkeeping controls. Developed by the AICPA (American Institute of CPAs), there are two types:

Type 1 – Reports on a company's description of their operational controls

Type 2 – Reports on an auditor's opinion on how effective these controls are over a specified period of time (six months)

SSAE 16

The Statement on Standards for Attestation Engagements No. 16 replaced SAS 70 in June 2011. An SSAE 16 audit measures the controls relevant to financial reporting.

  • Type 1 – A data center’s description and assertion of controls, as reported by the company.
  • Type 2 – Auditors test the accuracy of the controls and the implementation and effectiveness of controls over a specified period of time.

SOC 1

The first of three new Service Organization Controls reports developed by the AICPA, this report measures the controls of a data center as relevant to financial reporting. It is essentially the same as an SSAE 16 audit.

SOC 2

This report and audit are completely different from the previous ones. SOC 2 measures controls specifically related to IT and data center service providers. The five control areas are security, availability, processing integrity (ensuring system accuracy, completion and authorization), confidentiality and privacy. There are two types:

  • Type 1 – A data center’s system and suitability of its design of controls, as reported by the company.
  • Type 2 – Includes everything in Type 1, with the addition of verification of an auditor's opinion on the operating effectiveness of the controls.

SOC 3

This report includes the auditor’s opinion of SOC 2 components with an additional seal of approval to be used on websites and other documents. The report is less detailed and technical than a SOC 2 report.

HIPAA

Mandated by the U.S. Health and Human Services Dept., the Health Insurance Portability and Accountability Act of 1996 specifies laws to secure protected health information (PHI), or patient health data (medical records). When it comes to data centers, a hosting provider needs to meet HIPAA compliance in order to ensure sensitive patient information is protected. A HIPAA audit conducted by an independent auditor against the OCR HIPAA Audit Protocol can provide a documented report to prove a data center operator has the proper policies and procedures in place to provide HIPAA hosting solutions.

No other audit or report can provide evidence of full HIPAA compliance.

8.3. Mobile Security Checklist

  • Establish mobile device use policies and procedures.
  • Conduct mobile device and general security training for all employees.
  • Do your due diligence in vetting your managed hosting provider (ask them the questions in 8.1 and review all audit reports).
  • Implement best practice technology to secure your hosting environment.
  • Follow mobile security best practices, including creating secure passwords, enabling device tracking and remote wipe, encrypting data, etc.
  • Never store sensitive information locally on a mobile device.

8.4. BYOD Case Study

One successful example of implementing a compliant BYOD (Bring Your Own Device) environment was presented at Online Tech’s Fall into IT 2012 technology seminar. Kirk Larson, Vice President and Chief Information Officer (CIO) at Children’s Hospital Central California, explained how he leveraged a virtual desktop infrastructure to integrate mobile device use seamlessly into the hospital’s workflow.

Who: Children’s Hospital Central California, a 348-bed pediatric hospital in California’s Central Valley, with a medical staff of 525 physicians practicing in over 40 subspecialties. The hospital is one of the 10 largest children’s hospitals in the U.S. Children’s performs more than 11,000 surgeries a year and sees more than 67,000 emergency room visits annually.

Technical environment: Children’s environment runs Dell, VMware, NetApp and Cisco, and manages 0.5 PB (petabytes) of data, 10,000 network elements, 8,500 user accounts and 300 servers. Like most operations, they are a Microsoft Windows shop, with hardware from Lenovo, HP, Panasonic and others. On the application side, the hospital uses Meditech 5.65 client/server and is meaningful use stage 1 certified. They use Lawson for their ERP (Enterprise Resource Planning) and Picis in operating rooms.

Electronic healthcare system implementation: In 2011, the hospital went live with Advanced Clinical Systems (ACS). This included electronic nursing documentation and CPOE (computerized physician order entry). This fundamentally changed the way care was delivered and changed requirements for ITS (information technology services), based on an increase in both users and the variety of devices.

Virtual Desktop Infrastructure: The hospital had three concerns: the security around mobile devices; the exponential increase in the number of clinical users; and resource effectiveness (how to best leverage the resources they already have, and the resources they will require over time). Children’s decided to leverage their virtual desktop infrastructure (VDI) to address these concerns. The hospital was one of the first in the nation to use the VMware View Client for iPad, which allows secure access to a virtual desktop and the ability to deliver services from the cloud. [28]

What are some BYOD issues?

  • Multiple device preferences - From tablets to laptops to smartphones, different employees use different types of devices for different purposes.
  • Different applications work differently on different devices - Not all vendors have caught up with the capability of today’s mobile devices. Using tablets in healthcare is good for static data review (e.g., x-rays), but if tablets are relied on for heavy data entry, the screen and keyboard may not be the best fit for the task.
  • Different workflows - A dietician may favor an iPad because he or she is reviewing data instead of entering data. An iPad’s design allows for ease of viewing images and data, despite not being suitable for extensive data entry.
  • Cost - The initial reaction is that there will be cost savings in buying devices, licenses, antivirus software, etc., since people will be using their own. While this is true to an extent, there is additional investment in the VDI on the backend. So, there is a net savings, but there are still costs and the program will not eliminate all devices from the IT budget.
  • Safeguarding of data - Using BYOD, it is essential that data is safe and secure, and should never reside on the actual device.

What was their solution?

The hospital leveraged their existing VDI environment. By installing a VMware view client on a device, users can securely access their virtual desktop on the backend. Although they run Windows, if a user’s device preference is an iPad, the user can install the client and access their Windows-based desktop from their Apple device. Regardless of where and from which device a user accesses the virtual desktop, the familiar look and feel of the application allows for consistency throughout the IT environment. The follow-me feature allows users to switch between hospital-provided and personal devices while maintaining their open applications.

BYOD Policies

The hospital rolled out their BYOD environment in early 2012 as a pilot phase. Their mobile device policies were developed with input from the end user community, and the IT team engaged users in the process of creating the environment.

One policy example is the defining of IT support for a BYOD environment. ITS supports device connectivity to the VDI, but not the device itself (i.e. helping with iTunes). While the BYOD environment was designed with physicians in mind, any clinical user can access this resource.

One physician concern was the question of what type of images and content should be available and displayed on the mobile devices. Physicians also wanted the full versatility of using any type of device securely in the workplace, and they did not want to be limited to using a certain type or model of device.

Considerations: Customer Support

  • Users have multiple device preferences - BYOD enables the same customer experience on different devices
  • ITS team will see multitude of devices - Offer high level training but focus on connectivity
  • Device will be used outside of the hospital - Enforce the same infection control measures
  • Cost of devices - BYOD shifts many costs out of IT

Considerations: Applications

  • Applications work differently on different platforms - Device strategy might be ahead of software vendors
  • Not all applications may be needed or wanted on BYOD
  • Should exempt and nonexempt employees have the same access? From a policy perspective, the answer is no. Sometimes leveraging policy instead of technology is preferable.
  • Accessing one common VDI image - Ensure buy-in on what initial image includes.

Considerations: Infrastructure

  • Potential spike in number of VDI sessions - Provision sessions in advance or limit sessions.
  • Potential decrease in number of purchased devices - Initial cost savings, but some reinvestment in VDI necessary.
  • How best to leverage VDI - Some user training is necessary.

Considerations: Security

  • Allowing user purchased devices on network - Partition existing network or create separate network.
  • Confidential data - Adopt VDI or similar solution that prevents data from being on the device.
  • How device is used - Leverage policy in addition to technology.

Concluding Thoughts: Things to Think About with BYOD

Prepare to lose some sense of control with BYOD. Users will bring in a variety of devices, and IT has to be prepared to host whatever they choose to use. It’s important to set ground rules on what to support; i.e. supporting only the connectivity of the device. Consider scalability and how to support an exponentially growing number of sessions and users. Securing data is easier when data is never stored locally on the device, so if the device is stolen or lost, data cannot be accessed or lost.

Kirk Larson, Vice President and Chief Information Officer, Children’s Hospital Central California

Kirk Larson is the Vice President and Chief Information Officer of Children’s Hospital Central California, one of the 10 largest pediatric hospitals in the country. Kirk has spent his entire career in healthcare and/or technology. He has consulting experience with the Big Five firm Arthur Andersen; vendor experience with the largest pure-play HCIS company, Cerner Corporation; and provider experience as the CIO of two different hospitals in California.

Kirk holds a Master of Business Administration and Master of Health Services Administration from the University of Michigan, and a Bachelor of Science in mathematics from North Central College.

View Kirk’s full presentation, BYOD: From Concept to Reality, with respective slides:

http://www.onlinetech.com/events/fall-into-it

9.0. Contact Us

Contact us for more information if you still have questions about mobile security, secure hosting, or our compliant data centers.

Visit: www.onlinetech.com


Call: 734.213.2020



[8] Attack Surface: Healthcare and Public Health Sector (PDF); National Cybersecurity and Communications Integration Center; U.S. Department of Homeland Security

[9] Breaches Affecting 500 or More Individuals; U.S. Department of Health and Human Services

[11] Massachusetts Provider Settles HIPAA Case for $1.5 Million; U.S. Department of Health and Human Services

[15] Mobile Defense; https://www.mobiledefense.com

[17] Duo Security; http://www.duosecurity.com

[19] Rules and Regulations; Office for Civil Rights, Federal Register Vol. 74, No. 209

[20] HIPAA Violations and Enforcement; American Medical Association

[21] Why Comply With PCI Security Standards?; PCI Security Standards Council

[22] SAS No. 70 Transformed; American Institute of CPAs

[26] PCI SSC Data Security Standards Overview; The PCI Security Standards Council

[27] Breach Notification Rule; U.S. Department of Health and Human Services

[28] VMware View 5 Features; VMware.com

 

…(continue reading)

Thank You for Viewing our Mobile Security White Paper

Thank you for your interest in our Mobile Security white paper! If you have any further questions about mobile security or hosting, please feel free to call us at 734.213.2020 or email us.

While you wait for a response, here are other mobile security resources you might find useful:

Mobile Security Seminars & Webinars

  • BYOD: From Concept to Reality - During this presentation, Kirk Larson, VP & CIO at Children’s Hospital Central California, explains how the hospital uses a virtual environment to securely manage a BYOD (Bring Your Own Device) environment.
  • Overcoming Cloud-Based Mobility Challenges in Healthcare - During this webinar, April and Rich review the common challenges associated with mobile enablement, and introduce the new technologies that are empowering healthcare providers to securely engage their patients and practitioners through the mobile channel.

Mobile Security Articles

  • 2012 State of Mobile Health IT - The 2nd Annual HIMSS Mobile Technology Survey, sponsored by Qualcomm Life, found that over 90 percent of respondents reported physicians using mobile technology in their everyday operations. 
  • Latest Federal Mobile Malware Report - The Internet Crime Complaint Center (IC3), a partnership between the Federal Bureau of Investigation (FBI) and the National White Collar Crime Center (NW3C), recently released a report on the latest versions of mobile malware to affect Android smartphones.
  • PCI Mobile Payment Security Recommendations Released by PCI SSC - The PCI SSC (Payment Card Industry Security Standards Council) just released a document addressing mobile device (smartphone, tablet or PDA) payments, PCI Mobile Payment Acceptance Security Guidelines, version 1.0.

 


Midwest IT Resource Library

Online Tech’s Midwest Advantage: Dedicated to Serving Our Neighbors

Visit the resources in our Midwest technology library to learn more about the wide range of hosting solutions we offer, including colocation, managed dedicated servers, cloud hosting and disaster recovery.

Have questions? Call us at 734.213.2020, use our handy Contact Form, or chat with our team now.

Midwest Resources
  • Midwest Technology Topics
  • Presentations
  • White Papers
  • Webinars

Learn More About Online Tech's Midwest Hosting Services

Established in 1994, Online Tech originated as one of Michigan’s first Internet Service Providers and has evolved to become one of the Midwest’s largest managed data center operators. As the leader in the state’s multi-tenant hosting market, we continue to make investments to sustain annual growth of more than 30 percent and support our clients in many diverse industries.

We’re dedicated to serving Midwest businesses in our own backyard, and provide a significant advantage over East and West coast providers with the strategic design of our facilities. Learn more about us in our Company overview.

  • Cloud Hosting Overview
  • Managed Server Hosting Overview
  • Colocation Hosting Overview
  • Disaster Recovery Hosting Overview


Michigan IT Resource Library

Online Tech’s Deep Michigan Roots: Economic Gardening at its Finest

Established in 1994, Online Tech has evolved to become one of Michigan’s largest managed data center operators, providing multi-tenant hosting. We’re dedicated to serving local Michigan businesses and provide a significant advantage over East and West coast providers with the strategic design of our facilities - find out why by visiting Michigan Data Centers.

Have questions? Call us at 734.213.2020, use our handy Contact Form, or chat with our team now.

Michigan Resources
  • Michigan Technology Topics
  • Presentations
  • White Papers
  • Webinars

Learn More About Online Tech's Michigan Hosting Services

  • Cloud Hosting Overview
  • Managed Server Hosting Overview
  • Colocation Hosting Overview
  • Disaster Recovery Hosting Overview

 

As the leader in the state’s multi-tenant hosting market, we continue to make investments to sustain annual growth of more than 30 percent and support our clients in many diverse industries.

Visit the resources in our Michigan technology library to learn more about the wide range of hosting solutions we offer, including colocation, managed dedicated servers, cloud hosting and disaster recovery.

