HIPAA Compliance for Medical Practices
HIPAA Compliance and HIPAA Risk management Articles, Tips and Updates for Medical Practices and Physicians

The UCLA Health System Data Breach: How Bad Could It Be…?


Just hours ago, a Los Angeles Times report broke the news that hackers had broken into the UCLA Health System, creating a data breach that may affect 4.5 million people. This may turn out to be one of the biggest breaches of its kind at a single patient care organization in the U.S. healthcare system to date. And it follows by only a few months the enormous data breach at Anthem, one of the nation’s largest commercial health insurers, which potentially compromised the data of up to 80 million Americans.


The L.A. Times report, by Chad Terhune, noted that “The university said there was no evidence yet that patient data were taken, but it can't rule out that possibility while the investigation continues.” And it quoted Dr. James Atkinson, interim president of the UCLA Hospital System, as saying, “We take this attack on our systems extremely seriously. For patients that entrust us with their care, their privacy is our highest priority. We deeply regret this has happened.”


But Terhune also was able to report a truly damning fact. He writes, “The revelation that UCLA hadn't taken the basic step of encrypting this patient data drew swift criticism from security experts and patient advocates, particularly at a time when cybercriminals are targeting so many big players in healthcare, retail and government.” And he quotes Dr. Deborah Peel, founder of Patient Privacy Rights in Austin, Texas, as saying, “These breaches will keep happening because the healthcare industry has built so many systems with thousands of weak links.”


What’s startling is that the breach at the Indianapolis-based Anthem, revealed on Feb. 5, which compromised the data of up to 80 million health plan members, shared two very important characteristics with the UCLA Health breach, so far as we know at this moment, just hours after news of the UCLA breach broke. Both were created by hackers; and both involved unencrypted data. That’s right—according to the L.A. Times report, UCLA Health’s data was also unencrypted.


Unencrypted? Yes, really. And the reality is that, even though the majority of patient care organizations do not yet encrypt their core, identifiable, protected health information (PHI) within their electronic health records (EHRs) when not being clinically exchanged, this breach speaks to a transition that patient care organizations should consider making soon. That is particularly so in light of the Anthem case. Indeed, as I noted in a Feb. 9 blog on the subject, “[A]s presented in one of the class action lawsuits just recently filed against it,” the language of that suit “contains the seeds of what could evolve into a functional legal standard on what will be required for health plans—and providers—to avoid being hit with multi-million-dollar judgments in breach cases.”


As I further stated in that blog, “I think one of the key causes in the above complaint [lawsuits were filed against Anthem within a few days of the breach] is this one: ‘the imminent and certainly impending injury flowing from potential fraud and identity theft posed by their personal and financial information being placed in the hands of hackers; damages to and diminution in value of their personal and financial information entrusted to Anthem for the sole purpose of obtaining health insurance from Anthem and with the mutual understanding that Anthem would safeguard Plaintiff’s and Class members’ data against theft and not allow access and misuse of their data by others.’ In other words, simply by signing up, or being signed up by their employers, with Anthem, for health insurance, health plan members are relying on Anthem to fully safeguard their data, and a significant data breach is essentially what is known in the law as a tort.”


Now, I am not a torts or personal injury lawyer, and I don’t even play one on TV. But I can see how the failure to encrypt core PHI within EHRs may soon become a legal liability.


Per that, just consider a March 20 op-ed column in The Washington Post by Andrea Peterson, with the quite-compelling headline, “2015 is already the year of the health-care hack—and it’s going to get worse.” In it, Peterson,  who, according to her authoring information at the close of the column, “covers technology policy for The Washington Post, with an emphasis on cybersecurity, consumer privacy, transparency, surveillance and open government,” notes that “Last year, the fallout from a string of breaches at major retailers like Target and Home Depot had consumers on edge. But 2015 is shaping up to be the year consumers should be taking a closer look at who is guarding their health information.” Indeed, she notes, “Data about more than 120 million people has been compromised in more than 1,100 separate breaches at organizations handling protected health data since 2009, according to Department of Health and Human Services data reviewed by The Washington Post.” Well, at this point, that figure would now be about 124.5 million, if the UCLA Health breach turns out to be as bad as one imagines it might be.


Indeed, Peterson writes, “Most breaches of data from health organizations are small and don't involve hackers breaking into a company's computer system. Some involve a stolen laptop or the inappropriate disposal of paper records, for example -- and not all necessarily involve medical information. But hacking-related incidents disclosed this year have dramatically driven up the number of people exposed by breaches in this sector. When Anthem, the nation's second-largest health insurer, announced in February that hackers broke into a database containing the personal information of nearly 80 million records related to consumers, that one incident more than doubled the number of people affected by breaches in the health industry since the agency started publicly reporting on the issue in 2009.”


And she quotes Rachel Seeger, a spokesperson for the Office for Civil Rights in the Department of Health and Human Services, as saying in a statement, following the Anthem breach, “These incidents have the potential to affect very large numbers of health care consumers, as evidenced by the recent Anthem and Premera breaches."


So this latest breach is big, and it is scary. And it might be easy (and lazy blogging and journalism) to describe this UCLA Health data breach as a “wake-up call”; but honestly, we’ve already had a series of wake-up calls in the U.S. healthcare industry over the past year or so. How many “wake-up calls” do we need before hospitals and other patient care organizations move to impose strong encryption regimens on their core sensitive data? The mind boggles at the prospects for the next 12 months in healthcare—truly.
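To make the encryption point concrete: encrypting sensitive fields at rest does not require exotic tooling. The sketch below is a minimal, hypothetical illustration using the Python `cryptography` library's Fernet interface (authenticated AES encryption), not any EHR vendor's actual implementation; real deployments hinge on key management (an HSM or key-management service) and access controls, which are only hinted at here.

```python
# Minimal sketch: encrypting a PHI field at rest with Fernet (AES in CBC mode plus an HMAC).
# Assumes the third-party `cryptography` package is installed; key storage, rotation and
# access control are the hard parts in practice and are out of scope here.
from cryptography.fernet import Fernet

def load_or_create_key(path: str = "phi_field.key") -> bytes:
    # In production the key would live in an HSM or key-management service,
    # never on disk next to the data it protects.
    try:
        with open(path, "rb") as f:
            return f.read()
    except FileNotFoundError:
        key = Fernet.generate_key()
        with open(path, "wb") as f:
            f.write(key)
        return key

def encrypt_phi(value: str, key: bytes) -> bytes:
    return Fernet(key).encrypt(value.encode("utf-8"))

def decrypt_phi(token: bytes, key: bytes) -> str:
    return Fernet(key).decrypt(token).decode("utf-8")

if __name__ == "__main__":
    key = load_or_create_key()
    ciphertext = encrypt_phi("SSN=123-45-6789", key)   # made-up example value
    print(ciphertext)                                  # stored form is unreadable without the key
    print(decrypt_phi(ciphertext, key))                # authorized access recovers the plaintext
```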


State AGs clash with Congress over data breach laws


Attorneys general from all 47 states with data breach notification laws are urging Congress not to preempt local rules with a federal standard.

“Any additional protections afforded consumers by a federal law must not diminish the important role states already play protecting consumers from data breaches and identity theft,” they wrote in a letter sent to congressional leaders on Tuesday.

Lawmakers have been weighing a number of measures that would create nationwide guidelines for notifying customers in the wake of a hack that exposes sensitive information. Industry groups have argued that complying with the patchwork set of rules in each state is burdensome and costly.


The rapidly rising number of breaches at retailers, banks and government agencies has only raised pressure on Congress to pass legislation.

While the concept of a federal standard has bipartisan appeal, the two parties have split over whether to totally preempt state laws.

Democrats fear a nationwide rubric that preempts state law could weaken standards in states that have moved aggressively on data breach laws. Republicans fear that an overly strict federal standard could empower overzealous government regulators.

Lawmakers also disagree on what type of breaches should trigger a notification.

The differing views have spawned a cavalcade of bills on Capitol Hill, many of which would preempt state laws.

“Given the almost constant stream of data security breaches, state attorneys general must be able to continue our robust enforcement of data breach laws,” said Vermont Attorney General William Sorrell, who oversees a law that requires companies to notify officials within 14 days of discovering a breach, in a statement. “A federal law is desirable, but only if it maintains the strong consumer protection provisions in place in many states.”

Many state attorneys general, including Sorrell, favor a Senate data breach bill offered by Sen. Patrick Leahy (D-Vt.) and co-sponsored by five other Democrats.

Notably, the bill does not preempt state laws that are stricter than the standard delineated in Leahy’s bill.

It also provides a broad definition of what type of information would constitute a notification-worthy breach. It includes photos and videos in addition to more traditional sensitive data such as Social Security numbers or financial account information.

But most important for states is retaining their ability to set their own standards.

“States should also be assured continued flexibility to adapt their state laws to respond to changes in technology and data collection,” the letter said. “As we have seen over the past decade, states are better equipped to quickly adjust to the challenges presented by a data-driven economy.”


Website Error Leads to Data Breach


An error in a coding upgrade for a Blue Shield of California website resulted in a breach affecting 843 individuals. The incident is a reminder to all organizations about the importance of sound systems development life cycle practices.


In a notification letter being mailed by Blue Shield of California to affected members, the insurer says the breach involved a secure website that group health benefit plan administrators and brokers use to manage information about their own plans' members. "As the unintended result of a computer code update Blue Shield made to the website on May 9," the letter states, three users who logged into their own website accounts simultaneously were able to view member information associated with the other users' accounts. The problem was reported to Blue Shield's privacy office on May 18.


Blue Shield of California tells Information Security Media Group that the site affected was the company's Blue Shield Employer Portal. "This issue did not impact Blue Shield's public/member website," the company says. When the issue was discovered, the website was promptly taken offline to identify and fix the problem, according to the insurer.


"The website was returned to service on May 19, 2015," according to the notification letter. The insurer is offering all impacted individuals free credit monitoring and identity theft resolution services for one year.


Exposed information included names, Social Security numbers, Blue Shield identification numbers, dates of birth and home addresses. "None of your financial information was made available as a result of this incident," the notification letter says. "The users who had unauthorized access to PHI as a result of this incident have confirmed that they did not retain copies, they did not use or further disclose your PHI, and that they have deleted, returned to Blue Shield, and/or securely destroyed all records of the PHI they accessed without authorization."


The Blue Shield of California notification letter also notes that the company's investigation revealed that the breach "was the result of human error on the part of Blue Shield staff members, and the matter was not reported to law enforcement authorities for further investigation."

Similar Incidents

The coding error at Blue Shield of California that led to the users being able to view other individuals' information isn't a first in terms of programming mistakes on a healthcare-sector website leading to privacy concerns.


For example, in the early weeks of the launch of HealthCare.gov in the fall of 2013, a software glitch allowed a North Carolina consumer to access personal information of a South Carolina man. The Department of Health and Human Services' Centers for Medicare and Medicaid Services said at the time that the mistake was "immediately" fixed once the problem was reported. Still, the incident raised more concerns about the overall security of the Affordable Care Act health information exchange site.


Software design and coding mistakes that leave PHI viewable on websites have led to at least one healthcare entity paying a financial penalty to HHS' Office for Civil Rights.


An OCR investigation of Phoenix Cardiac Surgery P.C., with offices in Phoenix and Prescott, began in February 2009, following a report that the practice was posting clinical and surgical appointments for its patients on an Internet-based calendar that was publicly accessible.

The investigation determined the practice had implemented few policies and procedures to comply with the HIPAA privacy and security rules and had limited safeguards in place to protect patients' information, according to an HHS statement. The investigation led to the healthcare practice signing an OCR resolution agreement, which included a corrective action plan and a $100,000 financial penalty.


The corrective action plan required the physician practice, among other measures, to conduct a risk assessment and implement appropriate policies and procedures.

Measures to Take

Security and privacy expert Andrew Hicks, director and healthcare practice lead at the risk management consulting firm Coalfire, says that to avoid website-related mistakes that can lead to privacy breaches, it's important that entities implement appropriate controls as well as follow the right systems development steps.


"Organizations should have a sound systems development life cycle - SDLC - in place to assess all systems in a production environment, especially those that are externally facing," he says. "Components of a mature SDLC would include code reviews, user acceptance testing, change management, systems analysis, penetration testing, and application validation testing."


Healthcare entities and business associates need to strive for more than just HIPAA compliance to avoid similar mishaps, he notes.

"Organizations that are solely seeking HIPAA compliance - rather than a comprehensive information security program - will never have the assurance that website vulnerabilities have been mitigated through the implementation of appropriate controls," he says. "In other words, HIPAA does not explicitly require penetration testing, secure code reviews, change management, and patch management, to name a few. These concepts are fundamental to IT security, but absent from any OCR regulation, including HIPAA."

Earlier Blue Shield Breach

About a year ago, Blue Shield of California reported a data breach involving several spreadsheet reports that inadvertently contained the Social Security numbers of 18,000 physicians and other healthcare providers.


The spreadsheets submitted by the plan were released 10 times by the state's Department of Managed Health Care. In California, health plans electronically submit monthly to the state agency a roster of all physicians and other medical providers who have contracts with the insurers. Those rosters are supposed to contain the healthcare providers' names, business addresses, business phones, medical groups and practice areas - but not Social Security numbers. DMHC makes those rosters available to the public, upon request.


Hospital ID Theft Leads to Fraud


Eight alleged members of an identity theft ring, including a former assistant clerk at Montefiore Medical Center in New York, have been indicted on a variety of charges stemming from using stolen information on nearly 13,000 patients to make purchases at retailers.


Ann Patterson, senior vice president and program director of the Medical Identity Fraud Alliance, says that the incident points to the need for ongoing vigilance by healthcare organizations to prevent and detect ID theft and other related crimes.


Manhattan District Attorney Cyrus Vance Jr. alleges in a statement that members of the ID theft ring made up to $50,000 in purchases at retailers in Manhattan by opening up store credit card accounts using patient information stolen by former hospital worker, Monique Walker, 32.


Walker was an assistant clerk at Montefiore Medical Center, where her position gave her access to patients' names, dates of birth, Social Security numbers, and other personal information, Vance says.

Between 2012 and 2013, Walker allegedly printed thousands of patients' records on a near daily basis and supplied them to a co-defendant, Fernando Salazar, 28, according to Vance's statement.

Salazar is accused of acting as the ringleader of the operation. He allegedly purchased at least 250 items of personal identifying information from Walker for as little as $3 per record, Vance says.


The stolen information was then allegedly provided to other defendants to open credit card accounts that were used for purchasing gift cards and merchandise at retailers, including Barneys New York, Macy's, Victoria's Secret, Zales, Bergdorf Goodman and Lord & Taylor.

Walker is charged with one count of felony grand larceny and one count of felony unlawful possession of personal identification information. The other defendants are charged with varying counts of grand larceny, identity theft and criminal possession of a forged instrument, among other charges.


All of the defendants have been arrested and arraigned in criminal court, and have various dates pending for their next court appearances.


"Case after case, we've seen how theft by a single company insider, who is often working with identity thieves on the outside, can rapidly victimize a business and thousands of its customers," Vance says. "Motivated by greed, profit and a complete disregard for their victims, identity thieves often feed stolen information to larger criminal operations, which then go on to defraud additional businesses and victims. In this case, a hospital employee privy to confidential patient records allegedly sold financial information for as little $3 per record."

Hospital Fires Worker

A Montefiore spokeswoman tells Information Security Media Group that the medical center was informed by law enforcement on May 15 of Walker's alleged crimes dating back to 2012 and 2013. As a result, Walker, who worked for the hospital for about three years, was fired, the spokeswoman says. "Montefiore is fully cooperating with law enforcement, including the Manhattan District Attorney's office," a hospital statement says.


Law enforcement discovered the connection to Montefiore patient information while investigators were working on the ID theft case, the Montefiore spokeswoman says.


Of the 12,000-plus patient records that were compromised, it's uncertain how many individuals are victims of ID theft crimes, she says. But as a precaution, Montefiore is offering all impacted patients free identity recovery services, 12 months of free credit monitoring and a $1 million insurance policy to protect against identity theft-related costs.


Montefiore has reported the breach to the Department of Health and Human Services Office for Civil Rights, the spokeswoman says. While that incident as of June 22 was not yet listed on HHS' "wall of shame" tally of health data breaches affecting 500 or more individuals, three other breaches at Montefiore Medical Center appear on the federal website.


Those incidents, all reported in 2010, involved the theft of unencrypted computers. That includes the theft of a laptop in March 2010, which resulted in a breach impacting 625 individuals, and two July 2010 thefts of desktop computers that impacted 16,820 and 23,753 individuals.

Breach Prevention Steps

In a statement, Montefiore says that following the alleged crimes committed by Walker that were discovered in May, the hospital has expanded both its technology monitoring capabilities and employee training on safeguarding and accessing patient records to further bolster its privacy safeguards.


"The employee involved in this case received significant privacy and security training and despite that training, chose to violate our policies," the statement notes. "In response to this incident, Montefiore is also adding additional technical safeguards to protect patient information from theft or similar criminal activity in the future."


A hospital spokeswoman says the hospital has rolled out "sophisticated technology" to monitor for improper access by employees to the hospital's electronic patient records.
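The article does not describe what that monitoring technology actually does, but a first-pass version of such detection can be as simple as flagging accounts whose daily record-access volume is a statistical outlier, which would surface a clerk pulling thousands of records a day. A hypothetical sketch, with invented audit-log field names:

```python
# Hypothetical sketch: flag employees whose daily record accesses are statistical outliers.
# Field names (user, day, patient_id) are invented, not Montefiore's audit-log schema.
from collections import defaultdict
from statistics import mean, pstdev

def daily_access_counts(audit_log):
    counts = defaultdict(int)
    for entry in audit_log:
        counts[(entry["user"], entry["day"])] += 1
    return counts

def flag_outliers(audit_log, z_threshold=3.0):
    counts = daily_access_counts(audit_log)
    values = list(counts.values())
    mu, sigma = mean(values), pstdev(values) or 1.0
    return sorted({user for (user, day), n in counts.items() if (n - mu) / sigma > z_threshold})

if __name__ == "__main__":
    log = ([{"user": "clerk_17", "day": "2013-04-02", "patient_id": i} for i in range(900)]
           + [{"user": f"staff_{j}", "day": "2013-04-02", "patient_id": j} for j in range(40)])
    print(flag_outliers(log))   # ['clerk_17'] -- hundreds of pulls in one day stands out
```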


The hospital also says it performs criminal background checks on all employees and "has comprehensive policies and procedures, as well as a code of conduct, which prohibits employees from looking at patient records when there is not a work-related reason to do so."

Steps to Take

Dan Berger, CEO of security consulting firm RedSpin, says it's not surprising the breach went undetected for so long because insider attacks are difficult to uncover. It's unclear if the Montefiore hospital clerk had "good reason to access so many records" as part of her job, he notes.


Patterson of the Medical Identity Fraud Alliance notes: "In addition to proper vetting of employees, the continued evaluation of employee education and awareness training programs and of your internal fraud detection programs is necessary. It's not something you do once and are done. Employees who are properly vetted upon initial hire may have changing circumstances that change their work integrity later on in their employ."


Additionally, security measures often need tweaking as circumstances within an organization change, she says.


"Fraud detection processes that worked when a specific type of workflow procedure was in place may need to be adjusted as that workflow process changes. An emphasis on continued evaluation of all components - people, process, technologies - for fraud detection is good practice."


Workforce training is important not only for preventing breaches, including those involving ID crimes, but also to help detect those incidents, she says. "Each employee must understand their role in protecting PHI. Equally important is regular and continued evaluation of the training programs to make sure that employees are adhering to the policies put in place, and that the 'red flags' detection systems are keeping pace with changing technologies and workplace practices."


Physicians: Protect Your Data from Hackers in 5 Steps


According to a recent CNBC report, hackers may have stolen personnel data and Social Security numbers for every single federal employee last December. If true, the cyberattack on federal employee data is far worse than the Obama administration has acknowledged.

J. David Cox, president of the American Federation of Government Employees union, believes "hackers stole military records and veterans' status information, address, birth date, job and pay history, health insurance, life insurance, and pension information; [as well as] age, gender, race data," according to the report. This would be all that is needed for cybercriminals to steal identities of the employees, divert funds from one account to another, submit fake healthcare claims, and create fake accounts for everything from credit cards to in-store credit card purchases.


Although physicians maintain personal and professional data that is especially valuable to thieves, you are not the federal government. Make it hard enough on cybercriminals, and they will move on to lower-hanging fruit. Reader's Digest offers good advice in five simple steps in its article, "Internet Security, How not to Get Hacked":


1. Be aware of what you share.


On Facebook, Twitter, or social media, avoid posting birth dates, graduation years, or your mother's maiden name — info often used to answer security questions to access your accounts online or over the phone.


2. Pick a strong password.


Hackers guess passwords using a computer. The longer your password and the more nonsensical characters it contains, the longer it takes the computer to crack it. The idea here is that longer, more complicated passwords could take a computer 1,000 years to guess. Give 'em a challenge.
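That "1,000 years" claim is just arithmetic: the search space grows exponentially with password length and character variety. A back-of-the-envelope sketch, where the guessing rate is an illustrative assumption rather than a measured attacker capability:

```python
# Back-of-the-envelope: how long exhaustive guessing takes as length and character set grow.
# The guess rate is an illustrative assumption, not a measured attacker capability.
GUESSES_PER_SECOND = 1e10             # assumed offline cracking rate
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_exhaust(length: int, alphabet_size: int) -> float:
    return (alphabet_size ** length) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for length, alphabet, label in [(8, 26, "8 lowercase letters"),
                                (8, 94, "8 mixed printable characters"),
                                (14, 94, "14 mixed printable characters")]:
    print(f"{label}: ~{years_to_exhaust(length, alphabet):.2g} years to exhaust the search space")
```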


3. Use a two-step password if offered.


Facebook and Gmail have an optional security feature that, once activated, requires you to enter two things to access your account: your normal password plus a code that the company texts to your phone. "The added step is a slight inconvenience that's worth the trouble when the alternative can be getting hacked," CNET tech writer Matt Elliot told Reader's Digest. To set up the verification on Gmail, click on Account, then Security. On Facebook, log in, click on the down icon next to Home, and then click on Account Settings, Security, and finally Login Approvals.
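The texted codes described above are one form of one-time password; many services also support authenticator apps that compute the same kind of code locally using the time-based one-time password (TOTP) algorithm. The following is a hedged illustration of that mechanism, not Facebook's or Google's actual server code:

```python
# Minimal RFC 6238 TOTP generator, showing the mechanism behind "second step" codes.
# The base32 secret below is a textbook example value, not a real account credential.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                # time window shared with the server
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

if __name__ == "__main__":
    print(totp("JBSWY3DPEHPK3PXP"))   # prints the 6-digit code for the current 30-second window
```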


4. Use Wi-Fi hot spots sparingly.


By now, you probably know that Internet cafés and free hotspots are not secure. You shouldn't be doing your online banking from these spots. However, the little button that turns off your laptop's Wi-Fi so that your laptop cannot be accessed remotely is also handy. In Windows, right-click on the wireless icon in the taskbar to turn it off. On a Mac, click the Wi-Fi icon in the menu bar to turn off Wi-Fi.


5. Back up your data.


Hackers can delete years' worth of e-mails, photos, documents, and music from your computer in minutes. Protect your digital files by using a simple and free backup system available on websites such as CrashPlan and Dropbox.
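Cloud services aside, even a short script that zips working files into a dated archive satisfies the basic advice here; the archive folder can then be mirrored by a service such as Dropbox or CrashPlan. A minimal sketch with placeholder paths:

```python
# Minimal local backup: zip a working directory into a dated archive.
# The source and destination paths are placeholders; a sync service can mirror the archive folder.
import shutil
from datetime import date
from pathlib import Path

def backup(source_dir: Path, archive_dir: Path) -> str:
    archive_dir.mkdir(parents=True, exist_ok=True)
    stamp = date.today().isoformat()
    # make_archive appends ".zip" and returns the full path of the archive it wrote
    return shutil.make_archive(str(archive_dir / f"backup-{stamp}"), "zip", str(source_dir))

if __name__ == "__main__":
    print(backup(Path("~/Documents").expanduser(), Path("~/Backups").expanduser()))
```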


Take this basic instruction and build on it yourself. Google, for example, offers advice expanding on the concept of "strong passwords." The worst thing you can do is use "dictionary words," the word "password," and sequential keystrokes, such as "1234" or "qwerty," because hackers' computers will try these first. For e-mail, pick a phrase, such as "[m]y friends Tom and Jasmine send me a funny e-mail once a day" and then use numbers and letters to recreate it as a cryptic password: "MfT&Jsmafe1ad."


Unencrypted Laptop Leads To US HealthWorks Data Breach


U.S. HealthWorks, a California-based health care service provider specializing in urgent care and occupational medicine, recently alerted employees to a data breach after a password protected (but unencrypted) laptop was stolen in April.


According to its website, the company operates over 200 locations in 20 states and has 3,600 employees, but it was unclear in the notification of the breach exactly how many people may be affected.


The letter explains how an internal investigation began shortly after the company was notified on April 22, 2015, that a laptop issued to an employee was stolen from their vehicle overnight.


“On May 5, 2015, we determined that the employee’s laptop was password protected, but it was not encrypted. After conducting a thorough review, we determined that the laptop may have contained files that included your name, address, date of birth, job title, and Social Security number. Although we continue to work with law enforcement, at this time, the computer has not been located,” U.S. HealthWorks said in its notice letter to employees.


The company did not confirm whether any personal information has been accessed or used inappropriately, but it said it will offer employees free enrollment in identity protection services for one year as a precautionary measure. U.S. HealthWorks reported efforts to ensure compliance with its laptop encryption policy going forward, including an enhancement to deployment procedures for laptops and full disk encryption.


With the number of security breaches on the rise, the importance of organizations controlling and protecting data is critical.

“If you have laptops in your enterprise environment, and let’s face it who doesn’t, you need to address this issue. In this day and age there really isn’t a good reason to not encrypt the hard drives on your laptops,” wrote Forbes contributor Dave Lewis in a post Monday (June 1).


While the scope and effects of this particular breach are unclear, U.S. HealthWorks does not need to look far to see that data breaches can wreak havoc. Anthem Inc., Target, Home Depot and many others have learned the hard way about the ongoing financial impacts associated with data breaches. A recent study by Ponemon Institute found that the average cost of a data breach is now more than $3.8 million, a 23 percent increase from the levels seen two years ago.


Cost of data breach at all time high: $3.8 mn and climbing


Ponemon Institute released its annual Cost of Data Breach Study: Global Analysis, sponsored by IBM. According to the benchmark study of 350 companies spanning 11 countries, the average consolidated total cost of a data breach is $3.8 million, representing a 23 percent increase since 2013.


The study also found that the average cost incurred for each lost or stolen record containing sensitive and confidential information increased six percent from a consolidated average of $145 to $154. Healthcare emerged as the industry with the highest cost per stolen record with the average cost for organizations reaching as high as $363. Additionally, retailers have seen their average cost per stolen record jump dramatically from $105 last year to $165 in this year’s study.

“Based on our field research, we identified three major reasons why the cost keeps climbing,” said Dr. Larry Ponemon, chairman and founder, Ponemon Institute. “First, cyber attacks are increasing both in frequency and in the cost required to resolve these security incidents. Second, the financial consequences of losing customers in the aftermath of a breach are having a greater impact on the cost. Third, more companies are incurring higher costs in their forensic and investigative activities, assessments and crisis team management.”

The following are key takeaways:


Board-level involvement and the purchase of insurance can reduce the cost of a data breach. For the first time, the study looked at the positive consequences that can result when boards of directors take a more active role when an organization has a data breach. Board involvement reduces the cost by $5.50 per record. Insurance protection reduces the cost by $4.40 per record.


Business continuity management plays an important role in reducing the cost of data breach. The research reveals that having business continuity management involved in the remediation of the breach can reduce the cost by an average of $7.10 per compromised record.

The most costly breaches continue to occur in the U.S. and Germany at $217 and $211 per compromised record, respectively. India and Brazil still have the least expensive breaches at $56 and $78, respectively.


The cost of data breach varies by industry. The average global cost of data breach per lost or stolen record is $154. However, if a healthcare organization has a breach, the average cost could be as high as $363, and in education the average cost could be as high as $300. The lowest cost per lost or stolen record is in transportation ($121) and public sector ($68).


Hackers and criminal insiders cause the most data breaches. Forty-seven percent of all breaches in this year’s study were caused by malicious or criminal attacks. The average cost per record to resolve such an attack is $170. In contrast, system glitches cost $142 per record and human error or negligence is $137 per record. The US and Germany spend the most to resolve a malicious or criminal attack ($230 and $224 per record, respectively).


Notification costs remain low, but costs associated with lost business steadily increase. Lost business costs include abnormal turnover of customers, increased customer acquisition activities, reputation losses and diminished goodwill. The average cost has increased from $1.23 million in 2013 to $1.57 million in 2015. Notification costs decreased from $190,000 to $170,000 since last year.


Time to identify and contain a data breach affects the cost. For the first time, our study shows the relationship between how quickly an organization can identify and contain data breach incidents and financial consequences. Malicious attacks can take an average of 256 days to identify while data breaches caused by human error take an average of 158 days to identify. As discussed earlier, malicious or criminal attacks are the most costly data breaches.
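Taken together, the study's per-record averages and per-record reductions allow a rough, purely illustrative exposure estimate. The sketch below simply multiplies the figures quoted above; it is not the Ponemon Institute's actual costing model.

```python
# Rough exposure estimate built only from the per-record averages quoted in the study (illustrative).
BASE_PER_RECORD = {"global": 154, "healthcare": 363, "education": 300,
                   "transportation": 121, "public_sector": 68}
REDUCTIONS = {"board_involvement": 5.50, "cyber_insurance": 4.40,
              "business_continuity": 7.10}   # dollars saved per record, per the study

def estimated_cost(records: int, sector: str = "global", mitigations=()) -> float:
    per_record = BASE_PER_RECORD[sector] - sum(REDUCTIONS[m] for m in mitigations)
    return records * per_record

if __name__ == "__main__":
    # A 10,000-record healthcare breach, with and without the three mitigating factors:
    print(estimated_cost(10_000, "healthcare"))          # 3,630,000
    print(estimated_cost(10_000, "healthcare",
                         ("board_involvement", "cyber_insurance", "business_continuity")))  # 3,460,000
```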


“The growing sophistication and collaboration of cybercriminals ties directly with the historic costs we’re seeing for data breaches,” said Marc van Zadelhoff, Vice President of Strategy, IBM Security. “The industry needs to organize at the same level as hackers to help defend themselves from these continuing attacks. The use of advanced analytics, sharing threat intelligence data and collaborating across the industry will help to even the playing field against attackers while helping mitigate the cost to commerce and society.”


Predicting the Likelihood of a Data Breach


For the second year, the research looks at the likelihood of a company having one or more data breaches in the next 24 months. Based on the experiences of companies participating in this research, the probability is based on two factors: how many records were lost or stolen and the company’s industry. According to the findings, Brazilian and French companies are more likely to have a data breach involving a minimum of 10,000 records. In contrast, organizations in Germany and Canada are least likely to have a breach. In all cases, it is more likely a company will have a breach involving 10,000 or fewer records than a mega breach involving more than 100,000 records.


Your Cyber-Risk Policy: What it Covers and What it Doesn't


In healthcare, we deal with highly sensitive and very private electronic information, so of course our ears perk up every time we see headlines about the latest cyber threat or breach. The natural question is whether this could happen to us. This is constructive if it leads to cyber-risk prevention. But all too often, folks are responding with, "it could not happen to me," or "my insurance policy covers this so I'm prepared." These folks are ignoring the growing cyber threat around all of us. They are whistling past the "cyber" graveyard.

We live in a digital age where almost everything is accessible — even more now with the evolution of EHRs — so we have to run our businesses as though we are all at risk. To be prepared, we must first understand the common sources of cyber risk. Second, we must understand the basics of cyber insurance policies we may or may not have in place.


There are several ways breaches at small healthcare organizations may occur:


1. Disgruntled employees are one of the leading reasons for cyber attacks. They know your systems — likely better than you do — so keep a close watch on them and what type of data they have access to. Really pay close attention to new staff and those that may be on their way out. Also make sure they know they are monitored.

2. Cyber criminals are looking for remote Internet access services with weak passwords. Require and enforce more complex passwords and require employees to change their passwords regularly.
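"Require and enforce more complex passwords" usually comes down to a policy check at the point of password change. The sketch below is illustrative only; the thresholds and the tiny common-password list are assumptions for the example, not a regulatory requirement.

```python
# Illustrative password-policy check; the thresholds and the tiny common-password list
# are assumptions for the sketch, not a HIPAA-mandated standard.
import re

COMMON = {"password", "123456", "qwerty", "letmein", "welcome1"}

def password_problems(candidate: str, min_length: int = 12) -> list:
    problems = []
    if len(candidate) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if candidate.lower() in COMMON:
        problems.append("found in common-password list")
    for pattern, need in [(r"[a-z]", "a lowercase letter"), (r"[A-Z]", "an uppercase letter"),
                          (r"\d", "a digit"), (r"[^A-Za-z0-9]", "a symbol")]:
        if not re.search(pattern, candidate):
            problems.append(f"missing {need}")
    return problems   # an empty list means the candidate passes the policy

if __name__ == "__main__":
    print(password_problems("Winter2015"))           # too short, no symbol
    print(password_problems("My#dog8ate2homework"))  # passes this illustrative policy
```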


A smart form of cyber protection is a cyber-risk insurance policy. These provide bundled services designed to help you quickly respond to a data breach. However, there are many cyber insurance product options to consider. These range from standalone policies with high limits and comprehensive services to policy add-on coverages typically offering less coverage.


Rather than stumbling through a maze of complicated cyber-related insurance rhetoric, do yourself a favor and review your options with an experienced broker:


• Carefully scrutinize "free" cyber coverage or riders added onto your base coverage. While not totally worthless, the majority come nowhere near covering the exposure of a potential cyber breach (which explains why they are typically thrown in at no additional cost). In reviewing your insurance coverages with your broker, it's easy to brush by this one and mentally check off the fact that you have cyber coverage. Drill into the details of what's covered, as outlined below.

• Find out how much you are covered for and what out-of-pocket expenses you could expect. A data breach at a small physician practice could run into the hundreds of thousands of dollars or even higher. This type of uncovered damage could put a small practice out of business. Some expenses physicians can expect to incur when a breach occurs include legal fees, IT forensic costs, notification costs, credit monitoring costs, and public relations and advertising expenses to reclaim patient goodwill as well as making the public aware of the steps taken to address the breach.


Cyber risk is not just a technology issue. It affects all elements of the healthcare business and needs to be well-planned and mitigated through ongoing education and risk-management programs.


House Committee OK's Bill Altering HIPAA


The 21st Century Cures bill, which aims to advance medical research and innovation, has passed another Congressional hurdle without any revisions to controversial provisions that call for significant changes to the HIPAA Privacy Rule.


By unanimous vote, the full House Committee on Energy and Commerce on May 21 approved the legislation that calls for the Secretary of Health and Human Services to "revise or clarify" the HIPAA Privacy Rule's provisions on the use and disclosure of protected health information for research purposes.


A version of the bill containing the same HIPAA-related provisions was approved on May 14 by the committee's health subcommittee. Next, the legislation will head to the full House of Representatives.

Relaxing HIPAA

Under the current HIPAA Privacy Rule, PHI is allowed to be used or disclosed by a covered entity for healthcare treatment, payment and operations without authorization by the patient. If the proposed provision in the draft legislation is signed into law, patient authorization would not be required for PHI use or disclosure for research purposes if covered entities or business associates, as defined under HIPAA, are involved.


That provision - as well as many others in the bill - aims to help speed up research that could lead to the availability of promising medical treatments and devices, in part, by removing barriers.


But some privacy advocates are opposed to the HIPAA-related proposals because of the potential of watering down patient control over how sensitive health information is used or disclosed.

"There is good and bad in the provisions," says Michelle DeMooy, deputy director of the privacy project of the Center for Democracy & Technology.


"The bill addresses the big demand for researchers having access to PHI for medical research, and having their PHI used for research is something that many people generally want to do," she says. However, because the bill also promotes the advancement of personalized medicine, also known as precision medicine, that research undoubtedly will involve use of "highly sensitive, highly identifiable genetic data," which some patients will want more control over in terms of its use and disclosure, she says. "The patient control is being relaxed, yet it's unclear to me where the data will go," she says.


DeMooy says she'd like to see data de-identification requirements added to the legislation so that researchers "cannot identify or contact the people whose data they have."

A Closer Look?

Currently, there is no Senate version of the bill. However, if the House passes the legislation and sends it to the Senate, DeMooy says she anticipates that "some privacy champions" will take a closer look at the HIPAA privacy changes being proposed and their possible ramifications.


In addition to the privacy provisions, the bill calls for penalizing vendors of electronic health records systems and other health IT systems that fail to meet standards for interoperable and secure information exchange. Plus, the bill also contains provisions for potential civil monetary penalties against healthcare entities that inappropriately block information sharing.



Shocks and surprises in new breach trend studies


Since 2010, HHS has documented more than 1,000 major data breaches (where each incident involved the compromise of more than 500 patient records). Now we’re starting to see some in-depth analyses of those breaches.


In the new issue of the Journal of the American Medical Association (JAMA), there’s a study that concludes that 29 million medical records were compromised between 2010 and 2013.

The JAMA study also found that six of the breaches involved at least one million records each – and more than one third of all breaches occurred in just five states: California, Texas, Florida, New York and Illinois.


The study was accompanied by an earnest editorial subtitled “The Importance of Good Data Hygiene.” The authors called for a total overhaul of HIPAA, which they described as “antiquated and inadequate.” They noted that HIPAA doesn’t adequately regulate the use of Protected Health Information (PHI) by “digital behemoths” like Apple, Google, Facebook and Twitter.


In addition to the JAMA report, our company did an extensive analysis of 2014 data breach trends summarized here. We thoroughly documented 89 of those breaches, and we excluded the huge Community Health Systems breach so it wouldn’t skew the other data. Here are the most important trends we spotted:

Non-digital breaches still a problem

In the 89 incidents, paper breaches accounted for 9 percent of compromised records in the first half of 2014 – and 31 percent in the second half. Nearly 200,000 paper records were compromised, plus about 60,000 pieces of individually identifiable health information ranging from lab specimens to x-rays. Obviously, it’s still vitally important to safeguard the confidentiality of non-digital health records. Organizations must clarify and enforce policies and procedures to achieve that goal.

Theft of portables still a concern

We confirmed the loss or theft of 12 portable computing devices last year – and the lack of appropriate physical safeguards was a major contributing factor. In addition to taking greater common-sense precautions, organizations should use whole-disk encryption and other technical safeguards to render PHI unusable, unreadable or indecipherable to unauthorized people. Policies and procedures for portable device security need to be clearly communicated to all employees – and workforce training needs to involve much more than a dry online tutorial.

Watch out for rogue employees and business associates

We uncovered 45 incidents involving company insiders that resulted in the compromise of nearly half a million records. In other words, about half of all the data breaches were the result of mistakes or malice by an organization’s own people. It’s impossible to prevent every workforce-related breach, but everyone in the organization needs to be on the lookout for unusual activities that could spell trouble. All employees and BAs need to know that the hammer will come down – swiftly and consistently – on insiders who intentionally compromise patient data.

No organization should shout “hooray” simply for avoiding an Anthem-scale breach. There are many other incidents – improper disposal of paper records, misplaced x-rays, employee snooping, and more – that can still do a lot of financial and reputational damage. Those are the types of breaches that even a HIPAA tech-fix can’t solve.

These breach trend summaries agree on one main point: healthcare organizations need to constantly assess the maturity of their information risk management programs – and not view them as a narrowly defined “HIPAA compliance” duty.



Data Breach Costs to Soar to $2.1 Trillion


The rapid digitization of consumers’ lives and enterprise records will increase the cost of data breaches to $2.1 trillion globally by 2019.

That’s according to Juniper Research, which found in a recent study that breach costs will increase to almost four times the estimated cost of breaches in 2015. And, the average cost of a data breach will exceed $150 million by 2020, as more business infrastructure gets connected.


The research, entitled ‘The Future of Cybercrime & Security: Financial and Corporate Threats & Mitigation’, has found that the majority of these breaches will come from existing IT and network infrastructure. While new threats targeting mobile devices and the Internet of Things (IoT) are being reported at an increasing rate, the number of infected devices is minimal in comparison to more traditional computing devices.

The report also highlights the increasing professionalism of cybercrime, with the emergence of cybercrime products (i.e. sale of malware creation software) over the past year, as well as the decline in casual activist hacks. Hacktivism has become more successful and less prolific—in the future, Juniper expects fewer attacks overall, but more successful ones.


“Currently, we aren’t seeing much dangerous mobile or IoT malware because it’s not profitable,” noted report author James Moar. “The kind of threats we will see on these devices will be either ransomware, with consumers’ devices locked down until they pay the hackers to use their devices, or as part of botnets, where processing power is harnessed as part of a more lucrative hack. With the absence of a direct payout from IoT hacks, there is little motive for criminals to develop the required tools.”


In terms of geography, nearly 60% of anticipated data breaches worldwide in 2015 will occur in North America, the firm said, but this proportion will decrease over time as other countries become both richer and more digitized.




Don’t Forget the Paper: Records and Policies


Another HIPAA breach settlement announcement and another lesson from the Department of Health and Human Services Office for Civil Rights (“OCR”). Cornell Prescription Pharmacy (“Cornell”) is a single-location pharmacy in Colorado that will pay OCR $125,000 to resolve allegations of a variety of HIPAA violations. Given the facts of the case, the settlement amount may well raise the question of why it was so low.


The issues at Cornell were revealed to OCR by a local news station. The station found paper-based protected health information disposed of in an unsecured dumpster generally accessible to the public. After receiving the report, OCR investigated Cornell. OCR’s investigation revealed that Cornell had no written policies in place to implement the HIPAA Privacy Rule, no training regarding Privacy Rule requirements was conducted, and protected health information was not reasonably safeguarded.


Despite all of these findings, as indicated above, Cornell faces only a $125,000 settlement amount in addition to the usual requirement to enter into a corrective action plan. It is interesting to note that on April 27, 2015, when the settlement was announced, the first Resolution Agreement posted showed a resolution payment of $767,520. No information has been provided to explain the reduction. One possible answer is that Cornell is a very small entity and may not have been able to afford the higher resolution amount. It will be worth watching for more information on this point.


As set forth in the settlement announcement, OCR wants every entity to know that it may be subject to HIPAA enforcement, including fines and penalties. A quote from OCR Director Jocelyn Samuels says it all: “Regardless of size, organizations cannot abandon protected health information or dispose of it in dumpsters or other containers that are accessible by ... unauthorized persons.” It is incumbent upon all organizations to implement appropriate policies and procedures to satisfy HIPAA requirements.


One of the more stunning aspects of the Cornell settlement was the revelation that Cornell had no written policies or procedures to comply with the Privacy Rule. This is slightly different from other settlements where OCR found inadequate or non-existent security policies. Arguably, privacy policies are easier to implement because the Privacy Rule provides a fairly comprehensive and clear-cut guide to what policies and procedures need to be put into place. Additionally, there is no need to do the equivalent of a risk analysis, as there is for security policies, to determine what to put into place.


While the statement about no policies being in place should be shocking, multiple recent surveys have found that a lack of knowledge about HIPAA is still fairly widespread. HIPAA in its original form has been around for almost 20 years at this point. Why is it that organizations still do not know what they need to do to comply? Is it unintentional lack of awareness or something more deliberate? No matter the reason, the government is clearly monitoring and looking for organizations that are not in compliance. The resolution amounts remain wildly unpredictable, but many statements have suggested that recent fines will pale in comparison to fines that will be levied in the future. It is better for organizations to get their houses in order now rather than have an audit uncover deficiencies. It is a safe bet that any problems found in an audit will result in higher fines being assessed.



Cybercrime price tag to reach $2 trillion


If you haven't gotten serious about data cyberattacks at your organization, now's the time to do so. Because they're about to hit companies worldwide with a $2.1 trillion price tag.


At least that's according to new research published by Juniper Research, which took a closer look at the costs associated with cybercrime and what they'll end up costing companies on a global scale. And the numbers are staggering.


Going digital will increase the cost of data breaches to almost four times the cost estimated for this year, reaching $2.1 trillion (yes, that's trillion with a "t") in 2019. Breaking that down to the average cost of one of the breaches? Corporations can count on paying more than $150 million per breach by 2020.


The report, which focuses on both corporate and financial threats, underscored that the lion's share of these breaches will not come from targeting mobile devices. Rather, cybercriminals are still going after traditional IT and network infrastructure.


"Currently, we aren't seeing much dangerous mobile or IoT malware because it's not profitable," said James Moar, research analyst at Juniper Research and author of the report, in a press statement. "The kind of threats we will see on these devices will be either ransomware, with consumers' devices locked down until they pay the hackers to use their devices, or as part of botnets, where processing power is harnessed as part of a more lucrative hack. With the absence of a direct payout from IoT hacks, there is little motive for criminals to develop the required tools."


Juniper analysts also say some 60 percent of these global cyberattacks will target North American companies. 



Hospital Slammed With $218,000 HIPAA Fine


Federal regulators have slapped a Boston area hospital with a $218,000 HIPAA penalty after an investigation following two security incidents. One involved staff members using an Internet site to share documents containing patient data without first assessing risks. The other involved the theft of a worker's personally owned unencrypted laptop and storage device.


The Department of Health and Human Services' Office for Civil Rights says it has entered a resolution agreement with St. Elizabeth's Medical Center that also includes a "robust" corrective action plan to correct deficiencies in the hospital's HIPAA compliance program.

The Brighton, Mass.-based medical center is part of Steward Health Care System.


Privacy and security experts say the OCR settlement offers a number of valuable lessons, including the importance of the workforce knowing how to report security issues internally, as well as the need to have strong policies and procedures for safeguarding PHI in the cloud.

Complaint Filed

On Nov. 16, 2012, OCR received a complaint alleging noncompliance with HIPAA by medical center workforce members. "Specifically, the complaint alleged that workforce members used an Internet-based document sharing application to store documents containing electronic protected health information of at least 498 individuals without having analyzed the risks associated with such a practice," the OCR statement says.


OCR's subsequent investigation determined that the medical center "failed to timely identify and respond to the known security incident, mitigate the harmful effects of the security incident and document the security incident and its outcome."


"Organizations must pay particular attention to HIPAA's requirements when using internet-based document sharing applications," says Jocelyn Samuels, OCR director in the statement. "In order to reduce potential risks and vulnerabilities, all workforce members must follow all policies and procedures, and entities must ensure that incidents are reported and mitigated in a timely manner."


Separately, on Aug. 25, 2014, St. Elizabeth's Medical Center submitted notification to OCR regarding a breach involving unencrypted ePHI stored on a former hospital workforce member's personal laptop and USB flash drive, affecting 595 individuals. The OCR "wall of shame" website of health data breaches impacting 500 or more individuals says the incident involved a theft.

Corrective Action Plan

In addition to the financial penalty - which OCR says takes into consideration the circumstances of the complaint and breach, the size of the entity, and the type of PHI disclosed - the agreement includes a corrective action plan "to cure gaps in the organization's HIPAA compliance program raised by both the complaint and the breach."

The plan calls for the medical center to:


  • Conduct a "self-assessment" of workforce members' familiarity and compliance with the hospital's policies and procedures that address issues including transmission and storage of ePHI;
  • Review and revise policies and procedures related to ePHI; and
  • Revise workforce training related to HIPAA and protection of PHI.


Lessons Learned

Other healthcare organizations and their business associates need to heed some lessons from OCR's latest HIPAA enforcement action, two compliance experts say.


Privacy attorney Adam Greene of the law firm Davis Wright Tremaine notes: "The settlement indicates that OCR first learned of alleged noncompliance through complaints by the covered entity's workforce members. Entities should consider whether their employees know how to report HIPAA issues internally to the privacy and security officers and ensure that any concerns are adequately addressed. Otherwise, the employees' next stop may be complaining to the government."

The settlement also highlights the importance of having a cloud computing strategy, Greene points out. That strategy, he says, should include "policies, training and potential technical safeguards to keep PHI off of unauthorized online file-sharing services."


The enforcement action spotlights the continuing challenge of preventing unencrypted PHI from ending up on personal devices, where it may become the subject of a breach, he notes.


The case also sheds light on how OCR evaluates compliance issues, he says. "The settlement highlights that OCR will look at multiple HIPAA incidents together, as it is not clear that OCR would have entered into a settlement agreement if there had only been the incident involving online file sharing software, but took action after an unrelated second incident involving PHI ending up on personal devices."


Privacy attorney David Holtzman, vice president of compliance at security consulting firm CynergisTek, says the settlement "serves as an important reminder that a covered entity or a business associate must make sure that the organization's risk assessment takes into account any relationship where PHI has been disclosed to a contractor or vendor so as to ensure that appropriate safeguards to protect the data are in place."


The alleged violations involving the document sharing vendor, he says, "involve failure to have a BA agreement in place prior to disclosing PHI to the vendor, as well as failing to have appropriate security management processes in place to evaluate when a BA agreement is needed when bringing on a new contractor that will handle PHI."

St. Elizabeth's Medical Center did not immediately respond to an Information Security Media Group request for comment.

Previous Settlements

The settlement with the Boston-area medical center is the second HIPAA resolution agreement signed by OCR so far this year. In April, the agency OK'd an agreement with Cornell Prescription Pharmacy for an incident related to insecure disposal of paper records containing PHI. In that agreement, Cornell was fined $125,000 and also adopted a corrective action plan to address deficiencies in its HIPAA compliance program.


The settlement with St. Elizabeth's is OCR's 25th HIPAA enforcement action involving a financial penalty and/or resolution agreement since 2008.


But privacy advocate Deborah Peel, M.D., founder of Patient Privacy Rights, says OCR isn't doing enough to crack down on organizations involved in HIPAA privacy breaches.


"Assessing penalties that low - St. Elizabeth will pay $218,400 - guarantees that virtually no organizations will fix their destructive practices," she says. "Industry views low fines as simply a cost of doing business. They'll take their chances and see if they're caught."

The largest HIPAA financial penalty to date issued by OCR was a $4.8 million settlement with New York-Presbyterian Hospital and Columbia University for incidents tied to the same 2010 breach that affected about 6,800 patients. The incidents involved unsecured patient data on a network.


Premera Blue Cross Data Breach Results in Several Lawsuits, Class Actions


Premera is the third-largest health insurer in Washington State, and was hit with a cyber attack initiated on May 5 of last year. The Premera attack exposed the personal information of as many as 11 million current and former clients of Premera across the US. While Premera noted on January 29 of this year - the day the data breach was discovered - that, according to the best available information, none of the personal data had been misused, the fact remains that the data mined by cyber attackers is exactly the kind of information useful for perpetrating identity theft.

To that end, it has been reported that the cyber attackers targeted sensitive personal information such as names, dates of birth, Social Security numbers, mailing addresses, e-mail addresses, phone numbers, member identification numbers, bank account information, and claims and clinical information.

As for why the attack was not discovered for some eight months, Premera has said little. However, the breadth of the attack - affecting some 11 million people - and the delay in discovering the breach (initiated May 5, 2014 and revealed January 29, 2015) will likely provide much fodder for Premera cyber attack lawsuits.

According to the Puget Sound Business Journal, the New York Times had suggested the Premera cyber attack may have been perpetrated by the same China-based hackers who are suspected of breaching the federal Office of Personnel Management (OPM) last month. However, the VP for communications at Premera, Eric Earling, notes there is no certainty the attack originated in China.

“We don’t have definitive evidence on the source of the attack and have not commented on that,” he said. “It continues to be under investigation by the FBI [Federal Bureau of Investigation] and we would leave the speculation to others.”

That said, it has been reported that the US government has traced all of these attacks to China.

Recent data breach attacks, including the Vivacity data breach and Connexion data breach, reflect a shift in targets, according to cyber attack experts. The attacks on the data systems of the federal OPM notwithstanding, it seems apparent that hackers are increasingly shifting their targets to health insurers, in part due to the breadth of information available from the health records of clients.

The goal of cyber attackers in recent months, according to claims appearing in the New York Times, is to amass a huge trove of data on Americans.

Given such a headline as “Premera Blue Cross Reports Data Breach of 11 Million Accounts,” it appears they have a good start. While it might be a “win” for the hackers who acquired such data surreptitiously and illegally, it remains a huge loss in both privacy and peace of mind for the millions of Americans who entrust their personal information to insurance providers, who, in turn, require such information in order to provide service. Consumers and clients have also historically assumed that such providers have taken steps to ensure their personal information is secure.

When it isn’t - and it takes eight months for a cyber attack to be identified - consumers have little recourse other than to launch a Premera cyber attack lawsuit in order to seek compensation for the breach, and as a hedge against the frustration down the road were the breach to evolve into full-blown identity theft.

To that end, five class-action data breach lawsuits have been filed in US District Court in Seattle. According to reports, two of the five lawsuits allege that Premera was warned in an April 2014 draft audit by the OPM that its IT systems “were vulnerable to attack because of inadequate security precautions.”

Tennielle Cossey et al. vs. Premera asserts that the audit in question, “identified… vulnerabilities related to Premera’s failure to implement critical security patches and software updates, and warned that ‘failure to promptly install important updates increases the risk that vulnerabilities will not be.’

“If the [OPM] audit were not enough, the events of 2014 alone should have placed Premera on notice of the need to improve its cyber security systems.”

Moving forward, Premera Blue Cross data breach lawsuits are being consolidated into multidistrict litigation, given the number of Americans affected and their various locations across the country. An initial case management conference has been scheduled for August 7.


243 Charged in Medicare Fraud Schemes


Federal authorities announced their largest national Medicare fraud takedown to date, involving criminal charges against 243 individuals allegedly responsible for false billing totaling approximately $712 million.


In a June 18 joint announcement, officials at the Department of Health and Human Services, Department of Justice and FBI said a "nationwide sweep" led by the Medicare Fraud Strike Force in 17 districts has resulted in charging 243 individuals, including 46 physicians, nurses and other licensed medical professionals, for their alleged participation in Medicare fraud schemes. As of June 18, 184 defendants had been taken into custody, a DOJ spokesman says.


Officials called "the coordinated takedown" the largest in strike force history, both in terms of the number of defendants charged and the loss amount.


The sweep also resulted in the Centers for Medicare and Medicaid Services using its authority under the Affordable Care Act to suspend a number of healthcare providers from participating in the Medicare program.

Variety of Charges

The defendants in the takedown are charged with various healthcare fraud-related crimes, including conspiracy to commit healthcare fraud, violations of the anti-kickback statutes, money laundering and aggravated identity theft. The charges are based on a variety of alleged fraud schemes involving various medical treatments and services, including home healthcare, psychotherapy, physical and occupational therapy, durable medical equipment and pharmacy fraud.

More than 44 of the defendants are charged with fraud related to the Medicare prescription drug benefit program known as Part D, which regulators say is the fastest-growing component of the Medicare program.


"This takedown adds to the hundreds of millions we have saved through fraud prevention since the Affordable Care Act was passed," said HHS Secretary Sylvia Mathews Burwell. "With increased resources that have allowed the Strike Force to expand and new tools, like enhanced screening and enrollment requirements, tough new rules and sentences for criminals, and advanced predictive modeling technology, we have managed to better find and fight fraud as well as stop it before it starts."


The Medicare Fraud Strike Force, a multi-agency team of federal, state and local investigators and prosecutors designed to combat Medicare fraud through the use of Medicare data analysis techniques, coordinated the investigation. Since the program's inception in March 2007, Strike Force operations in nine locations have charged more than 2,300 defendants who collectively are alleged to have falsely billed the Medicare program for more than $7 billion, according to federal authorities.


Among the large Medicare busts was the May 2014 arrest of 90 individuals in six states who were allegedly tied to Medicare fraud schemes responsible for $260 million worth of false billings. Also, in October 2012, federal authorities announced a Medicare fraud crackdown that involved charges against 91 individuals in fraud schemes allegedly involving approximately $492 million in false billing.

A Wake-Up Call

Security expert Mac McMillan, CEO of the consultancy CynergisTek, says the magnitude of the most recent Medicare takedown is significant. "This should be a wake-up call to those healthcare professionals who think it is OK to fudge around the edges, or in some cases just outright steal from the system, that their days are numbered and the feds are serious about curbing this very important problem," he says. "Hopefully it will have some impact, but frankly, right now, it seems like someone declared open season on healthcare between this [type of fraud] and the hacks we've seen lately."


Healthcare entities can help in the battle against fraud by monitoring for criminal behavior within their own organizations, he says. "One of the simplest ways is to perform periodic audits of what workforce members involved in preparing or handling claims are doing, as well as audits of patients receiving discharge summaries and bills."


Additionally, more commercial health insurers should follow CMS's lead and implement analytical tools that can help detect suspicious activities, he says. "They are the only really effective tools for proactive monitoring and detection," he says. "Those committing fraud may not cause a compliance trigger to be activated, but generally fraud requires an abnormal event to occur. Monitor for those, and you have a better chance of detecting inappropriate behavior."
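McMillan's point about proactive monitoring can be made concrete with a small example. The sketch below - a toy illustration, not any CMS or commercial fraud tool - assumes a hypothetical dictionary of monthly claim totals per provider and flags providers whose billing sits far above the group average, the kind of abnormal event he suggests monitoring for.

```python
from statistics import mean, stdev

def flag_outlier_billers(claims_by_provider, threshold=2.5):
    """Return provider IDs whose monthly claim totals are unusually high.

    claims_by_provider: dict mapping provider ID -> total dollars claimed this month.
    A provider is flagged when its total exceeds the group mean by more than
    `threshold` standard deviations -- a crude stand-in for the "abnormal event"
    a real analytics tool would surface for auditors.
    """
    totals = list(claims_by_provider.values())
    if len(totals) < 2:
        return []
    mu, sigma = mean(totals), stdev(totals)
    if sigma == 0:
        return []
    return [pid for pid, amount in claims_by_provider.items()
            if (amount - mu) / sigma > threshold]

# Hypothetical month of claims: nine providers bill around $12,000, one bills $98,000.
claims = {"prov-01": 11_800, "prov-02": 12_100, "prov-03": 12_400, "prov-04": 11_900,
          "prov-05": 12_300, "prov-06": 12_000, "prov-07": 12_200, "prov-08": 11_700,
          "prov-09": 12_600, "prov-10": 98_000}
print(flag_outlier_billers(claims))   # ['prov-10']
```

A real system would also compare each provider against its own history and its peer specialty, but the principle - define a baseline, alert on deviations - is the same.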

Fraud Scams Busted

Among those charged in the latest Medicare fraud takedown were individuals in six states:


  • Seventy-three defendants in Miami were charged with offenses relating to their alleged participation in various fraud schemes involving approximately $263 million in false billings for home healthcare, mental health services and pharmacy fraud. In one case, administrators in a mental health center billed close to $64 million between 2006 and 2012 for purported intensive mental health treatment to beneficiaries and allegedly paid kickbacks to patient recruiters and assisted living facility owners. Medicare paid approximately half of the claimed amount.
  • Twenty-two individuals in Houston and McAllen, Texas, were charged in cases involving more than $38 million in alleged fraud. One of these defendants allegedly coached beneficiaries on what to tell doctors to make them appear eligible for Medicare services and treatments and then received payment for those who qualified. The company that paid the defendant for recruiting patients to bill for medically unnecessary services submitted close to $16 million in claims to Medicare, more than $4 million of which was paid.
  • Seven people in Dallas were charged in connection with home healthcare schemes. In one scheme, six owners and operators of a physician house call company allegedly submitted nearly $43 million in billings under the name of a single doctor, regardless of who actually provided the service. The company also allegedly significantly exaggerated the length of physician visits, often billing for 90 minutes or more for an appointment that lasted only 15 or 20 minutes.
  • Eight individuals in Los Angeles were charged for their alleged roles in schemes to defraud Medicare of approximately $66 million. For example, a physician is charged with causing almost $23 million in losses to Medicare through his own fraudulent billing and referrals for durable medical equipment, including more than 1,000 power wheelchairs and home health services that were not medically necessary and often not provided.
  • Sixteen defendants in Detroit were charged for their alleged roles in fraud, kickback and money laundering schemes involving approximately $122 million in false claims for services that were medically unnecessary or never rendered, including home healthcare, physician visits and psychotherapy, as well as pharmaceuticals that were billed but not dispensed. Among those charged are three owners of a hospice service who allegedly paid kickbacks for referrals made by two doctors who defrauded Medicare Part D by issuing medically unnecessary prescriptions.
  • Five individuals in Tampa were charged with participating in a variety of alleged scams, ranging from fraudulent physical therapy billings to a scheme involving millions of dollars worth of claims for physician services and tests that never were provided. In one case, a licensed pain management physician sought reimbursement for nerve conduction studies and other services that he allegedly never performed. Medicare paid the defendant more than $1 million for these purported services.
  • Nine individuals in Brooklyn, N.Y., were charged in two separate criminal schemes allegedly involving physical and occupational therapy. Three of those defendants face charges for their roles in a previously charged $50 million physical therapy scheme.
  • Eleven people in New Orleans were charged in connection with $110 million worth of alleged home healthcare and psychotherapy schemes. In one case, four individuals who operated two companies - one in Louisiana and one in California - that mass-marketed talking glucose monitors across the country allegedly sent the devices to Medicare beneficiaries regardless of whether they were needed or requested. The companies billed Medicare approximately $38 million for the devices, and Medicare paid the companies more than $22 million.

HIPAA Could Hurt, Not Help, Data Privacy and Security


By now, you have probably heard about the theft of more than 14 million dossiers on federal employees and the theft of the personal health information (PHI) of 80 million people from Anthem-Blue Cross. You may not have heard about many of the other computer security flaws and breaches that are reported almost daily.

Here are a few from the last couple of weeks:


• A vulnerability in Samsung's Android keyboard installed on over 600 million devices worldwide could allow hackers to take full control of the smartphone or tablet.


• Security researchers have uncovered a flaw in the way thousands of popular mobile applications store data online, leaving users' personal information, including passwords, addresses, door codes, and location data, vulnerable to hackers.


• Macs older than a year are vulnerable to exploits that remotely overwrite the firmware that boots up the machine, a feat that allows attackers to control vulnerable devices from the very first instruction.


• Professor Phil Koopman, an expert who testified at one of the Toyota "sticking throttle" trials, detailed a myriad of defects in the software of the throttle control system and in Toyota's software development process. Michael Barr, another expert, cited a heavily redacted report that suggests the presence of at least 243 violations of the "Power of 10—Rules for Developing Safety Critical Code," published in IEEE Computer in 2006 by NASA team member Gerard Holzmann.


• The Boeing 787 aircraft's electrical power control units shut down if powered without interruption for 248 days. As a result, the FAA is telling the airlines they have to do a maintenance reboot of their planes every 120 days.


I've always assumed, as I imagine that you have, that, if any organizations could be expected to use "best practices" and thereby avoid flaws and breaches, it would be Anthem, the feds, Google, Samsung, Apple, Boeing, and Toyota. The only reasonable conclusion is that impenetrable, flaw-free systems are simply not possible and this will not change any time soon. Keep that in mind during the upcoming discussion.


The government, at the behest of lawmakers, loves to tell people what to do. Feasibility and relevance are annoying details best dispensed with. Even vocal conservatives and libertarians, who should be staying out of other people's business on principle, love to tell people what to do. These folks got together in 1996 and enacted HIPAA (in full disclosure, I testified before a congressional subcommittee on this bill before it was enacted).


Among other things, HIPAA tells people what to do about privacy and security of patient data, but without much evidence that they needed telling.


I always wondered:


1. Were privacy and security a huge, out-of-control problem before HIPAA?


2. What was the evidence that existing laws regarding inappropriate release of PHI were not sufficient to induce people to exercise due diligence? If they were adequate, were they being enforced? If they were inadequate could they not have been strengthened?


3. Has HIPAA helped?


4. Do billions of signed statements acknowledging privacy policies actually protect anyone's privacy?


5. If there was an incremental improvement as a result of HIPAA, was it worth the billions that have been spent?


6. Do the penalties reduce the chances of a breach?


7. And finally, is there any chance that the technical measures that are demanded will be effective, given the state of the art?


The approach to the first six questions has basically been one of "don't ask, don't tell," so we will never be able to judge whether the whole thing was worth the trouble or not. The answer to the last question, based on the material presented in the introduction, is: No. The technical expectations embodied in HIPAA are little more than someone's dream. There is no evidence that even the most capable, best-resourced organizations in the country are capable of satisfying them (that doesn't mean they shouldn't try). A great deal of time and money could be saved or redirected to patient care if a more realistic approach were taken toward privacy and security. The magnitude and prevalence of breaches has been growing steadily. As it stands, HIPAA may actually be harmful because it distracts attention and diverts resources away from those actions that might actually improve privacy and security.


Privacy Workgroup Prepares ‘Big Data’ Recommendations

The Privacy and Security Workgroup of the Health IT Policy Committee is preparing a set of recommendations about how the Office of the National Coordinator for Health IT should approach “big data” issues for both HIPAA-covered entities as well as for the marketplace outside the HIPAA sphere. 
 
At a June 8 meeting, Deven McGraw, a partner in the healthcare practice of Manatt, Phelps & Phillips, LLP and the workgroup’s chair, led a discussion of draft recommendations to identify gaps in law and regulation around issues including data de-identification and security as well as areas for further inquiry.
 
McGraw noted that outside the HIPAA-covered space, there is not a clearly defined right for patients to access data collected about them. She said there has been a debate with respect to medical devices, citing one patient who publicly argued that he had the right to access data from his pacemaker. The workgroup proposes to remind ONC that outside the HIPAA space, voluntarily adopted codes of conduct can be enforced by the Federal Trade Commission, and many of those codes are under development.
 
During the meeting there was discussion of, but not agreement about, what it would mean to ask for greater transparency about the algorithms healthcare organizations use to make decisions about individuals and populations, and whether provisions of the Fair Credit Reporting Act could be applied to give consumers more access and help promote trust. Several committee members mentioned that the algorithms themselves could be accurate and valid, yet still be used to discriminate against specific populations or individuals. They also said there would be resistance to opening up proprietary analytics systems for inspection.
 
“All of this rests on a presumption of data quality,” said Gil Kuperman, director of interoperability informatics at New York-Presbyterian Hospital. “If you have poor quality data, your model could be wrong. Or the model could be good, but if the input data is wrong, you get a poor prediction. To me the quality of the data is still a challenge around ‘big data’ approaches.”
 
McGraw admitted the workgroup has more questions than obvious answers and no consensus about areas of potential harm to consumers. She said there is a need for more inquiry to understand the scope of the issue and where there are gaps in legal protections. There was a general reluctance among workgroup members to suggest that Congress act, given its questionable track record legislating about complex health IT issues. 
 
The workgroup is drafting language to call on the HHS Office for Civil Rights to be a better “steward” of HIPAA de-identification standards and conduct ongoing review of the methodologies and policies and seek assistance from third-party experts, such as NIST. But it is still not clear how big a problem data re-identification is. Noting that the workgroup was not made aware of any HIPAA de-identified data set that has been re-identified, McGraw said, “It is never good to regulate a problem that doesn’t exist yet.”

Here's how healthcare can guard against data breaches in the "year of the hack"


Protected Health Information, or PHI, is increasingly attractive to cybercriminals. According to PhishLabs, health records can fetch as much as 10 times the value of credit card data on the black market.

Stolen healthcare records can be used for fraudulent billing which, unlike financial fraud, can go undetected for long periods of time. The rising price of healthcare records on the market is attracting more cybercriminals, who are exploiting any vulnerability they can find, be it an unpatched system or an insecure endpoint device.


We’ve all heard about several devastating data breaches in the healthcare industry this year – Anthem’s breach of more than 78 million records and the Premera Blue Cross breach of 11 million records. In the first quarter of 2015 alone, there were 87 reported data breaches affecting 500 or more individuals, according to data from the US Department of Health and Human Services Office for Civil Rights. These breaches affected a combined total of 92.3 million individuals, up 3,709 percent from Q1 2014.


Given the mega breaches experienced by Anthem and Premera, one could consider them outliers. For comparison, excluding those two breaches would still leave a 4.9 percent increase in individuals affected in the first quarter of 2015 versus the same quarter in 2014. Although the first three months of 2014 saw three more data breaches than occurred in 2015, it is clear that the number of individuals affected per breach is on the rise.

2015 is the year of the “hack”, but people are still the root cause.

In the first quarter of the year, 33 percent of data breaches were attributed to hacking or an “IT incident,” but the methods by which cybercriminals have successfully penetrated corporate networks are quite telling. These breaches have originated from unencrypted data, unpatched systems, or compromised passwords. In 2015, several hacking incidents have been tracked back to the compromise of a single set of credentials.


The Verizon 2015 Data Breach Investigations Report analyzed nearly 80,000 security incidents including 2,122 confirmed data breaches. Its findings reveal that despite the rise in cyberattacks, 90 percent of security incidents are tied back to people and their mistakes including phishing, bad behavior, or lost devices. The report notes that, even with a detailed technical report of a security incident, the “actual root cause typically boils down to process and human decision-making.” This is frightening but also good news, as there are measures that can be taken to reduce these risks by improving upon process and education, complemented by the right data security solutions.


It’s not all about the network


Healthcare organizations reacting to data breach headlines may focus efforts on protecting the network, leaving data vulnerable to other attack vectors and overlooking the people and process risks that ultimately result in most data breaches.


Cyberattacks come from many different vector points. It only takes one missing device, one use of unsecured WiFi, one compromised password, or one click on a phishing email to compromise the entire corporate network. Many of these risks, which originate on the endpoint, put the corporate network at risk. Current data security strategies in healthcare cannot be network versus endpoint, nor can they ignore the "people" risk that is only amplified by such trends as BYOD, mobile work, the cloud, and the Internet of Things.


A holistic approach to healthcare security


If we don’t adopt a different approach – one that addresses the multitude of options available to cybercriminals – breaches will continue to occur. Healthcare organizations that want to get ahead of cybercriminals need to create a holistic approach to data security that incorporates threat prevention, incident detection, and efficient response.


Reduce “the attack surface”


Every point of interaction with PHI puts that data at risk. Reducing the sum total of these points of interaction – the attack surface – can reduce the risk to the data. I suggest a layered approach to data security which decreases the attack surface across endpoints as well as the network, including:

  • A foundation of tight controls and processes;
  • Encryption is a must, but on its own is often circumvented;
  • Supplement encryption with a persistent technology that will provide a connection with a device, regardless of user or location while defeating attempts to remove the technology;
  • Network segmentation is key — granular access controls and tools for continuous monitoring offer real-time intelligence about the devices on the network and the security status of these systems;
  • Automate security remediation activities, such as setting new firewall rules or locking down a device when suspicious activity is detected (see the sketch after this list).
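The final bullet above - automated remediation - can be sketched in a few lines. This is a hypothetical illustration only: it assumes a Linux gateway where the script runs with privileges to call iptables, and the alert format is invented; a production deployment would go through the firewall or NAC vendor's own API.

```python
import subprocess

def block_source_ip(ip: str) -> None:
    """Insert a firewall rule dropping all traffic from a suspicious address."""
    subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"], check=True)

def remediate(alert: dict) -> None:
    """React automatically to a hypothetical alert record from a monitoring tool."""
    # Example alert: {"type": "brute_force", "src_ip": "203.0.113.7"}
    if alert.get("type") in {"brute_force", "malware_beacon"}:
        block_source_ip(alert["src_ip"])
        print(f"Blocked {alert['src_ip']} pending investigation")

# remediate({"type": "brute_force", "src_ip": "203.0.113.7"})
```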


Minimize the “people” risk


You can have the best firewalls, encryption and network access controls, but your employees are still your weakest link. Through a combination of process (education and ongoing interactive training) and technology (such as mobile device management), make employees aware of their part in protecting corporate data on endpoints.


Know how to detect anomalies


Conduct regular security audits on the network and endpoints. Know where your sensitive data resides and how it’s being used (or misused, in the case of employees) with the aid of a data loss prevention (DLP) tool. Most DLP and endpoint security tools can create automated alerts for suspicious activity.
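As a rough illustration of the kind of automated alerting a DLP tool provides, the snippet below scans text leaving an endpoint for PHI-like patterns. The patterns are deliberately simplistic and the medical-record-number format is invented; real DLP products combine many more signals and contextual rules.

```python
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")                 # looks like a Social Security number
MRN_PATTERN = re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE)  # hypothetical medical record number format

def scan_outbound_text(text: str, source: str) -> list:
    """Return alert messages if outbound text appears to contain PHI."""
    alerts = []
    if SSN_PATTERN.search(text):
        alerts.append(f"Possible SSN leaving {source}")
    if MRN_PATTERN.search(text):
        alerts.append(f"Possible medical record number leaving {source}")
    return alerts

print(scan_outbound_text("Patient John Doe, SSN 123-45-6789, MRN: 00482913",
                         "laptop-42 outbound email"))
```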


Develop and maintain an incident response plan


With clear procedures in place to pursue anomalies and to escalate breach situations, potential risks can be addressed promptly and effectively. With many false positives, skilled IT personnel need to connect the dots (such as a user name change, unauthorized physical changes to the device or the device location, software vulnerabilities, registry changes or unusual system processes) and spot a true security incident quickly. Ensure your endpoint security supports remote actions such as data delete and device freeze.


With data regulations tightening, and healthcare data breaches escalating, don’t give cybercriminals an easy “in” to your organization. Trim the sails and batten the hatches to weather the oncoming storm of cyberattacks with a holistic approach to data security.


The more layers of protection you have in place, the better chance you have of avoiding a breach. Just as sailors can make or break a ship’s success in a storm, your employees are your first line of defense in preventing and detecting a data breach incident. If an incident is discovered, an efficient response plan can help your organization stay afloat in the muddy and complex waters of compliance.


Beacon Health Is Latest Hacker Victim


Yet another large hacker attack has been revealed in the healthcare sector. But unlike three recent cyber-attacks, which targeted health insurers, this latest breach, which affected nearly a quarter-million individuals, involved a healthcare provider organization.


South Bend, Ind.-based Beacon Health System recently began notifying 220,000 patients that their protected health information was exposed as a result of phishing attacks on some employees that started in November 2013, leading to hackers accessing "email boxes" that contained patient data.


The Beacon Health incident is a reminder that healthcare organizations should step up staff training about phishing threats as well as consider adopting multi-factor authentication, shifting to encrypted email and avoiding the use of email to share PHI.

"Email - or at least any confidential email - going outside the organization's local network should be encrypted. And increasingly, healthcare organizations are doing just that," says security and privacy expert Kate Borten.


Unfortunately, in cases where phishing attacks fool employees into giving up their email logon credentials, encryption is moot, she says. "Although encryption is an essential protection when PHI is sent over public networks, and stored somewhere other than within IT control, it is only one of many, many security controls. There's no silver bullet."
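Borten's distinction is worth making concrete: encryption protects data in transit and at rest, but it cannot help once a phished user hands over credentials. Below is a minimal sketch of the encryption half, using the widely used Python `cryptography` package (Fernet symmetric encryption); key generation, exchange and storage - the genuinely hard parts - are not shown.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice the key lives in a key-management system, not in code
cipher = Fernet(key)

message = b"Visit summary for patient #4821 attached."
token = cipher.encrypt(message)           # ciphertext safe to store or send outside the local network
assert cipher.decrypt(token) == message   # only a holder of the key recovers the plaintext
```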

At the University of Vermont Medical Center, which has seen an uptick in phishing scams in recent months, the organization has taken a number of steps to bolster security, including implementing two-factor authentication "for anything facing the Web, because that can pretty much render phishing attacks that are designed to steal credentials useless," says CISO Heather Roszkowski.
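Roszkowski's point is that a stolen password alone is not enough when a second factor is required. The sketch below shows one common second factor, a time-based one-time password (RFC 6238), built only from the Python standard library; a real deployment would use a vetted library and pair this with the password check and rate limiting.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current RFC 6238 time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval                      # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                  # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# The server derives the same code from its copy of the secret and compares it to
# what the user types; a phished password is useless without the current code.
shared_secret = "JBSWY3DPEHPK3PXP"   # example base32 secret, the format authenticator apps use
print(totp(shared_secret))
```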

The Latest Hacker Attack

On March 26, Beacon Health's forensic team discovered the unauthorized access to the employees' email accounts while investigating a cyber-attack. On May 1, the team determined that the affected email accounts contained PHI. The last unauthorized access to any employee email account was on Jan. 26, the health system says.


"While there is no evidence that any sensitive information was actually viewed or removed from the email boxes, Beacon confirmed that patient information was located within certain email boxes," Beacon Health says in a statement posted on its website. "The majority of accessible information related only to patient name, doctor's name, internal patient ID number, and patient status (either active or inactive). The accessible information, which was different for different individuals, included: Social Security number, date of birth, driver's license number, diagnosis, date of service, and treatment and other medical record information."


The provider organization says it has reported the incident to the U.S. Department of Health and Human Services, various state regulators, and the FBI.

Hospital Patients Affected

A Beacon Health spokeswoman tells Information Security Media Group that the majority of those affected by the breach were patients of Memorial Hospital of South Bend or Elkhart General Hospital, which combined have more than 1,000 beds. The two facilities merged in 2012 to form the health system. Individuals who became patients of Beacon Health after Jan. 26 were not affected by the breach, she says.


The breach investigation is being conducted by the organization's own forensics team, the spokeswoman says.

Affected individuals are being offered one year of identity and credit monitoring.


The news about similar hacker attacks earlier this year that targeted health insurers Anthem Inc. and Premera Blue Cross prompted Beacon's forensics investigation team to "closely review" the organization's systems after discovering it was the target of a cyber-attack, the Beacon spokeswoman says.


In the wake of the incident, the organization has been bolstering its security, including making employees better aware of "the sophisticated tactics that are used by attackers," she says. That includes instructing employees to change passwords and warning staff to be careful about the websites and email attachments they click on.

The Phishing Threat

Security experts say other healthcare entities are also vulnerable to phishing.


"The important takeaway is that criminals are using fake email messages - phishing - to trick recipients into clicking links taking them to fake websites where they are prompted to provide their computer account information," says Keith Fricke, principle consultant at consulting firm tw-Security. "Consequently, the fake website captures those credentials for intended unauthorized use. Or they are tricked into opening attachments of these fake emails and the attachment infects their computer with a virus that steals their login credentials."

As for having PHI in email, that's something that, while common, is not recommended, Fricke notes. "Generally speaking, most employees of healthcare organizations do not have PHI in email. In fact, many healthcare organizations do not provide an email account to all of their clinical staff; usually managers and directors of clinical departments have email," he says. "However, for those workers that have a company-issued email account, some may choose to send and receive PHI depending on business process and business need."

Recent Hacker Attacks

As of May 28, the Beacon Health incident was not yet posted on the HHS Office for Civil Rights' "wall of shame" of health data breaches affecting 500 or more individuals.


OCR did not immediately respond to an ISMG request to comment on the recent string of hacker attacks in the healthcare sector.

Other recent hacker attacks, which targeted health insurers, include:


  • An attack on Anthem Inc., which affected 78.8 million individuals and is the largest breach listed on OCR's tally.
  • A cyber-assault on Premera Blue Cross, announced on March 17, that resulted in a breach affecting 11 million individuals.
  • An "unauthorized intrusion" on a CareFirst BlueCross BlueShield database disclosed on May 20. The Baltimore-based insurer says the attack dated back to June 2014, but wasn't discovered until April 2015. The incident resulted in a breach affecting 1.1 million individuals.


But the recent attack on Beacon Health is yet another important reminder to healthcare provider organizations that it's not just insurers that are targets. Last year, a hacking assault on healthcare provider Community Health System affected 4.5 million individuals.

Smaller hacker attacks have also been disclosed recently by other healthcare providers, including Partners HealthCare. And a number of other healthcare organizations in recent months have also reported breaches involving phishing attacks. That includes a breach affecting nearly 760 patients at St. Vincent Medical Group.


"Healthcare provider organizations are also big targets - [they have] more complex environments, and so have more vulnerabilities that the hackers can exploit," says security and privacy expert Rebecca Herold, CEO of The Privacy Professor. "Another contributing factor is insufficient funding for security within most healthcare organizations, resulting in insufficient safeguards for PHI in all locations where it can be stored and accessed."

Delayed Detection

A delay in detecting hacker attacks seems to be a common theme in the healthcare sector. Security experts say several factors contribute to the delayed detection.


"Attacks that compromise an organization's network and systems are harder to detect these days for a few reasons," says Fricke, the consultant. "Criminals wait longer periods of time before taking action once they successfully penetrate an organization's security defenses. In addition, the attack trend is to compromise the accounts of legitimate users rather than gaining unauthorized access to a system via a brute force attack."


When criminals access a system with an authorized account, it's more difficult to detect the intrusion, Fricke notes. "Network security devices and computer systems generate huge volumes of audit log events daily. Proactively searching for indicators of compromise in that volume of log information challenges all organizations today."
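One simple indicator of a compromised-but-authorized account is a sudden jump in activity relative to that account's own history. The sketch below assumes a hypothetical summary of daily record-access counts per user (the kind of figure an audit-log pipeline could produce) and flags accounts whose activity today far exceeds their baseline; it illustrates the idea and is not a substitute for a SIEM.

```python
def flag_unusual_access(daily_counts: dict, multiplier: float = 5.0) -> list:
    """Flag users whose record accesses today far exceed their own historical average.

    daily_counts maps user ID -> list of daily access counts, with today's count last.
    """
    flagged = []
    for user, counts in daily_counts.items():
        history, today = counts[:-1], counts[-1]
        if not history:
            continue
        baseline = sum(history) / len(history)
        if baseline > 0 and today > multiplier * baseline:
            flagged.append(user)
    return flagged

access = {"nurse_a": [40, 38, 44, 41, 39, 43, 240],   # spike on the last day
          "clerk_b": [12, 15, 10, 14, 11, 13, 12]}
print(flag_unusual_access(access))   # ['nurse_a']
```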

As organizations step up their security efforts in the wake of other healthcare breaches, it's likely more incidents will be discovered and revealed, says privacy attorney Adam Greene of the law firm Davis Wright Tremaine.


"The challenge that many healthcare entities face is that oftentimes, the better they do at information security, the more likely it is they find potential problems. Implementing new information security tools sometimes can detect problems that may be years old," he says. "But the alternative - keeping your head in the sand - can lead to far worst results for patients and the organization."


However, as more of these delayed-detection incidents are discovered, "regulators and plaintiffs may question why any particular security issue was not identified and corrected earlier," he warns.

Accordingly, organizations should consider whether there were legitimate reasons for any delays in identifying or correcting security lapses, and maintain documentation supporting the cause of those delays, he suggests.


"Hindsight is 20-20, and it is always easy for regulators to question why more wasn't done sooner, and it could be challenging for the organization if it is asked to justify why it spent resources on other projects," Greene says.


Data Breach Insurance: Does Your Policy Have You Covered?


Recent developments in two closely watched cases suggest that companies that experience data breaches may not be able to get insurance coverage under standard commercial general liability (CGL) policies. CGLs typically provide defense and indemnity coverage for the insured against third-party claims for personal injury, bodily injury or property damage. In the emerging area of insurance coverage for data breaches, court decisions about whether insureds can force their insurance companies to cover costs for data breaches under the broad language of CGLs have been mixed, and little appellate-level authority exists.


Hacker Attacks: Not Just Insurers at Risk


The recently revealed breach of a database at CareFirst BlueCross BlueShield containing information on more than 1.1 million individuals is the latest evidence that hackers are targeting health insurers, and especially Blue Cross and Blue Shield organizations, for the vast amount of protected health information they hold. Security experts warn, however, that other types of organizations, including health information exchanges and large integrated delivery systems, as well as hospitals with electronic health records systems, could be the next targets.


Health insurers "are known to have very large databases of rich personal data that can be sold for identity theft purposes and fraud," says privacy and security expert Kate Borten, founder of The Marblehead Group consultancy. "Midsize and large healthcare provider organizations should also be on high alert for the same reason."


Baltimore-based CareFirst BlueCross BlueShield disclosed on May 20 that an "unauthorized intrusion" into a database dating back to June 2014 resulted in a breach affecting 1.1 million individuals. Other Blues plans that have recently reported cyber-attacks are Anthem Inc., which says its breach impacted 78.8 million individuals, and Premera Blue Cross, which says 11 million were affected by its hacking incident.

Other Targets

Katherine Keefe, who heads breach response at the cyber-insurance company Beazley plc, predicts that health information exchange organizations, given the large volume of data they handle, could be the next targets for hackers - along with electronic health record systems at hospitals, which are often configured to provide easy access for harried clinicians.


"The goal of EHRs in a hospital setting is to help make clinical decision-making more efficient and effective, and provide access to clinicians who need this information quickly," she says. Also, role-based access controls, advanced authentication, and encryption aren't typically part of the equation for many of these systems, she says. "That technology is perceived to slow down access for clinicians, who'd rather err on the side of good clinical decisions," rather than worry about data breaches, she adds.

M&A Risks?

One reason why health insurers have proven to be prime targets for hackers, Keefe says, is that many of these companies have grown rapidly through mergers and acquisitions, leaving them with a patchwork of systems and security practices and "treasure troves" of data.


That's also true for many large integrated healthcare delivery systems, she adds. "There's been a lot of consolidation in the healthcare industry," she notes. For instance, Community Health System, a provider organization that last August revealed a hacker breach affecting 4.5 million individuals, has also grown in recent years through mergers and acquisitions, she says.


Meanwhile, some health insurers also boast about the tens of millions of enrollees they cover, which also catches the attention of cybercriminals, Keefe says. "It's like saying, 'come and get us'," she says. Data security needs to be "more front and center" at many healthcare organizations, she stresses.


While Blue Cross and Blue Shield affiliates, such as Anthem and Premera Blue Cross, are independent companies, they are linked together through the Blue Card program, in which these plans process each other's members' insurance claims, Keefe says.


"The Blue Cross Blue Shield [network] is simply so large that they are a 'rich' environment filled with some of the most valuable data when it comes to identity theft," says Brad Cyprus, chief of security and compliance at Netsurion, a provider of cloud-based services. "It is also possible that by being one of their affiliates, there is some common technology that has an issue that has not been identified or fixed.

"However, hackers are very much like sharks smelling blood. When one successful attack happens and sensitive data is exposed, every other hacker starts focusing on those systems in an effort to reap some rewards before things are fixed while potential vulnerabilities are still exposed. In BCBS's case, that leads to a perfect storm for continued attention from the hacker community."

Data Segmentation

In the CareFirst breach, it appears that segmentation of information helped minimize the amount of data the hackers were able to access. And that's an important lesson for others to learn, security specialists say.


"Segmentation of information is the name of the game in our modern threat landscape," says Marcin Kleczynski, CEO of Malwarebytes, a provider of anti-malware solutions. "Attackers are constantly increasing their ability to compromise secure networks, be it through new technologies or old- fashioned social engineering. To that end, treating a breach less like an 'it won't happen to me' scenario in favor of a stance that expects it can help those who are charged with securing the information make a more effective battle plan."


CareFirst, in a statement on its breach information website, says the attackers gained "limited, unauthorized access to a single CareFirst database." It notes: "Evidence suggests the attackers could have potentially acquired member-created user names created by individuals to access CareFirst's website, as well as members' names, birth dates, email addresses and subscriber identification number. However, CareFirst user names must be used in conjunction with a member-created password to gain access to underlying member data through CareFirst's website.


"The database in question did not include these passwords because they are fully encrypted and stored in a separate system as a safeguard against such attacks. The database accessed by attackers contained no member Social Security numbers, medical claims, employment, credit card or financial information."

Delayed Discovery

CareFirst said the intrusion occurred in June 2014, but wasn't discovered until April 2015 after the insurer commissioned forensics vendor Mandiant to do a security review of the health plan's systems. Keefe of Beazley notes, however, that delayed breach discovery is common.


"Security technology is trying valiantly to keep up with hackers. Malware has the ability to cover its tracks, and often morphs into something that's hard to detect," she says. Nonetheless, many healthcare sector entities, "need to re-order their priorities" and allocate more resources to breach prevention and detection, she adds.

Security and privacy expert Rebecca Herold, CEO of The Privacy Professor, notes: "I believe it is almost a certainty that many covered entities and business associates are hacked and don't know it. From what I've seen in the largest of hospital systems down to the one-doctor healthcare clinic, and in many healthcare insurance companies, there are often large numbers of PHI repositories that do not have access logs established."


Too many organizations have little to no network monitoring, a lack of comprehensive risk management practices, and too few security tools, including those for detecting security problems and logging access for everywhere PHI is stored, she says.


"Also, a lack of proper funding for security, and lack of ongoing training for information security staff," contribute to the problem, she notes. "Health insurance executives need to realize that is it significantly less expensive to invest more in information security than it is to continually clean up after privacy breaches; information security cost is a fraction of the costs of breaches."





Office for Civil Rights Launches Phase 2 HIPAA Audit Program with Pre-Audit Screening Surveys


Health Insurance Portability and Accountability Act of 1996 (HIPAA) covered entities have reported that the U.S. Department of Health and Human Services Office for Civil Rights (OCR) recently sent pre-audit screening surveys to a pool of covered entities that may be selected for a second phase of audits (Phase 2 Audits) of compliance with the HIPAA Privacy, Security and Breach Notification Standards, as required by the Health Information Technology for Economic and Clinical Health Act (HITECH Act). OCR had originally planned to issue these screening surveys in the summer of 2014.

Unlike the pilot audits conducted in 2011 and 2012 (Phase 1 Audits), which focused on covered entities, OCR is conducting Phase 2 Audits of both covered entities and business associates. The Phase 2 Audit program will focus on areas of greater risk to the security of protected health information (PHI) and on pervasive non-compliance based on OCR’s Phase 1 Audit findings and observations, rather than a comprehensive review of all of the HIPAA Standards. OCR also intends for the Phase 2 Audits to identify best practices and uncover risks and vulnerabilities that OCR has not identified through other enforcement activities. OCR has stated that it will use the Phase 2 Audit findings to identify technical assistance that it should develop for covered entities and business associates. In circumstances where an audit reveals a serious compliance concern, OCR may initiate a compliance review of the audited organization that could lead to civil money penalties.

The following sections describe the Phase 2 Audit program and identify steps that covered entities and business associates should take to prepare for Phase 2 Audits.

Selection of Phase 2 Audit Recipients


Based on prior statements from OCR about the Phase 2 Audits, the surveys recently issued to covered entities appear to indicate that OCR has randomly selected a pool of 550 to 800 covered entities through the National Provider Identifier database and America’s Health Insurance Plans’ databases of health plans and health care clearinghouses. The survey requests organization and contact information.  

OCR has said that based on the survey responses, it will select approximately 350 covered entities, including 232 health care providers, 109 health plans and 9 health care clearinghouses, for Phase 2 Audits. OCR will then notify and send data requests to the 350 selected covered entities. The data requests will ask the covered entities to identify and provide contact information for their business associates. OCR will select the business associates that will participate in the Phase 2 Audits from this pool. OCR had previously indicated that compliance audits of business associates would begin in 2015 and continue into 2016, but this timeframe will likely be pushed back based on the delay in the Phase 2 Audits of covered entities.

Audit Process


OCR will audit approximately 150 of the 350 selected covered entities and 50 of the selected business associates for compliance with the Security Standards, 100 covered entities for compliance with the Privacy Standards, and 100 covered entities for compliance with the Breach Notification Standards. 

Covered entities and business associates will have two weeks to respond to OCR’s audit request. The data requests will specify content and file organization, file names and any other document submission requirements. OCR will only consider current documentation that is submitted on time. OCR has indicated that auditors will not have an opportunity to contact the entity for clarifications or to request additional information, so it is critical that the documents accurately reflect the program. Failure to respond to a request could lead to a referral to the applicable OCR Regional Office for a compliance review. The Phase 2 Audits are expected to take place over three years.

OCR previously stated that the Phase 2 HIPAA Audits would be conducted as “desk audits” rather than onsite visits. In more recent statements, however, OCR has indicated that while most Phase 2 Audits will be desk audits, OCR will also conduct some onsite, comprehensive audits. OCR has said that it will make the Phase 2 Audit protocol available on its website so that organizations may use it for internal compliance assessments.

The Phase 2 Audits will target HIPAA Standards that were frequent sources of non-compliance in the Phase 1 Audits, including risk analysis and risk management, content and timeliness of breach notifications, notice of privacy practices, individual access, the Privacy Standards’ reasonable safeguards requirement, workforce member training, device and media controls, and transmission security. OCR projects that later Phase 2 Audits will focus on the Security Standards’ encryption and decryption requirements, facility access control, breach reports and complaints, and other areas identified by earlier Phase 2 Audits. Phase 2 Audits of business associates will focus on risk analysis, risk management and breach reporting to covered entities.

OCR will present the organization with a draft audit report to allow management to comment before the report is finalized. OCR will then take into account management’s response and issue a final report.

What Should You Do to Prepare for the Phase 2 Audits?


Covered entities and business associates should take the following steps to ensure that they are prepared for a potential Phase 2 Audit:

  • Confirm that the organization has recently completed a comprehensive assessment of potential security risks and vulnerabilities to the organization (Risk Assessment)

  • Confirm that all action items identified in the Risk Assessment have been completed or are on a reasonable timeline to completion

  • Ensure that the organization has a complete inventory of business associates and their contact information for purposes of the Phase 2 Audit data requests

  • If the organization has not implemented any of the Security Standards’ addressable implementation standards for any of its information systems, confirm that the organization has documented (1) why any such addressable implementation standard was not reasonable and appropriate, and (2) all alternative security measures that were implemented

  • Ensure that the organization has implemented a breach notification policy that accurately reflects the content and deadline requirements for breach notification under the Breach Notification Standards

  • For health care provider and health plan covered entities, ensure that the organization has a compliant Notice of Privacy Practices, not merely a website privacy notice

  • Ensure that the organization has reasonable and appropriate safeguards in place for PHI that exists in any form, including paper and verbal PHI

  • Confirm that workforce members have received training on the HIPAA Standards that are necessary or appropriate for workforce members to perform their job duties

  • Confirm that the organization maintains an inventory of information system assets, including mobile devices (even in a bring-your-own-device environment)

  • Confirm that all systems and software that transmit electronic PHI employ encryption technology, or that the organization has a documented risk analysis supporting the decision not to employ encryption (a minimal transmission-security check is sketched after this list)

  • Confirm that the organization has adopted a facility security plan for each physical location that stores or otherwise has access to PHI, in addition to a security policy that requires a physical security plan

  • Review the organization’s HIPAA security policies to identify any actions that have not been completed as required (physical security plan, disaster recovery plan, emergency access procedures, etc.)
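
A quick way to sanity-check the transmission-security item above is to confirm that the endpoints your systems use to send electronic PHI will only negotiate a modern, certificate-verified TLS connection. The short Python sketch below is illustrative only; the host name is a placeholder, and a real assessment would need to cover every interface that carries ePHI, not a single endpoint.

    # Illustrative sketch: confirm an endpoint used to transmit ePHI
    # negotiates TLS 1.2 or later with a valid certificate.
    # "ehr.example.org" is a placeholder host, not a real system.
    import socket
    import ssl

    def check_tls(host: str, port: int = 443) -> str:
        """Return the negotiated TLS version, or raise if the handshake fails."""
        context = ssl.create_default_context()             # verifies certificates by default
        context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse older protocol versions
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()

    if __name__ == "__main__":
        print(check_tls("ehr.example.org"))                # e.g. "TLSv1.3"

A failed handshake or an older protocol version is exactly the kind of finding to record and remediate as part of the risk analysis described above.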



Are wearables violating HIPAA?


With the development of wearable technologies such as the Nike Fuel Band, Fitbit, and Apple Watch, consumers suddenly have more options to monitor their fitness performance than ever before. These devices are also making inroads into medicine as physicians begin to experiment with using Google Glass to connect ER doctors to specialists in order to reduce patients’ wait times.


Whether it’s for the weight room or the emergency room, manufacturers and software developers are collaborating to draw health further into the digital realm.


And the way these devices capture data raises serious privacy and security concerns for individually-identifiable health information that must be addressed.


Real world privacy concerns


The central challenge devices such as Google Glass and Jawbone UP pose stems from the fact that they employ cloud-based data storage. By purchasing these products, customers agree to a company’s Terms of Service, and in some cases, these terms can be fairly permissive in what they allow companies to do with that data.


According to Google Glass’s current Terms of Sale, for instance, the product falls under the company’s general Terms of Service. Although these grant the user intellectual property rights over data they store on Google servers, the company can still reproduce, modify, publicly display, distribute, and generally use this data to promote and enhance existing products and create new ones. Thus, although users may not be relinquishing ownership of their IP rights, it is clear that they are giving up a substantial degree of control over their data.


Google’s shift to a unified privacy policy in March 2012 further bolstered its ability to improve services through the collection and analysis of customer data. This new policy enabled the company to consolidate data on individual users from across its product portfolio and create unique user profiles, giving Google a fuller picture of individuals’ preferences and activities.


All personal health data is not created equal


Not all personal data is equal in the eyes of the law, and that is the central issue when applying these practices to health information. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) permits the analysis and sharing of individually-identifiable health information when directly related to patient care, but it is far more restrictive than the consumer terms of service described above. The law permits health information to be used in assessments of physician and hospital performance, but allows patients to request that their data not be shared with third parties. HIPAA also requires consent before a healthcare provider uses health information for advertising purposes.

In a medical context, that means mining individually-identifiable health information could constitute a breach of patient privacy if the analysis falls outside the scope of HIPAA’s permitted uses. It is not clear whether using patient data to improve products, as opposed to health outcomes, is allowed under the law. And an even more concerning scenario could take shape if health information were combined with other personal, non-medical data for the purposes of user profiling.


HIPAA and wearables: What’s next?


If wearable device manufacturers want to store health information in the cloud, they must bring their Terms of Service and privacy policies in line with HIPAA privacy and security requirements.

The vendors making wearables should take several steps to achieve this goal.

  • Analyzing health data: Where privacy is concerned, companies must only analyze health data within the confines of what is permissible under HIPAA. If companies want to mine customer data for other purposes, they should keep health information separate from non-medical data.
  • Sharing health data: Companies would also need to give patients and consumers greater visibility into how their data is used and who has access to it. HIPAA would also require obtaining a patient’s consent before using their health information in any part of the advertising process.
  • Securing health data: When it comes to HIPAA-mandated security controls, companies should also protect health information with baseline access control and encryption measures, in addition to maintaining an “audit trail” of who has edited a patient’s information and when (a minimal audit-trail sketch follows this list).
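
To make the last item concrete, the sketch below shows one minimal way an append-only audit trail could record who changed a health record and when. The field names and file format are assumptions made for illustration, not a HIPAA-mandated schema; a production system would also need access controls, encryption at rest, and tamper-evident storage.

    # Illustrative sketch of an append-only audit trail: one JSON entry
    # per change, recording who edited which record, what they did, and when.
    import json
    from datetime import datetime, timezone

    class AuditTrail:
        def __init__(self, path: str):
            self.path = path

        def record(self, user_id: str, patient_id: str, action: str) -> None:
            entry = {
                "user": user_id,
                "patient": patient_id,
                "action": action,                          # e.g. "update:allergies"
                "at": datetime.now(timezone.utc).isoformat(),
            }
            with open(self.path, "a", encoding="utf-8") as log:
                log.write(json.dumps(entry) + "\n")        # append-only, one line per event

    # Example: note an edit to a (hypothetical) patient record.
    trail = AuditTrail("phi_audit.log")
    trail.record(user_id="dr.smith", patient_id="12345", action="update:allergies")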


These measures would make the manufacturers of wearable health tech more accountable to the patients and consumers their products serve. It follows that any consumers, doctors and healthcare organizations using wearables in any capacity should seek out vendors willing to adhere to those tenets moving forward.


HIPAA rules apply to most workplace wellness programs


Wellness programs are a great way for employers to provide guidance on how employees can improve their health through fitness, diet and other means. But employers often forget that a wellness program may be an extension of the company’s health care plan. As such, the Health Insurance Portability and Accountability Act (HIPAA) rules apply to these wellness programs just as they do to the health plan itself.

The U.S. Department of Health and Human Services (HHS) recently released a list of questions and answers to remind employers of their HIPAA obligations with regard to wellness programs.


In the release, titled “HIPAA Privacy and Security and Workplace Wellness Programs,” HHS clarifies which wellness programs are subject to HIPAA rules: namely, any workplace wellness program a company offers as part of a group health plan for its employees. “Where a workplace wellness program is offered as part of a group health plan, the individually identifiable health information collected from or created about participants in the wellness program is [protected health information (PHI)] and protected by the HIPAA Rules,” HHS says.

The department also said that workplace wellness programs which do not provide any health benefits and are not connected to the health plan are not subject to HIPAA. However, it warns that “other Federal or state laws may apply and regulate the collection and/or use of the information” collected through that program.


The Q&A also addresses the HIPAA protections that apply when a workplace wellness program is offered through the group health plan, and in particular what access the plan sponsor (the employer) has to individually identifiable health information about participants in that program. HHS clarifies when an employer may have access to PHI and what it may and may not do with that information.


Because this is the time of year when employers begin developing their wellness programs for the following year, the Q&As serve as an excellent reminder that HIPAA rules apply to most wellness programs. As companies begin their new programs, it’s essential they keep this in mind.

