HIPAA Compliance for Medical Practices
HIPAA Compliance and HIPAA Risk management Articles, Tips and Updates for Medical Practices and Physicians

Unencrypted Devices Still a Breach Headache

While hacker attacks are grabbing most of the health data breach headlines so far in 2015, a far more ordinary culprit - the loss or theft of unencrypted computing devices - is still putting patient data at risk.

Incidents involving unencrypted laptops, storage media and other computing devices are still popping up on the Department of Health and Human Services' "wall of shame," which lists health data breaches affecting 500 or more individuals. Among the largest of the most recent incidents is a breach at the Indiana State Medical Association.

That breach involved the theft of a laptop computer and two hard drives from a car parked for 2-1/2 hours in an Indianapolis lot, according to local news website, The Star Press. Information on more than 38,000 individuals, including ISMA employees, as well as physicians, their families and staff, was contained in the ISMA group health and life insurance databases on those devices.

The incident occurred on Feb. 3 while ISMA's IT administrator was transporting the hard drives to an offsite storage location as part of ISMA's disaster recovery plan, according to The Star Press. An ISMA spokeswoman declined Information Security Media Group's request to comment on the breach, citing that there are "ongoing civil and criminal investigations under way."

A breach notification letter sent by ISMA indicates that compromised data included name, address, date of birth, health plan number, and in some cases, Social Security number, medical information and email address. ISMA is offering those affected one year's worth of free credit monitoring.

Common Culprit

As of Feb. 27, 51 percent of major health data breaches occurring since 2009 involved a theft while 9 percent involved a loss, according to data presented by an Office for Civil Rights official during a session at the recent HIMSS 2015 Conference in Chicago. Of all major breaches, laptop devices were involved in 21 percent of the incidents, portable electronic devices in 11 percent and desktop computers in 12 percent, according to the OCR data.

Two of the five largest breaches to date on the Wall of Shame involved stolen unencrypted computing devices:

  • A 2011 breach involving the theft of unencrypted backup computer tapes containing information on about 4.9 million individuals from the car of a Science Applications International Corp. employee who was transporting them between federal facilities on behalf of military health program TRICARE.
  • The 2013 theft of four unencrypted desktop computers from an office of Advocate Health and Hospital Corp. in Chicago, which exposed information on about 4 million patients.

Many smaller breaches, each affecting fewer than 500 individuals, also involve unencrypted computing devices, according to OCR.

Safe Harbor

The theft or loss of an encrypted computing device is not a reportable breach under HIPAA. That's why security experts express frustration that the loss and theft of unencrypted devices remains such a common breach cause.

"It is unfortunate that [encryption] is considered an 'addressable' requirement under HIPAA, as many people don't realize that this does not mean optional," says Dan Berger, CEO of security risk assessment firm Redspin, which was recently acquired by Auxilio Inc.

Under HIPAA, after a risk assessment, if an entity has determined that encryption is a reasonable and appropriate safeguard in its risk management of the confidentiality, integrity and availability of e-PHI, it must implement the technology. However, if the entity decides that encryption is not reasonable and appropriate, the organization must document that determination and implement an equivalent alternative measure, according to HHS.
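The decision flow HHS describes for an "addressable" specification can be paraphrased as a simple conditional. The sketch below is illustrative only, not compliance tooling; the function and argument names are hypothetical.

```python
def addressable_encryption_decision(encryption_is_reasonable,
                                    documented_rationale=None,
                                    alternative_measure=None):
    """Paraphrase of the HHS decision flow for the 'addressable'
    encryption specification: implement it, or document why not and
    adopt an equivalent alternative. It is never simply optional."""
    if encryption_is_reasonable:
        return "implement encryption"
    if documented_rationale and alternative_measure:
        # Declining encryption is only permitted alongside a documented
        # determination and an equivalent alternative safeguard.
        return "document the decision; implement " + alternative_measure
    raise ValueError("'addressable' does not mean optional")

print(addressable_encryption_decision(True))
# implement encryption
print(addressable_encryption_decision(False, "legacy device, no disk access",
                                      "physical access controls"))
# document the decision; implement physical access controls
```

Note that there is no branch where an entity simply skips encryption with nothing in its place, which is exactly the misunderstanding Berger describes.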

Attorney David Holtzman, vice president of compliance at the security consulting firm CynergisTek, says he expects OCR to soon announce a resolution agreement with a healthcare provider that suffered several breaches because it failed to manage the employee mobile devices on which electronic protected health information was stored or accessed.

"Install encryption on laptops that handle PHI," he advises. "Don't store patient information on a smartphone or other mobile device."

Concerns about the cost and complexity of encryption are unfounded, Berger contends, because encryption has become more affordable and the process has been made easier.

"There have been arguments that encrypting backup media sent offsite is technically problematic," says privacy and security expert Kate Borten, founder of the consultancy The Marblehead Group. "While it's true that encryption can add overhead, this has become a weaker argument in recent years."

But Borten acknowledges that organizations must look beyond encryption when safeguarding patient information. "Encryption is not a silver bullet," she notes. "For example, if a user leaves a laptop open, the otherwise-encrypted hard drive is accessible. But for portable devices and non-paper media, there is no equivalent security measure."

Borten notes that the most common reason cited for a lack of device encryption is a lack of adequate support and resources for overall security initiatives. "While all an organization's laptops might be encrypted - the easy part - there are mobile devices running on multiple platforms and personally owned devices and media that are harder to control," she notes. "It takes management commitment as well as human and technical resources to identify all those devices and bring them under the control of IT."

Room for Improvement

The 2015 Healthcare Information Security Today survey of security and privacy leaders at 200 healthcare entities found that encryption is being applied by only 56 percent of organizations for mobile devices. The survey, conducted by Information Security Media Group in December 2014 and January 2015, found that when it comes to BYOD, about half of organizations require encryption of personally owned devices; nearly half prohibit the storage of PHI on these devices. Only 17 percent of organizations say they don't allow BYOD.

Complete results of the survey will be available soon, as well as a webinar that analyzes the findings.

"Personally owned devices are definitely the Achilles heel," Berger says. "Healthcare organizations have to address BYOD head-on. It is a complicated and thorny issue, but 'looking the other way' is not an acceptable approach. We recommend clear decisions regarding acceptable use, reflected in policy and backed up by enforcement," he says.

"We have also seen [breaches] happen when an organization makes the decision to encrypt but then has a long roll-out plan and the lost/stolen devices had yet to be encrypted," he adds.

Steps to Take

To help reduce the risk of breaches involving mobile computing devices, Berger says organizations should make sure they have a mobile device use policy that's "clear, comprehensive and well-understood. We suggest calling it out as a separate policy that must be signed by employees. Back up policy with ongoing security awareness training and strong enforcement."

In addition, OCR advises covered entities and business associates to make use of guidance it has released with its sister HHS agency, the Office of the National Coordinator for Health IT. OCR also offers free online training on mobile device security.

Misplaced USB drive leads to county health department breach

The Denton County (Texas) Health Department began notifying tuberculosis (TB) clinic patients of a breach that occurred in February when a health department employee left a USB drive containing PHI at a printing store, according to a press release.

The USB drive contained the names, dates of birth, addresses, and test results of 874 patients seen at a TB clinic associated with the county health department. The employee left the USB drive unattended at the printing store for approximately one hour, according to the press release.

The department launched an internal investigation after the employee voluntarily reported the potential breach. The press release states that the department does not believe the records were accessed during the time the USB drive was left unattended. However, it is notifying affected patients by mail and recommending that they obtain a credit report and monitor financial statements.

Another Day…Another Healthcare Breach

We all know about the Anthem Healthcare breach of millions of patient records. That’s been followed by an announcement by Premera Blue Cross that they’ve had 11 million records breached as well. Plus, I’m sure we’re just at the start of healthcare data breaches that are going to occur.

What’s astonishing to me is that many seem to be playing this up as a new thing. I remember about 15 years ago when I was in college and a guy I knew told stories about hacking through an entire hospital system. In fact, he casually made the comment, “You don’t want to hack the government cause they’ll come after you, but hospitals and universities you can easily hack and nothing will happen.”

This story illustrates two points. First, breaches of healthcare organizations have been happening for a long time. This isn’t something new. Second, we’re just now starting to put in place the technology that will detect breaches. That’s a good thing. In fact, in some ways we should applaud the fact that we actually know these breaches are happening now. I’m certain that many of these breaches happened before and we just never knew about it because you don’t have to report a breach you don’t know about.

Now that we know about these breaches, will that spur action? I think it will in some organizations. It certainly won’t be a bad thing for security and privacy. Unless we’ve become so callous to the breaches (like the title of this post suggests) that we stop caring about breaches because “they’re bound to happen.”

I hope that this post doesn't encourage apathy about healthcare organizations' security and privacy. I assure you that no hospital wants to go through a breach of healthcare data. While it's impossible to guarantee a breach won't happen, a sincere effort to create a culture of compliance in your hospital can go a long way toward preventing many breaches.

As my college hacker friend told me many years ago, “You can never make something 100% secure, but you can make it hard enough for someone to hack that it’s not worth their time.” If it’s not worth their time, they’ll usually move on to someone easier.

Two More Health Insurers Report Data Breach

Today, medical insurance providers LifeWise and Premera Blue Cross each reported, separately, that they had been the target of sophisticated cyberattacks that began May 5, 2014. Premera will be notifying approximately 11 million affected customers; LifeWise, 250,000. Neither organization has evidence that any customer data has been used fraudulently, nor has either confirmed that patient data was actually compromised.

They say attackers "may have gained unauthorized access to" members' information, including name, date of birth, Social Security number, mailing address, email address, telephone number, member identification number, bank account information, and claims information, including clinical information.

Individuals who do not have medical insurance through these companies, but do other business with them, might have had their email addresses, banking data, or Social Security numbers exposed.  

These attacks, when combined with the Anthem Healthcare breach reported last month and the Community Health Systems breach in the summer, clearly indicate that health insurance providers have become a popular new target -- and Chinese cyberespionage groups are being implicated.

Anthem first detected suspicious activity Jan. 27 and confirmed on Jan. 29 that an attack had occurred, over the course of several weeks in December 2014.

LifeWise and Premera also say they discovered their breaches Jan. 29 -- possibly as a result of Anthem sharing information about their own intrusion with HITRUST's Cyber Threat Intelligence and Incident Coordination Center. However, after investigations by Mandiant -- the same organization conducting the investigation at Anthem -- both Premera and LifeWise report that their first intrusions occurred several months earlier, in May.

Both Premera and LifeWise are providing two years of free credit monitoring and identity theft protection to affected individuals. More information is available at premeraupdate.com and lifewiseupdate.com.

US cops charge suspects in 'world's largest data breach'

US law enforcement has charged three men believed to have been behind "the largest data breach in US history".

The US Department of Justice (DoJ) reported charging Vietnamese citizens Viet Quoc Nguyen, 28, and Giang Hoang Vu, 25, and Canadian citizen David-Manuel Santos Da Silva, 33.

The charges allege that Nguyen hacked into and stole confidential information from at least eight US email service providers between February 2009 and June 2012.

The information included over one billion email addresses from the companies' marketing departments, and was listed by the DoJ during a Congressional inquiry in June 2011 as the largest data breach in US history.

Vu reportedly helped Nguyen use the stolen information to send "tens of millions" of malicious spam messages.

Da Silva, who was also indicted by a federal grand jury on 4 March 2015 for conspiracy to commit money laundering, reportedly helped Nguyen and Vu to monetise the scheme and hide incoming revenue.

Vu was arrested by Dutch law enforcement in 2012 and extradited to the US in March 2014. He pleaded guilty to conspiracy to commit computer fraud in February and is scheduled to be sentenced on 21 April.

Da Silva was arrested at Fort Lauderdale-Hollywood International Airport on 12 February, and is scheduled to be arraigned on 9 March in Atlanta. Nguyen remains at large.

US assistant attorney general Leslie R. Caldwell listed the charges as a major step in bringing "international" cyber criminals to justice.

"These men, operating from Vietnam, the Netherlands, and Canada, are accused of carrying out the largest data breach of names and email addresses in the history of the internet," said Caldwell.

"The defendants allegedly made millions of dollars by stealing over a billion email addresses from email service providers. This case again demonstrates the resolve of the DoJ to bring accused cyber hackers from overseas to face justice in the US."

Reginald Moore, special agent in charge of the US Secret Service Atlanta Field Office, explained that the charges prove the need for increased collaboration between departments when combating cybercrime.

"Our success in this case and other similar investigations is a result of our close work with our law enforcement partners," he said.

"The Secret Service worked closely with the DoJ and the FBI to share information and resources that ultimately brought these cyber criminals to justice.

"This case demonstrates that there is no such thing as anonymity for those engaging in data theft and fraudulent schemes."

The charges have been welcomed by members of the security community. Imperva CTO Amichai Shulman said he expects the move to set off alarm bells in cybercrime circles.

"I think the most important lesson here is that law enforcement agencies are able to point out specific individuals involved in specific acts of cybercrime even when they are in distant locations around the globe," he said.

"My belief is that, if enough resources are put up against small breaches as well as large breaches in a ‘zero tolerance' policy against cyber violation, we'd see the number of attacks decrease significantly over a short period of time."

Mark James, security specialist at ESET, hopes to see similar operations in the near future.

"Hopefully this will turn out to be a success and will go on to many more cases showing that the fight against cybercrime is not always a losing battle," he said.

The latest developments come during a global push by law enforcement to combat cyber crime.

DOJ Charges Suspect in Largest Known Data Breach

Justice may not always be swift, but the U.S. government has proven itself tenacious in tracking down alleged cyber-criminals to the ends of the Earth. The U.S. Department of Justice (DOJ) announced Feb. 17 that Russian national Vladimir Drinkman appeared in a federal court in New Jersey in connection with cyber-attacks that occurred between 2007 and 2009 and affected up to 160 million credit cards.

Drinkman has pleaded not guilty and is being detained without bail ahead of a trial scheduled for April 27, 2015. Before being extradited to the United States to stand trial, Drinkman had been in detention by authorities in the Netherlands since he was first arrested June 28, 2012.

According to the indictment, Drinkman did not act alone in his activities and there were other co-conspirators, including Alexandr Kalinin of St. Petersburg, Russia; Roman Kotov, of Moscow; Mikhail Rytikov of Odessa, Ukraine; and Dmitriy Smilianets of Moscow. The Justice Department noted that Smilianets is currently in U.S. federal custody, while Kalinin, Kotov and Rytikov remain at large.

The Justice Department previously identified Drinkman and Kalinin as "Hacker 1" and "Hacker 2" in a 2009 indictment in which Albert Gonzalez was also charged. That indictment involved the corporate data breach that impacted Heartland Payment Systems, Hannaford Brothers and 7-Eleven.

All told, the Justice Department claims that Drinkman and his co-conspirators acquired at least 160 million credit card numbers by way of various hacking activities. Those activities include SQL injection attacks against the victims, whereby the attackers were able to inject malware.

"This malware created a back door, leaving the system vulnerable and helping the defendants maintain access to the network," the U.S. Department of Justice noted in a statement. "In some cases, the defendants lost access to the system due to companies' security efforts, but were allegedly able to regain access through persistent attacks."
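The SQL injection technique described above can be shown in miniature. This is an illustrative sketch using Python's built-in sqlite3 module with a hypothetical patients table; real attacks target more complex applications, but the core flaw is the same string splicing.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
conn.execute("INSERT INTO patients VALUES (1, 'Alice')")

def find_patient_unsafe(name):
    # VULNERABLE: attacker-controlled input is spliced into the SQL text,
    # so a crafted name like "' OR '1'='1" changes the query's logic.
    query = "SELECT id, name FROM patients WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_patient_safe(name):
    # SAFE: the ? placeholder sends the value separately from the SQL,
    # so it can never be interpreted as query syntax.
    return conn.execute(
        "SELECT id, name FROM patients WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_patient_unsafe(payload))  # returns every row: injection succeeded
print(find_patient_safe(payload))    # returns no rows: payload treated as data
```

Parameterized queries like the second function are the standard defense; they would have blocked the initial foothold described in the indictment, though not the later persistence techniques.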

Though Drinkman was first identified back in 2009 as Hacker 1 in the Gonzalez indictment, it took until 2015 for the U.S. government to bring him before a federal court. That six-year gap is not uncommon, said Phil Smith, senior vice president, Government Solutions and Special Investigations, at security specialist Trustwave. The extradition process is lengthy and can be cumbersome, he added.

"Criminals will often flee to countries where extradition to the U.S. or NATO countries is lengthy or can be subverted," Smith told eWEEK. "We have even seen cases where the U.S. has pending criminal charges and requested to extradite individuals only to see them tried, convicted and jailed in a foreign country and then extradited back to their home countries to serve out their sentences."

Smith added that, in some cases he is aware of, once criminals have been returned to their home countries, the charges were thrown out and the criminals have been released. "It is very frustrating. So when you are able to get one of these individuals extradited to the U.S., it's a great victory and I applaud the efforts of the prosecutors and agents," he said.

5 scary ways your business is vulnerable to a cyber security breach

The Internet has changed the way that you do business.

No matter what industry you are in, you value what your cyber network does for you in terms of connecting with clients and staying efficient.

But, with advances in cyber technologies come more cybercriminals. No matter how sophisticated cyber security technologies and firewalls get, it seems that there is still a more sophisticated hacker capable of breaching your systems and stealing sensitive data.

Believe it or not, three-quarters of businesses surveyed have reported that they have experienced a security breach in the last 12 months.

As you can see, you are more vulnerable than you might think, and here’s how:

You Fail to Invest in Encryption

Hackers attempt to break through firewalls in an effort to steal information. From bank account and routing numbers to Social Security and credit card numbers, businesses have a lot of sensitive data to protect.

When these attackers steal information, they can affect your reputation and cost you money. If you have failed to encrypt your data with full-disk encryption tools, your data may be vulnerable.

You Are Not Wi-Fi Protected

Did you know that it is much easier for cyber attackers to gain access to a network over Wi-Fi?

Most security experts recommend that businesses connect to the Internet with a wired network, but if you do have a Wi-Fi network, then you need to have a complex password complete with special characters, numbers, and capital letters.
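The complexity rules listed above are easy to check programmatically. This is a minimal sketch; the minimum length of 12 is an assumption, so substitute whatever your own policy requires.

```python
import re

def meets_complexity(password, min_length=12):
    """Check a passphrase against the rules described above: minimum
    length plus at least one capital letter, one number, and one
    special character. (min_length=12 is an assumed policy value.)"""
    checks = [
        len(password) >= min_length,
        re.search(r"[A-Z]", password) is not None,       # capital letter
        re.search(r"[0-9]", password) is not None,       # number
        re.search(r"[^A-Za-z0-9]", password) is not None # special character
    ]
    return all(checks)

print(meets_complexity("wifi123"))            # False: too short, no capital/special
print(meets_complexity("Guest-Wifi-2015!x"))  # True
```

Length matters more than character variety alone, which is why the check enforces both rather than treating a short password with symbols as acceptable.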

Leaving Computer and Mobile Devices Vulnerable

Not all cyber attacks involve hacking into the network. In fact, a large portion of businesses targeted by "cyber" criminals are those that have had their computing devices stolen.

If your business laptops, cell phones, tablets, and other devices are stolen, it is easy for the thief to gain access to your network and find important personal and account information on you and your clients.

Having special physical locks to secure devices can deter burglars looking for a quick score.

Failure to Focus on Mobile Security

The cyber infrastructure is turning mobile, and many companies have not developed a strategic plan to keep up with the growing popularity of mobile computing.

If you use smartphones for conferences or tablet devices for estimates, your network could be at risk of an attack.

Mobile threats are becoming so common that accredited institutions like Norwich University have developed an online master's in information security that trains graduates to stay ahead of these damaging threats. Mobile security needs to move to the forefront of your security planning.

Employees Are Not Properly Trained

You do not have to be a large corporation to implement employee training programs that prepare everyone to follow good security practices.

You should teach your employees how to make strong passwords, how often to change passwords, how to spot a threat and how to avoid sites that make the company network vulnerable.

By doing this, you can prevent potential attacks.

There will always be the threat of cyber attackers as long as the Internet is around.

While the threat is there, there are also ways to make your business more secure and less vulnerable. Brush up on security and be sure your company is equipped to survive.


Is Healthcare particularly vulnerable to Hacking?

There are a lot of people saying that; most of them stand to profit if you believe them (including me, in fact).  The Anthem breach gives an opportunity for a bunch of news articles on just this point.  Let's consider this for a moment.

Much hacking and phishing is aimed at access to quick-value money: credit card numbers that can be used right away (with the victim perhaps not knowing about the use until the bill comes, or perhaps not even noticing it when the bill comes), actual bank account or financial account data so current funds can be withdrawn, phony checks written, etc.  In this type of hacking, the reward comes quickly to the hacker, but might be small change and is usually not a long-term proposition.

Some hacking is designed to allow for real identity theft: the hacker acquires a social security number and other information, impersonates the individual to obtain credit cards, car loans, even house loans, runs up big debts, and when the credit card company or bank tries to collect, the impostor is gone with the loot and the victim is left to try to prove that it wasn't him that got/used the credit card, loan, etc.  The reward takes longer, but can be much bigger than snatching a credit card number.

With regard to both of these types of hacks, the victim, the bank or credit card company, and the vendor at which the stolen credit card is used are all incentivized to prevent the hack, since all of them stand to suffer substantial harm: the victim's credit might be ruined (or he might pay for something he didn't get), and the bank, the credit card company, or the later vendor might be left with the bill.

Health records sometimes contain credit card numbers, but often don't, making them not particularly useful for the first type of hack.  On the other hand, health records usually contain social security numbers and other demographic data that can be useful for the second type of hack.  Thus, medical records might be useful for traditional identity theft schemes.

The much bigger risk, and what medical records are particularly well suited for, is medical identity theft.  This type of hack targets patients with good insurance, and allows someone to impersonate the insured and receive the insured's health benefits.  The impostor gets free or reduced-cost healthcare, but unlike most other hacks, the "victim" (the person whose data was stolen) doesn't necessarily suffer, or at least doesn't suffer immediately; in fact, the victim might benefit, since the impostor might actually pay part of the victim's annual deductible.

Additionally, the person whose data was stolen is not in a very good position to know it was stolen, unless he regularly checks his EOBs (frankly, even if he scrupulously checks his EOBs, they can be hard enough to understand that the medical identity theft might not even be noticed).  Rather, the immediate victim is the insurer, who pays for care for someone who did not buy insurance.  And if the insurer discovers the identity theft, the care provider becomes the victim, since the insurer may try to recover the funds paid to the provider for the impostor's care.

Unlike a stolen credit card number, which can be used to purchase almost anything (including cash cards), a stolen medical identity is not as easy to immediately monetize.  However, the lower level of vigilance by the potential victim makes medical identity theft easier to pull off.

More importantly, however, the risks of medical identity theft far outweigh the risk of credit card theft or regular identity theft.  An impostor who receives care while posing as the insured will leave behind a medical record that might be relied upon by some future healthcare provider.  Perhaps the impostor is not allergic to penicillin, but the insured is; the impostor receives care at a hospital and the medical record says the patient may have penicillin.  When the real insured shows up, tragedy might occur.  Thus, while regular identity theft might cause financial ruin to its victims, medical identity theft can kill.

Does the Anthem hack indicate that an epidemic of medical identity theft is on its way?  Most criminals are looking for quick cash, and medical identity theft doesn't offer as quick a reward as access to a bank account or credit card number.  However, given that there is profit to be made in medical identity theft, and the risks are much greater, healthcare providers, insurers, and patients should all be on high alert for signs of it, and be prepared to quickly respond.

Healthcare Mobile Apps, the Cloud, and HIPAA Compliance

Google Fit, Apple Health Kit, and even the Affordable Care Act have companies scrambling to build healthcare-focused mobile apps and/or upgrade existing medical devices. However, the process of bringing a new product to market in the healthcare industry brings about a whole other set of challenges. Not only do you have to worry about a product’s design and functionality, but now there’s the issue of HIPAA compliance and whether your product meets the criteria for FDA regulation. If you’re interested in building a healthcare-focused mobile app or medical device, don’t let these things deter you from doing so. Instead, let’s go over a few things you’ll need to be aware of before you jump in with both feet.
What is HIPAA?

The Health Insurance Portability and Accountability Act, also known as HIPAA, was first signed into law in 1996. HIPAA was written with the intent to protect individuals from having their healthcare data used or disclosed to people or agencies that have no reason to see it. It has two basic goals:

1.) Standardize the electronic exchange of data between health care organizations, providers, and clearinghouses.
2.) Protect the security and confidentiality of protected health information.

There are four rules of HIPAA, but today we'll focus on the HIPAA Security Rule.
What is PHI?

Protected Health Information (PHI) includes medical records, billing information, phone records, email communication with medical professionals, and anything else related to the diagnosis and treatment of an individual. Examples of non-PHI include steps on your pedometer, calories burned, or medical data without personally identifiable user information (PII).
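The PHI/non-PHI distinction above comes down to whether health data is tied to an identifiable person. The sketch below uses hypothetical field names to illustrate that idea only; actual HIPAA Safe Harbor de-identification requires removing 18 specific categories of identifiers, not just the handful shown here.

```python
# Hypothetical field names for illustration. Stripping direct identifiers
# is what separates PHI from the de-identified fitness data mentioned above.
PII_FIELDS = {"name", "date_of_birth", "address", "phone", "email", "ssn"}

def strip_identifiers(record):
    """Return a copy of a record with direct identifiers removed,
    keeping only non-identifying measurements."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

reading = {
    "name": "Jane Doe",
    "date_of_birth": "1970-01-01",
    "steps": 8421,
    "calories_burned": 310,
}
print(strip_identifiers(reading))  # {'steps': 8421, 'calories_burned': 310}
```

Pedometer steps alone are not PHI, but the same steps attached to a name and date of birth are, which is why the redacted output falls outside HIPAA while the input record does not.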

When building a healthcare app or medical device with the intent to collect, store, and share PHI with doctors and hospitals, it is absolutely mandatory to make sure you're HIPAA-compliant (or else you'll face some hefty fines). Additionally, if you're planning on storing data in the cloud, you must take appropriate measures to ensure you're properly securing the data and working with a HIPAA-compliant cloud storage service, too.

Here are some steps you’ll need to take:
Determine if your mobile app or medical device must be HIPAA-compliant.

Are you collecting, sharing, or storing personally identifiable health data with anyone who provides treatment, payment and operations in healthcare (aka a covered entity)? If yes, then you must be HIPAA-compliant.
Determine if your mobile app or medical device must be FDA-regulated.

The U.S. Food and Drug Administration (FDA) regulates medical devices to ensure their safety and effectiveness. If you plan to market your product as a medical device, then it may be subject to the provisions of the Federal Food, Drug, and Cosmetic (FD&C) Act. Find out if your product meets the definition of a medical device as defined by section 201(h) (or a radiation-emitting product as defined in Section 531) on the FDA website. (Visit Is This Product a Medical Device? for more information.) You can also contact the FDA directly if you are unsure whether your mobile app is considered a "Mobile Medical App" and will need to be FDA-regulated. (See Mobile Medical Applications.)
Work with a HIPAA-compliant cloud storage service provider.

Storing data in the cloud is appealing to the healthcare industry because of the sheer amount of data that needs to be stored, kept easily accessible, and yet remain secure. The cloud allows individuals and businesses to store large amounts of information in massive data centers around the globe, rather than on internal servers and software. That data can be accessed from anywhere, anytime. Depending on the amount of data (which in healthcare can be A LOT), it can be more cost-effective to store data in the cloud once you account for the costs of hardware, maintenance, staff, and energy when storing locally.

That being said, you need to make sure you’re working with a HIPAA-compliant cloud storage service provider, like Amazon Web Services or Google Apps, though there are several others you can consider.
Get a signed Business Associate Agreement.

Just because you’re working with a HIPAA-compliant cloud storage service provider doesn’t mean you’re covered. Any vendor or subcontractor who has access to PHI is considered a Business Associate, and therefore must sign a Business Associate Agreement. That includes your cloud storage service provider.
Secure sensitive data.

Developers should take appropriate safeguards to ensure that PHI is secure and cannot be accessed by unauthorized individuals. People lose their smartphones and iPads or don’t enable passcodes at all, so it’s even more important to make sure the app or medical device is HIPAA-compliant. Things like data encryption, unique user authentication, strong passwords, and mobile wipe options are just a few requirements. See InformationWeek’s article about developers and HIPAA compliance for additional information.
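
One of those requirements, unique user authentication with strong passwords, can be sketched with Python’s standard library. This is an illustrative example, not a prescribed HIPAA implementation; the function names and the PBKDF2 iteration count are assumptions.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # an assumed work factor for PBKDF2-SHA256

def hash_password(password):
    """Never store the password itself; store a random salt and a slow hash."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```

Even if a device is lost, an attacker who dumps the credential store sees only salts and digests, not passwords.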

Finally, there is no official certification process to ensure that you’re in compliance with HIPAA’s Security Rule. The U.S. Department of Health and Human Services website states:

“The purpose of the Security Rule is to adopt national standards for safeguards to protect the confidentiality, integrity, and availability of electronic protected health information (e-PHI) that is collected, maintained, used or transmitted by a covered entity. Compliance is different for each organization and no single strategy will serve all covered entities.” (HHS.gov)

That means that it is up to the organization to implement its own strategy and follow the requirements, or else face those hefty fines.

So that’s an overview of HIPAA compliance. Have you gone through this process? What obstacles did you face? Are you interested in building a mobile app or medical device but concerned about the regulations? Leave a comment below, or send us an email with your questions.

How responsible are employees for data breaches and how do you stop them?


Data breaches have very quickly climbed the information security agenda and that includes the data breach threat posed by employees and IT professionals.

Now a new report says the insider problem is far worse than we had previously imagined. The Verizon Data Breach investigations report claims that 14% of breaches are due to insiders and that’s not counting the further 12% of breaches that come from IT itself.

Examining the motives of employees with malicious intent, the Verizon report identified two main reasons insiders choose to cause so much trouble:

  1. They are looking for financial gain, perhaps via selling confidential data; or
  2. It’s an act of revenge by disgruntled workers or angry ex-employees who still have network privileges.

On the other hand, CompTIA, an association representing the interests of IT resellers and managed service providers, has a far different point of view. It says more than half of all breaches – some 52% – are due to human error or malice, and the rest arise from technology mistakes. Research from the SANS Institute reaches the same conclusion – employee negligence is a huge source of data breaches. Social engineering is one such element, so this once again shows the importance of training employees in basic IT security.

According to CompTIA, technical solutions are not enough. IT vigilance is always necessary, as too many organisations don’t even know there is an insider threat. Resigning yourself to the idea that human error is a problem with no solution is neglectful, especially when it accounts for such a high percentage of breaches. Ultimately, well-trained employees are the strongest security layer. Of course, it is just as important to make sure all updates and patches are installed, firewalls are turned on and anti-malware is up to date.

Organisations also need to consider adding tools that can spot and stop data leakage amongst other breaches. Email security too is a top measure to take as many breaches and leaks come through or from the employee’s inbox.

What precautions can you take?

But what should an organisation do when users, whose roles require access to sensitive data, misuse that access? What precautions can they take to reduce both the risk of this happening, and the damage that can result from insider activity?

There is no single answer to these questions, and there is no silver bullet that can solve the problem. A layered approach that includes policy, procedure and technical solutions is the right approach to take. GFI Software has identified 10 precautions in particular that organisations should consider.

1. Background checks

Background checks should be carried out on every employee joining the organisation, even more so if those employees will have access to privileged data. While not foolproof (Edward Snowden had security clearance) they can help to identify potential employees who may have a criminal record or had financial problems in the past. They may also uncover some details of their employment history that bear closer inspection and further checks.

2. Acceptable Use

Acceptable Use Policies (AUP) do more than simply define what users should and should not do on the Internet. They also define what is acceptable and unacceptable when using customer and business proprietary data. While an AUP will not stop those with clear intent, it will warn employees that there are consequences if they are caught, including disciplinary action and possibly dismissal.

3. Least Privilege

The principle of least privilege states that users should only be granted the minimum amount of access necessary to complete their jobs. This should include both administrative privileges and access to data. By limiting access, the amount of damage an insider can cause is limited.
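
In application code, least privilege often shows up as a deny-by-default permission check. The roles and permissions below are hypothetical, for illustration only; a real system would load them from a centrally managed policy store:

```python
# Hypothetical role-to-permission map: each role gets only what its job needs.
ROLE_PERMISSIONS = {
    "front_desk": {"read_schedule"},
    "nurse": {"read_schedule", "read_chart"},
    "billing": {"read_billing", "write_billing"},
}

def authorize(role, permission):
    """Deny by default: grant only permissions explicitly assigned to the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

An unknown role, or a role asking for anything outside its set, is simply refused.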

4. Review of Privileges

Users’ access to systems and data should be reviewed regularly to ensure that such access is appropriate and is also still required. As users change roles and responsibilities, any access they no longer need should be revoked.

5. Separation of Duties

When possible, administrative duties should be divided up so that at least two users are required for key access or administrative functions. When two users must be involved, any malicious or inappropriate access requires collusion, reducing the likelihood of inappropriate actions and increasing the likelihood of detection.
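
A two-person rule like this can also be enforced directly in code. This is a hedged sketch; the function and parameter names are assumptions, not part of any standard API:

```python
def approve_sensitive_action(requester, approver, privileged_users):
    """Require two distinct privileged users, so misuse demands collusion."""
    if requester == approver:
        raise ValueError("requester and approver must be different people")
    if requester not in privileged_users or approver not in privileged_users:
        raise PermissionError("both parties must hold the required privilege")
    return True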

6. Job Rotation

Many insider threats develop over time and may go undetected for months or years. Often boredom is a cause. One way to counter both problems, while at the same time improving the skills and value of key employees, is to rotate users through different roles. Job rotation also increases the likelihood that inappropriate activities will be detected, as the new role holder must by definition examine what the previous role holder was doing.

7. Mandatory Time Away

All users need a holiday, a break and time away to recharge. This is not only good for users, it’s good for the organisation. Just like job rotation, when a privileged user is on leave, another person must cover their duties and has the opportunity to review what has been done.

8. Auditing and Log Review

Auditing is imperative. All actions and access must be audited, both for successes and failures. You will want to investigate failures as they may indicate attempts to access data, but you will also want to review successes and ensure that they are in support of appropriate actions, rather than inappropriate ones. While log review only detects things “after the fact,” it can catch repetitive or chronic actions early, and hopefully before too much damage is done.
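
A simple log-review pass can surface those repetitive failures early. The sketch below assumes a made-up `user=... result=...` line format purely for illustration; real audit log formats vary by platform:

```python
from collections import Counter

def flag_repeat_failures(audit_lines, threshold=3):
    """Return users whose failed-access count meets the threshold."""
    failures = Counter()
    for line in audit_lines:
        # Parse 'key=value' pairs from the assumed line format.
        fields = dict(part.split("=", 1) for part in line.split())
        if fields.get("result") == "fail":
            failures[fields["user"]] += 1
    return sorted(user for user, count in failures.items() if count >= threshold)
```

Run nightly over the previous day’s audit trail, even a crude counter like this turns “after the fact” into “within a day.”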

9. Data Loss Protection

Data Loss Protection (DLP) technologies cannot prevent a determined attacker from taking data, but they can prevent many of the accidental data leakages that can occur.

10. Endpoint Protection

Endpoint protection technologies can greatly reduce the risk of data loss and also detect inappropriate activities by privileged users. Endpoint protection can help you secure BYOD devices, and search files for key data like account numbers. The technology also helps to enforce policies that restrict users from transferring data to unapproved USB devices and encrypt those devices that are approved.
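
Searching files for key data can be as simple as pattern matching. The detector below is an illustrative sketch for one pattern, US Social Security numbers in the common ddd-dd-dddd form; production endpoint and DLP tools use far richer detection (checksums, context, keyword proximity):

```python
import re

# Word boundaries keep dates and order numbers from matching.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def find_candidate_ssns(text):
    """Return strings that look like SSNs in outbound or stored text."""
    return SSN_PATTERN.findall(text)
```

A scanner like this, pointed at files queued for transfer to a USB device, is one building block of the policy enforcement described above.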

Insider threats can be prevented if a detailed and layered strategy is adopted. Every organisation needs HR, legal and IT to work together to cast a protective net that will proactively identify threats, or at least minimise the impact of an insider threat. No organisation is safe, but we can all lower the risk by acknowledging that the problem exists and taking a range of simple precautions.


Breaking Down HIPAA: Health Data Encryption Requirements

Health data encryption is becoming an increasingly important issue, especially in the wake of large scale data breaches like Anthem, Inc. and Premera Blue Cross. The HIPAA Omnibus Rule improved patient privacy protections, gave individuals new rights to their health information, and strengthened the government’s ability to enforce the law. However, health data encryption is considered an “addressable” aspect rather than a “required” part of HIPAA.

With close to 90 million Americans potentially having their personally identifiable information exposed in the last few months alone, including PHI in some cases, more people are wondering if enough is being done to keep that data safe. Should health data encryption be required? What exactly determines if an entity incorporated encryption methods into its privacy and security measures?

We’ll take a closer look at what health data encryption is, why it’s beneficial, and how covered entities are currently required to use it.

What is health data encryption?

Health data encryption is the process by which a covered entity converts the original form of the information into encoded text. Essentially, the health data is then unreadable unless an individual has the necessary key or code to decrypt it. This is a good way for electronic PHI (ePHI) to remain secure and to ensure that unauthorized individuals are not able to “translate” the data for their own use.
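
To make the encode/decode-with-key idea concrete, here is a toy one-time-pad illustration using only Python’s standard library. It demonstrates the concept only; real ePHI should be encrypted with vetted algorithms (such as AES-GCM) from an audited cryptography library:

```python
import secrets

def xor_with_key(data, key):
    """Toy one-time pad: XOR each byte with a secret random key byte.

    The key must be as long as the message, used once, and kept confidential.
    """
    if len(key) != len(data):
        raise ValueError("one-time pad key must match the message length")
    return bytes(d ^ k for d, k in zip(data, key))

record = b"patient 4471: J. Doe"          # sample (fabricated) record
key = secrets.token_bytes(len(record))    # the confidential key
ciphertext = xor_with_key(record, key)    # encoded text, unreadable alone
```

Because XOR is its own inverse, applying the same function with the same key recovers the record; without the key, the ciphertext carries no usable meaning.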

In relation to the HIPAA Privacy Rule and the HIPAA Security Rule, data encryption is a method to protect PHI. In particular, the Security Rule was designed to protect all data that “a covered entity creates, receives, maintains or transmits in electronic form,” according to the Department of Health & Human Services’ (HHS) site.

Why would it be beneficial?

Theft continues to be one of the major causes of healthcare data breaches, including incidents that involve PHI. If a laptop or smartphone falls into the wrong hands, that individual could potentially cause major damage to patients if he or she had access to medical information or financial information. However, if that unauthorized user was unable to read the information on the devices, then some issues could potentially be avoided.

Health data encryption could be an important step in the privacy and security process. However, by itself, it will not be enough. For example, strong malware could break through a covered entity’s database security. From there, cyber attackers could get access to sensitive information, including PHI. Or, if an employee’s login credentials were stolen, an unauthorized user could gain access that way. In either of those examples, it would not necessarily matter whether the health data was encrypted.

It is also important to consider whether data is being encrypted at rest or in motion. For example, using a virtual private network (VPN) or a secure browser connection can be helpful for protecting data in motion. Transport Layer Security (TLS) can also work in this situation: it is a protocol that provides authentication, confidentiality, and integrity for sensitive data during electronic communication.
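
In Python, for instance, protecting data in motion with TLS starts with a default context, which enables certificate verification and hostname checking out of the box (the hostname in the comment is a placeholder, not a real endpoint):

```python
import ssl

# The default context refuses unverified certificates and checks hostnames,
# the baseline for protecting ePHI in motion.
context = ssl.create_default_context()

# Wrapping a plain socket would look like this:
# import socket
# with socket.create_connection(("ehr.example.invalid", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="ehr.example.invalid") as tls:
#         tls.sendall(b"...")  # bytes are encrypted in transit
```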

Overall, a covered entity needs to ensure that it has comprehensive technical safeguards – that may include data encryption – along with strong administrative safeguards and physical safeguards. One of those measures by itself will not be enough. Health data encryption could be a beneficial addition to a security program, but it would need to be working with other protection measures.

Is data encryption required?

According to HIPAA, encrypting health data is “addressable” rather than “required.” However, this does not mean that covered entities can simply ignore health data encryption. Instead, each healthcare organization must determine which privacy and security measures fit its workflow.

“…it permits covered entities to determine whether the addressable implementation specification is reasonable and appropriate for that covered entity,” according to HHS. “If it is not, the Security Rule allows the covered entity to adopt an alternative measure that achieves the purpose of the standard, if the alternative measure is reasonable and appropriate.”

There are many different encryption methods available as well, so it’s important for covered entities to review their systems and policies to determine if encryption is appropriate, and what kind of encryption to use.

For example, the HHS HIPAA Security Series suggests that covered entities ask themselves the following two questions to help determine if data encryption is appropriate:

Which EPHI should be encrypted and decrypted to prevent access by persons or software programs that have not been granted access rights?
What encryption and decryption mechanisms are reasonable and appropriate to implement to prevent access to EPHI by persons or software programs that have not been granted access rights?

To that same extent, covered entities should determine who is accessing the data, and how they might be doing so. For example, if a facility has a BYOD policy, and employees can access ePHI through their phone, mobile data encryption might be appropriate.

It remains to be seen if the government will make adjustments to its requirements for health data encryption. Until then, facilities need to be thorough in their risk assessments so they can properly determine if data encryption is a necessary measure and, if so, how best to incorporate it into their security. If a covered entity decides that data encryption is not necessary, it is essential to document the reasons why and then provide an acceptable alternative. Data breaches are unlikely to stop happening, so it is important that healthcare organizations remain diligent in making the necessary adjustments to remain as secure as possible.

Data Encryption Is Key for Protecting Patient Data


According to the HIPAA Final Omnibus Rule, section 164.304 sets forth the following definition: "Encryption means the use of an algorithmic process to transform data into a form in which there is a low probability of assigning meaning without use of a confidential process or key." Although encryption is considered an "addressable" issue rather than "required," it really should be treated as required. Why? Encrypting mobile devices, laptops, hard drives, servers, and electronic media (e.g., USB drives and CD-ROMs) can prevent a practice from paying a large fine for a HIPAA breach.

As a reminder, both Concentra and QCA Health Plan paid over $2 million in combined fines to the Department of Health and Human Services, Office for Civil Rights. The "investigation revealed that Concentra had previously recognized in multiple risk analyses that a lack of encryption on its laptops, desktop computers, medical equipment, tablets and other devices containing electronic protected health information (PHI) was a critical risk," the Office for Civil Rights said. "While steps were taken to begin encryption, Concentra's efforts were incomplete and inconsistent over time, leaving patient PHI vulnerable throughout the organization. OCR's investigation further found Concentra had insufficient security-management processes in place to safeguard patient information."

The problems with not encrypting data and failing to conform to the other requirements associated with HIPAA and the HITECH Act can have further reaching consequences. According to a recent article by Absolute Software, "Protected health information is becoming increasingly attractive to cybercriminals with health records fetching more than credit card information on the black market. According to Forrester, a single health record can sell for $20 on the black market while a complete patient dossier with driver's license, health insurance information, and other sensitive data can sell for $500."

Any physician who has had their DEA number compromised or been involved in a government investigation involving Medicare fraud knows firsthand about the importance of implementing adequate security measures and internal audits. Investing in encryption is one way to mitigate financial, reputational, and legal liability.



Stolen hard drives bring more data breach pain for US health services


The Indiana State Medical Association (ISMA) has warned 39,090 of its clients that their private data may be at risk of leakage, after the "random" theft of a pair of backup hard drives.

The drives were being transported to an offsite storage location when the theft occurred, on 13 February. ISMA went public with the breach on Monday, having apparently sent out letters to those affected a few days earlier, three weeks after the incident.

Data on the drives includes at least the standard set of personal details, such as names, dates of birth, health plan ID numbers, and physical and email addresses. In some cases it also includes Social Security Numbers and/or details of medical history.

Those affected should already have been told what level of information about them may have been leaked.

ISMA's statement claims the data on the drives "cannot be retrieved without special equipment and technical expertise", although it's not clear if that equipment and know-how means anything more than a computer to connect the drives to and the skills to plug them in and mount them.

There's certainly no mention of strong encryption being applied to the records, implying that they were stored relatively insecurely.

ISMA has posted a detailed FAQ for those affected, and will provide credit monitoring services for those who want them - the deadline to apply for this is 8 June 2015.

Many of them may already have availed themselves of ID protection, as there's likely to be a considerable overlap with the epic Anthem breach, which affected huge numbers of people across the US.

As Paul Ducklin recently pointed out, medical information is highly sensitive, opening up all sorts of opportunities for social engineering and identity theft.

All such data needs to be properly secured, to protect it not just from hackers as in the Anthem case, but also from inadequate anonymisation when referenced online, and of course from the many dangers of the physical world.

Backups are of course a vital part of any security and integrity regime, but it's worth remembering that they also bring some added security risks of their own. Backed-up data needs to be stored securely, ideally in a separate location from the master copies, and transporting data is always a fragile part of the chain.

We routinely hear of data being lost in the post, devices being mislaid in trains, planes and taxis, and even records simply falling off the back of trucks.

In this case, the incident is described as a "random criminal act". The proper tactic to mitigate this risk is not heavily-armed security guards escorting couriers to backup storage locations, but something much simpler and cheaper.

All data considered sensitive or important should be strongly encrypted as a matter of routine when immediate access is not required.

Off-site backups in particular should be locked down as strongly as possible, given that decryption time will not add significantly to the restore process.

Keeping data well encrypted adds another layer on top of the security of storage facilities, and minimises the danger from "random criminal acts", and even carelessness, when data is in transit.


Data Breach Reporting Requirements for Medical Practices


By now, almost everyone who watches the news or reads any major newspaper has heard about the Anthem, Inc. data breach. Anthem, the nation's second-largest health insurer, is considered a covered entity under HIPAA and, in turn, must comply with the federal laws and regulations governing such entities.

On Feb. 4, the company announced that it was the target of a cyber attack that enabled hackers to penetrate its data system and access members' identifying and personal information, including names, dates of birth, employers, and Social Security numbers. In the aftermath of this announcement, class action lawsuits were filed around the country. This means that in accordance with Rule 23 of the Federal Rules of Civil Procedure, "one or more members of a class may sue or be sued as representative parties on behalf of all members," subject to certain conditions such as the number of claimants, commonality among questions of law and fact, as well as defenses.

The suit filed in the U.S. District Court for the Southern District of Indiana, Meadows v. Anthem, Inc., indicated that the data breach exposed the information of up to 80 million consumers. The suit alleges that people would not have obtained health insurance and relied on the representations of Anthem had they known that their data was at risk. Hence, numerous contractual issues were raised. In light of this occurrence, physicians should evaluate their own contracts, their HIPAA compliance, and what they are indicating in their attestations and assurances to patients and business partners.

The new Office for Civil Rights HIPAA breach protocol

With the upgrade to the HHS' Breach Portal, additional information is required there, too.

45 CFR §164.408 and the alterations to the Breach Portal may impact certain entities that are planning to submit their 2014 breach notification reports for incidents impacting fewer than 500 people within 60 days of the end of the calendar year, pursuant to 45 CFR §164.408(c). So, what do these new reporting requirements entail?

• Disclosure of a “breach end date” and a “discovery end date” is required.

• The "Safeguards in Place Prior to the Breach" now utilizes general categories (i.e., none and privacy rule safeguards) instead of specifics (i.e., strong authentication and encrypted wireless).

• “Actions Taken in Response to Breach” is much more detailed and includes “adopted encryption technologies, security rule risk analysis, and revised policies and procedures.”

It is important to note that in the event of an investigation, any identified area may be delved into in greater detail. The March 2, 2015, 60-day deadline for reporting 2014 breaches is coming up shortly. These changes are a signal that close attention should be given to HIPAA, the HITECH Act, and the related rules. Doing so can save a lot of time, money, and reputational cost.


HIPAA Compliance and Windows Server 2003 | EMR and HIPAA


Last year, Microsoft stopped updating Windows XP, and so we wrote about how Windows XP would no longer be HIPAA compliant. If you’re still using Windows XP to access PHI, you’re a braver person than I. That’s just asking for a HIPAA violation.

It turns out that Windows Server 2003 is five months away from Microsoft ending updates for it as well. This could be an issue for the many practices that have a local EHR installed on Windows Server 2003. I’d be surprised if an EHR vendor or practice management vendor was still running a SaaS EHR on Windows Server 2003, but I guess it’s possible.

However, Microsoft just recently announced another critical vulnerability affecting Windows Server 2003 systems that use Active Directory. Here are the details:

Microsoft just patched a 15-year-old bug that in some cases allows attackers to take complete control of PCs running all supported versions of Windows. The critical vulnerability will remain unpatched in Windows Server 2003, leaving that version wide open for the remaining five months Microsoft pledged to continue supporting it.

There are a lot more technical details at the link above. However, I find it really interesting that Microsoft has chosen not to fix this issue in Windows Server 2003. The article above says “This Windows vulnerability isn’t as simple as most to fix because it affects the design of core Windows functions rather than implementations of that design.” I assume this is why they’re not planning to do an update.

This lack of an update to a critical vulnerability has me asking if that means that Windows Server 2003 is not HIPAA compliant anymore. I think the answer is yes. Unsupported systems or systems with known vulnerabilities are an issue under HIPAA as I understand it. Hard to say how many healthcare organizations are still using Windows Server 2003, but this vulnerability should give them a good reason to upgrade ASAP.


5 things all Anthem customers should do after the massive data breach


Anthem customers, we feel your pain. The Anthem data breach revealed last week could affect up to 80 million people, and the investigation into the scope of the crime is just starting. 

Any breach that exposes data from millions of customers (or even thousands really) is bad, but the Anthem breach is actually worse than retail breaches like Target because of the type of information that was compromised. The fallout and damage from a compromised credit card can be undone. You can’t easily "undo" your name, birthdate, or Social Security number. Now that attackers have your personal information, your identity could be stolen tomorrow, next week, or three years from now.

There's no way any individual Anthem customer could have protected themselves in this incident, but now that it's happened, there are some steps you can (and should) take to monitor and protect your identity, given the likelihood that attackers have your information.

1. Visit AnthemFacts

The AnthemFacts.com FAQ site is the best source of current information about the data breach.

2. Take advantage of free credit monitoring

Anthem states, “All impacted members will receive notice via mail which will advise them of the protections being offered to them as well as any next steps.”

When you receive that notice, follow the instructions and take advantage of any protection or services Anthem is providing. 

3. Use your free annual credit report

Federal law requires each of the three nationwide consumer credit reporting companies—Equifax, Experian and TransUnion—to give you a free credit report every 12 months if you ask for it. Use that as another monitoring weapon over time.

Set a reminder on your calendar to visit AnnualCreditReport.com and request your free credit reports. Take the time to study them. Even if you don’t find evidence of fraudulent activity, you might find incorrect or outdated information that is adversely impacting your credit score.

4. Monitor your children

Kevin Duggan, CEO of Camouflage Software, says, “Anthem should be of interest because of who will likely bear the brunt of the damage: young children and the elderly."

Children in particular don't actively use credit or apply for loans, so they're less likely to discover fraudulent activity. Duggan points out, "a five-year-old child today will not likely realize their credit has been destroyed by fraudulent activity until it comes time for them to use it to apply for student loans in about 13 years."

This Experian site contains a lot of helpful explanations and information about how to manage and protect a minor’s credit history. The challenge is that there is nothing to monitor until or unless the child has a credit history, and according to Experian there are reasons that you would not want to preemptively establish and freeze a credit history in your child’s name.

In any case, you have everything to gain by checking regularly to make sure no credit history has been created using your child’s Social Security number.

5. Be alert as you file your income taxes 

Armed with personal information like your name, address, and Social Security number, attackers could file a fraudulent income tax return in your name. The IRS and state tax authorities aren’t really equipped to determine whether the person filing is legitimate. The variety of methods for receiving and cashing a tax refund make it possible for someone to file in your name and have the money spent before the fraud is discovered. At that point, you'll have to jump through some hoops to prove the initial return was fraudulent so you can file a valid tax return.

Anthem won’t be the last

You don't have to be an Anthem customer to take lessons from this incident. Your own insurer could be the next victim. “A sad fact is that the healthcare industry by and large has never been seen as a leading edge security consumer because the historical threat has been more focused on financial services,” said Mark Kraynak, Chief Product Officer for Imperva. “This is a broad generalization but healthcare targets are probably a little bit softer targets than financial targets.”

Jeremiah Grossman, CEO of WhiteHat Security, cautions that there’s a good chance other health insurance organizations are already compromised and just don’t know it yet. “As these things happen, it would not be surprising if other healthcare institutions reveal that they’ve been compromised. Often enough, cyber-criminals work in coordinated teams and target market segments, and not just a single entity.”

Staying vigilant to identity theft around a data breach is a bit like trying to prevent a house fire by spraying it with a squirt gun after it’s already burning, but if you follow these steps you can at least minimize the potential damage.


Can You Keep a Secret? Tips for Creating Strong Passwords

The computers in your office are veritable treasure chests of information cyber pirates would love to get their hands on. Only authorized personnel in a practice should have the keys to unlock what’s inside. Passwords are those keys. They play an important role in protecting Electronic Health Records (EHRs) and the vital information those records hold.

The HIPAA Security Rule says that “reasonable and appropriate . . . procedures for creating, changing, and safeguarding passwords” must be in place. But the rule doesn’t stop there. It goes on to say that “In addition to providing passwords for access, entities must ensure that workforce members are trained on how to safeguard information. Covered entities must train all users and establish guidelines for creating passwords and changing them during periodic change cycles.”

Regardless of the type of computers or operating system your office uses, a password should be required to log in and do any work. Today’s blog will focus on how to create strong passwords – the kind that aren’t easily guessed. And since attackers often use automated methods to try to guess a password, it is important to choose one that doesn’t have any of the characteristics that make passwords vulnerable.

How to stay ahead of the hackers

They’re a clever bunch, those hackers. And they seem to know a lot about human nature, too. They’ve figured out the methods most people use when choosing a password. And they’ve turned that knowledge to their advantage.

To outsmart them, create a password that’s:

NOT a word found in any dictionary, even foreign ones
NOT a word in any language — including its slang, dialects, and jargon
NOT a word spelled backwards
NOT based on recognizable personal information — like names of family and friends
NOT a birthdate
NOT an address or phone number
NOT a word or number pattern on the keyboard — for instance, asdfgh or 987654

A strong password should:

Be at least 8 characters in length
Include a combination of upper and lower case letters, at least one number, and at least one special character, like an exclamation mark
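The mechanical rules above (length plus character mix) are easy to automate. Here is a minimal Python sketch of such a check; the function name is this sketch's own, and the dictionary-word and personal-information rules are deliberately omitted, since they require word lists and knowledge of the user:

```python
import re

def is_strong_password(password: str) -> bool:
    """Check a candidate against the length and character-mix rules above.

    Dictionary-word and personal-information checks are out of scope
    for this sketch.
    """
    if len(password) < 8:          # at least 8 characters
        return False
    checks = [
        re.search(r"[a-z]", password),          # a lowercase letter
        re.search(r"[A-Z]", password),          # an uppercase letter
        re.search(r"\d", password),             # a number
        re.search(r"[^a-zA-Z0-9]", password),   # a special character
    ]
    return all(checks)
```

A password manager or login system could run a check like this before accepting a new password, rejecting anything that fails.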

Examples of strong passwords

With their weird combinations of letters, numbers, and special characters, passwords can be a challenge to remember. Starting with an easy-to-remember phrase and then tweaking it to fit the guidelines for strong passwords is one way around that problem.

For instance:

1h8mond@ys! (I hate Mondays!)

5ayBye4n@w (Say bye for now)
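The phrase-tweaking approach above can even be sketched in code. This hypothetical helper (the function name and substitution table are this sketch's own choices, not a prescribed standard) strips the spaces from a memorable phrase and applies common character swaps; note that attackers know these common substitutions too, so adding a symbol or number of your own, as in the examples above, is still wise:

```python
# Common letter-to-symbol swaps used purely as a memorability aid.
SUBSTITUTIONS = {"a": "@", "s": "5", "i": "1", "o": "0", "e": "3"}

def tweak_phrase(phrase: str) -> str:
    """Turn an easy-to-remember phrase into a password candidate."""
    compact = phrase.replace(" ", "")  # drop spaces
    # Swap each lowercase letter that has an entry in the table.
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in compact)
```

For example, `tweak_phrase("Say bye for now")` produces `S@yby3f0rn0w`, close to the hand-made example above.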

Safety first

The importance of having strong passwords (the longer, the better) and changing them on a regular basis can’t be overstated. And it goes without saying that a password should never be written on a Post-it note stuck to a computer monitor. Do everything you can to make your passwords strong, and store them somewhere safe. These steps will help ensure the security of your PHI and give those hackers fits.
