Oct 15th, 2019
If you work in the healthcare industry, you have a huge responsibility when it comes to managing sensitive patient information. Whether you’re a big software vendor or a single physiotherapist, everyone needs to follow the same rules, and there are some pretty serious consequences for not doing it properly. Do you know what your obligations are, and whether you’re doing a good job? Check this episode out to find out!
Who is Anna Johnston
Anna Johnston is one of Australia’s most respected experts in privacy law and practice.
She has qualifications in law, public policy and management, and 26 years’ experience in legal, policy and research roles. Anna has a breadth of perspectives and a wealth of experience in dealing with privacy and data governance issues.
She is the former Deputy Privacy Commissioner for NSW, so she knows the regulator’s perspective, and since 2004 she has been the Director of the consulting firm “Salinger Privacy”.
Anna has been called upon to provide expert testimony before various Parliamentary inquiries and the Productivity Commission, spoken at numerous conferences, and is regularly asked to comment on privacy issues in the media.
Anna holds a first class honours degree in Law, a Masters of Public Policy with honours, a Graduate Certificate in Management, a Graduate Diploma of Legal Practice, and a Bachelor of Arts, plus a number of other relevant and well regarded certificates and industry associations.
In this Episode you’ll learn
2:08 - About Salinger Privacy
4:55 - Privacy Concerns in Data (with a focus on health tech)
8:15 - All about privacy reviews, data flows, data governance, and privacy by design
14:28 - AI - How does it fit ethically and legally, and is policy keeping up with innovation?
16:40 - AI - GDPR, challenges for AI with diagnostic decisions
20:10 - AI - Transparency, Accountability and Consent
26:00 - Legal Obligations with Data Privacy
- When it comes to privacy law in Australia, the same laws and consequences apply to everyone dealing with healthcare information - whether they are a big institution or a single doctor.
- While data privacy breaches do happen, they are often the result of a lack of education, or even the best of intentions, rather than malicious intent
- Often AI is trained on data that was collected for purposes other than training the machine, so the concept of informed consent is a tricky one
- The simple “tick this box to agree” actually isn’t enough, and more emphasis needs to be put on communicating clearly with the person whose data is being collected
- Patients’ expectations of data privacy hold the health and medical industries to the highest levels of scrutiny, meaning that breaches must be reported to the Privacy Commissioner’s office and to the patients whose privacy has been breached
Anna Johnston Twitter - @SalingerPrivacy
Anna Johnston LinkedIn - https://www.linkedin.com/in/anna-johnston-ba188410a/
Notifiable Data Breaches Scheme - https://www.oaic.gov.au/ndb
MSIA - https://msia.com.au/
Salinger Privacy - https://www.salingerprivacy.com.au/
My Health Record (Formerly PCEHR) - https://www.myhealthrecord.gov.au/
NDIS - https://www.ndis.gov.au/
National Health and Medical Research Council - https://www.nhmrc.gov.au/
[00:00:00] Pete: With me today is Anna Johnston. Anna is one of Australia's most respected experts in privacy law and practice.
She has qualifications in law, public policy and management, and 26 years' experience in legal, policy and research roles. Anna has a breadth of perspectives and a wealth of experience in dealing with privacy and data governance issues. She's the former Deputy Privacy Commissioner for New South Wales, so she really knows the regulator's perspective well, and since 2004 she has been the Director of the consulting firm Salinger Privacy. Anna holds a first-class honours degree in law, a Masters of Public Policy with honours, a Graduate Certificate in Management, a Graduate Diploma of Legal Practice, and a Bachelor of Arts, plus a number of other relevant and well-regarded certificates and industry associations. Anna no longer practices as a solicitor, so I am allowed to tell the occasional lawyer joke, apparently, which is great because that's what I'll probably do. Anna, thanks so much for joining.
[00:01:06] Anna: Thanks, Peter, great to be here.
[00:01:07] Pete: I think we came across each other because you were doing some stuff with the MSIA, the Medical Software Industry Association.
[00:01:15] Anna: Yes, I presented at their annual conference recently and then also ran a workshop about privacy by design. So for anyone in that space of designing health-related technology, it was about understanding the skills and strategies that will help you build privacy compliance into the design upfront, rather than trying to retrofit it later.
[00:01:39] Pete: I'd love to get into more of that detail a bit later in the conversation too. So you're well primed for the health tech space, and it's kind of cool to have someone on the show who's involved in many different industries. You're not a vendor, you're another player in this big space, in an area that's super important these days in health tech, being data privacy and security. So I'm super excited about this conversation. Tell me a little bit more about Salinger Privacy, what you guys do and where your clients operate?
[00:02:13] Anna: Sure. So basically we do all things privacy: consulting, training, and we offer resources. One of the things I love about working in the privacy space is that it's just a fascinating intersection between law, ethics and technology. There's always something new, there's always a new technology coming around the corner that we have to get our heads around, and we help our clients manage that intersection between their legal obligations, ethics, customer expectations, and then what the technology can do and what the technology should be allowed to do. We are an Australian business with clients across Australia, and occasionally we dip our toe into the waters of New Zealand as well. Our clients are quite the mix: quite a lot of government clients, but also businesses from the big end of town to the nonprofits, and also the small and very much tech startup space. So we have clients everywhere from the top ASX companies down to the one person who's got a great new tech idea and is working out of their spare bedroom at the moment.
[00:03:28] Pete: Nice. So how much of it do you reckon is in that health space?
[00:03:34] Anna: Yeah, health is really common, probably the second biggest sector after government, although of course often government is also in the health sector. So sometimes our clients will be the health service provider, someone directly in that health service provision space, and they just want to make sure they're dotting their I’s and crossing their T’s in the way that they're collecting and using their patients' data. But more typically it's not so much that direct service provision, it's all the organizations that use and collect and hold and store health information. Sometimes that's insurance companies, for example; sometimes it's governments working in public policy organizations getting into the data analytics space, focusing particularly on health and disability data. And then there have been some really big-ticket projects we've worked on. We worked on the privacy impact assessment on the original design for My Health Record, back when it was originally called the Personally Controlled Electronic Health Record, and on the original setup of the National Disability Insurance Scheme. So we've been involved in privacy impact assessments very early on in those very, very big-ticket government projects which touch on health and disability data in particular.
[00:04:55] Pete: So in health in particular, what are some of the biggest privacy concerns you see pop up today?
[00:05:02] Anna: So what I think is quite interesting about the health sector, and what makes it different to other sectors, is that the health sector is a standout, but in a bad way, unfortunately. The health sector consistently tops the list of sectors reporting notifiable data breaches in Australia. And when we talk about a notifiable data breach, we're talking about when personal information has either been lost, subject to unauthorized access, or subject to an unauthorized disclosure.
[00:05:33] Pete: Because that was relatively recent, wasn't it? Something changed recently that meant companies needed to be more transparent with that kind of thing.
[00:05:43] Anna: Yeah, absolutely. The law was changed in February 2018 to make notification mandatory. So if you have this kind of data breach, and it's likely to result in serious harm to one or more individuals, it's now the law in Australia that you need to notify both the Privacy Commissioner's office and those affected individuals, so your patients.
[00:06:03] Pete: So it's not just big companies, it's small companies too?
[00:06:05] Anna: So in the health sector it covers any health service provider, regardless of their size.
So you might be a one-person physiotherapy business, or an independent locum, and you're covered by the federal Privacy Act. So regardless of your size, all health service providers are covered. Outside the health sector there is an exemption for small businesses, but that exemption does not apply to health service providers.
So the health sector is already called out for, I guess, expectations of a high level of privacy protection for businesses no matter their size, just because of patients' expectations. And I think one of the things that makes the health sector different is patient expectations. It's not that the type of privacy risks or privacy issues is different for health technology design compared with any other type of technology design; the difference is that patients' expectations about the protection of their health data are much higher. There's just this sort of intuitive "if it's my health information, it must be kept absolutely private". But also, the consequences of privacy breaches tend to be higher when you're talking about health information, compared with, say, the accidental disclosure of someone's credit card details. Yeah, there are some financial risks, but those risks can be resolved in a relatively straightforward way. I don't want to minimize those risks, but it's quite a different story in terms of the repercussions individuals can face if their health information is disclosed without authority. That might be discrimination, embarrassment, implications for their employment, implications for insurance, and all the rest.
That's what makes the challenges for people working in technology in the health sector so much higher. Not that, as I said, the nature of the privacy risks themselves is terribly different; it's just that the expectations are higher and the consequences are worse if you have a data breach.
[00:08:17] Pete: So you mentioned that you guys do privacy reviews. What is a privacy review exactly?
[00:08:24] Anna: So we do two different kinds. One is called a privacy impact assessment, and the other is generally called a privacy audit or a privacy compliance review, and the difference really is where you're at in the design process for what we're reviewing. If you are at the design stage of a new project, a new technology project for example, we get in at the design stage and do what's called a privacy impact assessment. If you want us to review something that's already up and running, your business as usual, we basically call that a privacy audit. But regardless of which one of those we're doing, we ask the same kind of questions, and regardless of whether it's the design of software, a business process, or a paper form. It doesn't have to be a high-tech project to need this kind of review.
So regardless of the nature of the project, we tend to ask the same questions: can and should we collect this data? Can and should we use it for this particular purpose? Who can we disclose it to? How do we keep it safe? So when we look at a new project, for example, we look at two broad things: one is data flows and the other is data governance.
So with what I describe as data flows, what we're looking at is: what personal information is being collected? How is it going to be used? Who will it be disclosed to? So those three points: collection, use and disclosure. And for each of those we then ask, is this going to be appropriate, meaning is it going to be lawful?
So is it going to comply with the privacy principles that govern collection, use and disclosure? But not just is it going to be lawful: is it going to meet your customers', your patients', expectations? Is it going to be proportionate to a legitimate business need? And, critically, is there a more privacy-protective way you can achieve that business objective? So we're always trying to help our clients come up with the most privacy-protective design of a technology, of a form, of whatever it is, but in a way that still achieves the business's objectives.
So once we've settled those questions about authorizing the data flows and making sure that they are lawful and appropriate, then we look at data governance. We usually start with transparency: have you communicated clearly to your customers about those data flows, how their personal information is going to be collected, used and disclosed, so that they actually understand what's going to happen?
In terms of design practices, often companies will jumble the three all together into one long, legalistic, confusing document, and then they make users just tick "agree".
[00:11:55] Pete: Tick a box, and you can click the link to go read it, down at the bottom.
[00:12:02] Anna: Yeah, and we know no one ever reads it. I don't even read them. So in terms of data governance, we look importantly at transparency, and then finally we look at other data governance questions: have your staff been trained? Do you have a clear pathway for managing any requests you get from patients to access their data or correct it? Do you have a clear pathway for managing privacy complaints? Do you have a data breach response plan in place, and do your staff know what to do in the event of a data breach? All of those things, data flows and data governance, form part of whether we're doing a privacy impact assessment of a new project or a privacy audit of an existing business process. And again, whether it's software or something else, we look at both data flows and data governance as part of our privacy review.
[00:12:53] Pete: And if I think about it from my experience, if I'm thinking as a health tech vendor, not many of them go out with any kind of massive intention to, I don't know, steal patients' information or do something cynical with the data. But I've seen in the past too, it's not about the intention of what they're going to do with it, it's almost the perception of what's going to happen. So having that kind of review, or someone outside of the business to do that, sounds like a pretty sensible thing to do.
[00:13:23] Anna: Yeah, absolutely, and certainly in my experience, having worked in a regulatory role in the Privacy Commissioner's Office, the vast majority of privacy complaints and the vast majority of privacy breaches and data breaches are not coming from a point of malicious conduct or people deliberately doing the wrong thing.
It's accidents and oversights, and it's people simply not understanding what their obligations are, or not understanding that there are alternative ways to design things. So absolutely, I very, very rarely see privacy breaches arising from deliberate misconduct. It's much more coming from a place of ignorance, and sometimes from people trying to do the right thing, trying to be helpful, trying to help their clients, but accidentally doing the wrong thing.
[00:14:20] Pete: Yeah, that can happen in healthcare too. "Can you just send this across to me? I really need it because of this particular situation," or something. Yeah.
[00:14:27] Anna: Yeah. Absolutely.
[00:14:28] Pete: It seems to be the right thing to do. It's a balance. So I'm thinking about that in our world: AI, artificial intelligence, is a big point of discussion regarding privacy, for me anyway, at the moment. How well do you think policy is keeping up with the pace of innovation in Australia more broadly? AI is a really innovative space, and there are other things going on too. How's policy keeping up?
[00:14:50] Anna: I think there's a constant challenge whether it's AI or any other kind of new technology.
There's always this challenge of law and policy keeping up. The first point I'd make is that privacy laws are drafted deliberately to be technology-neutral and format-neutral. So the idea is that they shouldn't actually be always playing catch-up; we've tried to anticipate, in the drafting of our privacy laws, technologies that haven't even been thought of yet. Our starting point with those laws is broad-framed general principles, and it's all about respecting humans' autonomy and dignity. So one answer is that the law is keeping up, because it was already anticipating new technologies, and those new technologies should be being managed under the umbrella of existing laws and policies.
But at the same time, obviously, the law is constantly being challenged in terms of how workable it is in practice, and certainly with artificial intelligence the ethical and legal implications are something that governments around the world, not just in Australia, are grappling with right at the moment. So there are projects trying to come up with legal and ethical frameworks to cover AI. Here in Australia, the federal department of innovation and industry has been working on something; there are projects in the EU; there are projects in the US. There's a lot of activity going on at the moment, and lots of those projects around the world are focusing on things like the fairness of AI as well as transparency.
So in particular in Europe, some of your listeners may have heard of the GDPR already. That's the privacy law in Europe that was recently reformed, the General Data Protection Regulation, and one of the reforms introduced is what you might call a right to algorithmic transparency. That's the law's way of trying to ensure that algorithms developed from machine learning and AI will be fair and accountable in terms of the impact of decisions made based on those algorithms. So there's a kind of right to human review of a computer's decisions, and there are rights to ask companies to pause or stop the processing, what we would call using or disclosing someone's personal information, in order to ask for an explanation: how is this algorithm working? Why was I denied health insurance, or why are my premiums going up while my next-door neighbour's are going down, for example?
[00:17:49] Pete: And it's even more important as we're moving to a space where artificial intelligence is assisting the process of diagnostics, where it looks at an image and says this patient has cancer or not.
You know, having that in a black box, where you just ask the computer and wait to see what it says, there's so much ambiguity there.
[00:18:12] Anna: Yeah, absolutely. And in a legal sense, I think courts will increasingly struggle with this as well, if someone is challenging a decision. So it might not be the diagnosis itself, but maybe it's the health insurer's decision based on the diagnosis.
You know, we're going to pay your claim or we're not going to pay your claim, or whatever it is, based on some kind of calculation of the risk of that disease developing, for example. If the algorithm can't be explained to a court, if it can't be explained to a judge, how is anyone going to be able to determine whether that algorithm was working in a fair and accurate way? So one of the really critical privacy principles is what's called the data quality principle, or the accuracy principle, and it says that each of us has the right to ensure that only accurate, relevant, up-to-date, complete and not misleading information is used in decision-making about us.
That obviously becomes more critical, where the rubber hits the road, when the decision is going to impact us negatively. So the decision is going to be: you don't get the insurance, or we don't pay your claim, you don't get the job, you don't get access to housing, you don't get access to credit, for example. And so if you've got decisions made in a black box, and no one can explain how they're made because there was some machine learning going on and the AI system came up with its own algorithm, how can a court test whether or not that decision-making, and the data on which it was based, was accurate, fair, relevant, up-to-date, etcetera? So that's certainly one big challenge for AI: the transparency and the accountability for it.
I think the other big challenge, the other area where AI poses a challenge in terms of compliance with privacy law, is the lawfulness of the data flows in the first place. As I was saying, when we do a privacy review we're looking at the data flows, meaning what personal information is collected, how it's used, who it's disclosed to. And in the world of AI, for your ability to lawfully collect, use or disclose data, it's extremely hard to rely on consent as your lawful mechanism. Consent is by no means the only lawful mechanism; there are lots of ways under the privacy principles that allow companies and governments to collect, use and disclose personal information. But quite often consent is what organizations try to rely on, and in AI it's really challenging. So think about an example where AI is being used to diagnose some health conditions. Much of the data used in the first place to train the machine learning that will create the algorithm, what we call the training data, will have been collected for some other purpose.
So it will have been years' worth of data collected about real patients being treated in real hospitals, and that becomes the training data set for the machine learning. So it's fairly likely that the patients in the past were not asked to consent, at the time, to some future use of their data for this quite different purpose.
[00:21:56] Pete: That’s something that wasn't even thought of at the time.
[00:21:59] Anna: So it's not just about treating you; at some point in the future a machine will use your data to train another machine to recognize patterns in data. But even now, if we started to ask patients for their consent, you know, "as well as us treating you in hospital today, do you consent to your information being used for AI development in the future?", how could a patient today possibly give informed consent? Because the whole point of machine learning and AI is to throw all the data in the mix and just see what pops out. It's not the old-fashioned kind of research by hypothesis: this is the question we're asking, here's exactly how we're going to conduct the experiment. It's not like a clinical trial, where as a patient I know what my disease is, I'm being offered a new kind of medicine, I've been warned about the possible side effects, and I've had the chance to say yes or no. AI and machine learning are based on quite different kinds of research practices, which don't usually involve that kind of one-on-one sit-down discussion with an individual. It's based on very, very large data sets to create those training data sets.
It's based on historical data, and typically you don't have the ability to go back and ask for everyone's consent. It's very difficult to rely on patient consent as the lawful basis for health information to be collected, used or disclosed for AI purposes. As I said, it's not the only possibility, but quite often companies work on the assumption that consent is going to be their legal mechanism, and it turns out not to be the pragmatic solution for them. I don't think that's something that's particularly well understood yet.
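The reasoning Anna walks through here and in the following answers can be sketched as a toy decision helper. This is purely illustrative: the `Consent` fields, the `lawful_basis` function and the two mechanisms modelled (valid consent, or a research exemption with an ethics-committee waiver) are hypothetical simplifications of the ideas discussed, not a statement of what any particular privacy law actually requires.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Consent:
    voluntary: bool  # the person had a genuine choice to say no
    informed: bool   # the specific future use was actually explained
    specific: bool   # not bundled into mandatory terms and conditions

def lawful_basis(consent: Optional[Consent], ethics_waiver: bool) -> str:
    """Return which (illustrative) lawful mechanism covers a data flow."""
    # Valid consent must be voluntary, informed and specific; a mandatory
    # tick-box or an opt-out model fails this test.
    if consent and consent.voluntary and consent.informed and consent.specific:
        return "consent"
    # Historical training data typically cannot meet that test, so a
    # research exemption, with a committee waiving consent, may apply instead.
    if ethics_waiver:
        return "research exemption"
    return "no lawful basis identified"
```

For example, a patient treated years ago whose records now feed a training set would usually fall through the first check, which is exactly why the research-exemption pathway Anna describes next matters.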
[00:23:52] Pete: What is the solution then, if consent isn't it? How does a company doing AI in health, or any area I guess, operate?
[00:24:01] Anna: So there are other legal mechanisms, and it depends which privacy law you're talking about, which jurisdiction you're in, but there's usually some kind of research exemption. Again, it differs from state to state and federal, and from country to country, but the research exemptions usually have some role for a human research ethics committee, which gets to weigh up the ethical considerations and think about where the public interest lies, and that committee usually has the power to waive the requirement for consent.
So there is this kind of structured way to work through those issues, and the National Health and Medical Research Council has guidelines on how to set up a human research ethics committee, what a properly constituted committee looks like, and all of the factors they need to consider; there are guidelines about how they need to reach their decisions. So it's not as simple as the tick-a-box mandatory terms and conditions.
That's not going to constitute a valid consent in privacy law, so that's just not the right legal mechanism in the majority of cases for artificial intelligence development.
[00:25:25] Pete: Wow, so much complexity to factor in, and even going through just the tip of the iceberg of all of that, you can see a lot of work underneath it, and questions and vagueness that speak to the reasons why the rate of innovation moves so much faster than other important areas like policy. That's really interesting. Hey look, so moving on: what should Australian health tech software vendors be most concerned about when developing a solution today?
[00:26:01] Anna: I think first of all, make sure you're thinking about both your legal obligations and your customers' expectations. The law, and by the law I'm talking about the privacy principles built into privacy law, tries to codify your basic ethical obligations, but it really sets the minimum standard, and often your customers' expectations will set a higher standard than just legal compliance. So legal compliance is obviously necessary, but it really should just be considered the minimum baseline, not the entire set of things that you need to think about. I mentioned before that the role of consent is in reality quite fraught, so if you are relying on your patients' consent to do something with their health information, you absolutely need to make sure that that consent is actually going to be valid under privacy law, that it will hold up to scrutiny. Under privacy law, you can't say that you're relying on a patient's consent if they actually had no choice to say no. It has to be voluntary, it has to be informed, it has to be specific.
So it can't be included in mandatory terms and conditions, for example, and an opt-out model is not consent. As I said, consent is not the only legal mechanism; there are plenty of other mechanisms. But if that's the one you're relying on, you need to be really careful to get it right. Another thing is to make sure that your technology has been designed with privacy in mind. So we talked about this concept of privacy by design, which is all about baking your privacy controls into the design of systems from the beginning, rather than trying to retrofit them in later. What I find usually is that a lot of effort goes into the cybersecurity side of things, keeping out the external bad actors, and that's obviously incredibly important, but our particular expertise and skill set is focused more on the internal actors. So whether you're designing tech, configuring it, or implementing it, you need to think about your customers, but also about your staff, your trusted users, your trusted insiders. Make sure that the tech is designed so that staff or other authorized users only see the absolute minimum amount of personal information about your customers or your patients that they really need to do their job. The legislation says that you have to do this. A lot of people come back and say, oh, we've got a code of conduct for our employees, we make them all sign it, so that's okay. The law says that that is not enough, and case law, basically the law that comes from court decisions and tribunal decisions, backs that up: just letting all staff see all patient records, but saying they signed a code of conduct, is not going to be enough.
You won't be complying with your privacy legal obligations if that's all you're doing. So you need things like role-based access controls, but there's a whole bunch of other privacy controls that can be built into tech design, and it will depend on the kind of product or service that you're designing.
But depending on what it is you're doing: if you've got a data analytics project using a data warehouse, for example, we would look at filtering out certain data fields, and then we'd look at masking other data fields from the view of particular user groups. If you think about something like an e-health record system, you would limit the search functionality to prevent misuse. The kind of scenario we're usually looking at is: could a staff member look up health information about their partner, or their ex-partner, or their next-door neighbour? So you might put in a test that users need to pass before they can even access customer records, rather than just enabling any user to do a global search against any customer or patient name. So there are plenty of different things you can do. We use eight privacy design strategies to help guide our advice to our clients when we're reviewing technology design and software design, and sometimes the solution lies in the design of the technology itself, but quite often it's outside the technology. So the solution might be a mix of staff training, policies and procedures, and, back to that transparency issue, how you communicate with your customers. There are lots of different angles we can come from when we're trying to mitigate privacy risks.
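Two of the controls Anna mentions, masking fields by role and gating record lookups so staff can't run global searches on anyone, can be sketched in a few lines. This is a minimal illustration only: the role names, field names, record shape and the "treating relationship" check are all hypothetical, and a real system would enforce this at the database and audit layers, not just in application code.

```python
MASKED = "***"

# Which fields each (hypothetical) role may see in full: the "minimum
# necessary to do their job" principle expressed as data.
ROLE_VISIBLE_FIELDS = {
    "clinician": {"name", "dob", "diagnosis", "medications"},
    "billing": {"name", "dob", "medicare_no"},
    "receptionist": {"name", "next_appointment"},
}

def mask_record(record: dict, role: str) -> dict:
    """Return a copy of the record with fields outside the role's
    allowed set replaced by a visible mask."""
    visible = ROLE_VISIBLE_FIELDS.get(role, set())
    return {k: (v if k in visible else MASKED) for k, v in record.items()}

def lookup_patient(records: dict, user_role: str, patient_id: str,
                   treating_patient_ids: set) -> dict:
    """Gate access: instead of a global name search, the user must pass a
    test first, here that the patient is on their current treatment list."""
    if patient_id not in treating_patient_ids:
        raise PermissionError("no treating relationship recorded for this patient")
    return mask_record(records[patient_id], user_role)
```

With this shape, a receptionist looking up a record would see only the name and next appointment, and no user can browse records for patients they have no recorded relationship with, which is the partner/neighbour scenario described above.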
[00:30:50] Pete: Wow, there's a lot to cover. I'm sure there are many people listening and thinking there are probably a few things that could be applied in their business in the healthcare space, whether they're providing the service or the software that sits behind it.
I think it's evident that it's something that's important to everyone, from that single physio you mentioned right through to the big organizations that have a lot more structure and process to handle this stuff, and even they get it wrong a lot too. So having a dedicated focus on that, like you guys do, is particularly interesting.
Thank you for sharing your thoughts and insights on that particular topic.
[00:31:30] Anna: Great. Thanks for having me on the show.