Data Governance, Data Privacy & Enabling a Framework of Trust: An Interview with Eric Sutherland


In this expert interview, we speak with chief data strategist Eric Sutherland, Executive Director for Data Governance Strategy at the Canadian Institute for Health Information (CIHI), and Ivan Tsarynny, CEO and Co-founder of Feroot Privacy.

About Eric Sutherland

Eric has been called “a data guy with a personality”. A highly respected chief data strategist and technology leader with deep experience in healthcare and finance, Eric is currently facilitating a national framework for data and information governance for health in Canada. 

About Ivan Tsarynny

Ivan Tsarynny has centered his path on helping companies turn privacy compliance from a liability into a competitive advantage. A member of the GDPR Advisory Committee at the Standards Council of Canada, Ivan is dedicated to helping companies and organizations build a cohesive standard for privacy management.

In this 35-minute interview, we discuss:

  • why you need a data strategy
  • the difference between "data" and "information"
  • how to implement privacy controls
  • how PrivacyOps can help with cross-departmental operations
  • the role of de-identification in making data more valuable
  • what a perfect, data-protected world looks like

For the full experience, listen to the interview on SoundCloud (35 mins). Otherwise, read the interview highlights below!


Question: Eric, you describe yourself as a chief data strategist with a passion for unlocking the sustained value from trusted information. What does that mean exactly? Why does data need a “strategy” and how do you unlock the sustained value of information over time?


[ERIC] I would start by saying that data and information permeate all sectors and avenues of society. Increasingly, we find that in order to generate new value and new innovations, we require the effective use of trusted information. In order to do that, you need to (a) have access to information and (b) you need to trust it.

From those two perspectives, that’s why having a specific and conscious approach to how you manage your data and information is so essential as you move forward.

Interestingly, when you collect data and information, which are essentially facts and associated facts, they increase in value over time, because the facts you collect today can suddenly generate insights tomorrow.

So, the question is how do you maintain the fidelity and trust in that information as it moves between platforms and as the value changes through time?

For example, take the collection of data on sex. Five or ten years ago, many people would have said sex and gender were equivalent. Today we engage with sex and gender as two different concepts altogether, and frankly, neither field is binary in nature. So how do you (a) start to understand what values you want to collect? And (b) how do you propagate those values through legacy technologies that have been living in a binary mode for the last 50 years? Finally, how do you sustain the fidelity of the past data you've collected while integrating it with the new data you're starting to collect daily?

It’s a very hard thing to solve, but a data strategy helps you determine how to approach the data you’re collecting, how you're going to use it tomorrow, and how to integrate the changing nature of data into your overall data lifecycle.

Question: Eric, in the past, you’ve mentioned there’s a difference between data (i.e., bits and bytes) vs. information. Can you explain the difference between data and information? What's the same? And when do data governance and privacy come into the equation?


[ERIC] I tend to default to the term data governance because it's really easy to say, but realistically, I think of it as data and information governance, because I see them as distinct disciplines with a high degree of overlap.

Data governance is focused on the collection of data in the first place and ensuring the data adheres to standards over time through data governance processes. Information is the point where data becomes useful.

By that I mean, data is really kind of boring and dull on its own. It just kind of sits there and isn't actually useful at all. Data is only useful when it is presented through channels, whether those are digital channels, data dumps, or other means. At the point it is used, it turns into information.

The information governance angle is really focused on how we provide access to the data in a privacy-sensitive manner, and how we ensure that when we provide access, the data can be interpreted and understood in an appropriate way and, frankly, in the way for which it was intended in the first place.

Question: How do you practically implement data and information governance on a day-to-day basis? What are the challenges?


[ERIC] The most practical thing you can do is to have a living document that enables the use of information in a more effective manner.

From an operations perspective, it's really thinking about your own local environment: how your business operations, your technology operations, and your data and information operations interact together. Be conscious about the processes you actually have in place for the collection, storage, cleansing and distribution of the information. Who is accountable for making decisions about the fidelity of the information? Has it adhered to standards? Is it of quality? Is it timely? And then there is the release of the information: who has the authority to say yes, who you actually give the information to, and the conditions under which you will give the data.
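The "who has the authority to say yes, and under what conditions" part of this answer can be made concrete as a machine-checkable release policy. Below is a minimal sketch of that idea; the dataset names, roles and conditions are all invented for illustration, not taken from CIHI's actual processes.

```python
# Hypothetical release policy: for each dataset, which role may approve a
# release, and which conditions must hold on the request. Default is deny.
RELEASE_POLICY = {
    "patient_records": {"approver": "data_steward",
                        "conditions": {"de_identified", "signed_agreement"}},
    "wait_times":      {"approver": "program_lead",
                        "conditions": {"aggregated"}},
}

def may_release(dataset, approver_role, conditions_met):
    """True only if the right role approves and every required condition holds."""
    policy = RELEASE_POLICY.get(dataset)
    if policy is None:
        return False                      # unknown dataset: default deny
    return (approver_role == policy["approver"]
            and policy["conditions"] <= set(conditions_met))

print(may_release("patient_records", "data_steward",
                  {"de_identified", "signed_agreement"}))          # True
print(may_release("patient_records", "analyst", {"de_identified"}))  # False
```

The design choice worth noting is the default-deny fallback: any dataset or condition not explicitly covered by the policy is refused, which mirrors the governance stance of releasing only when accountability and conditions are documented.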

In the health space, there is also the discipline of de-identification to ensure that the data is sufficiently de-identified to render it private, while at the same time making sure that the data has usefulness for the person who's receiving it on the other end.

Finally, we want to make sure that the data is available to the people who actually produce the data in the first place — i.e., the citizens or, in my case, the patients. We want to make sure that they have a relatively easy path, without going through tons of hoops, to actually get access to their own data.

Question: Ivan, you’ve been working on developing a framework for Privacy Operations. Can you explain what PrivacyOps is and how it will help companies manage day-to-day privacy issues, as it relates to their data and information?


[IVAN] Thank you for asking. Privacy operations, or PrivacyOps, is becoming a new functional group and an emerging department that manages the full range of privacy operations across all silos, including marketing, sales, analytics, customer service, HR, back-office finance and all other data held within departments of a company. What PrivacyOps mainly does is bring together all areas of data governance policy and IT operations in the data silos. It operates across privacy and access governance, on-premise operations, and third-party applications that process the data.

PrivacyOps has one primary role: to drive competitive advantage through what we believe is privacy operational effectiveness and efficiency across the entire data lifecycle. Essentially, it helps transform privacy from a risk-avoidance activity into a driver of business, revenue and market share by becoming embedded into the fabric of the organization.

Question: Ivan, can you explain how PrivacyOps will improve one's competitiveness exactly? How will it make companies perform better?


[IVAN] Yes, PrivacyOps is a holistic approach that has three key benefits. First, it harmonizes and aligns the organization, because the PrivacyOps framework helps keep departments on the same page across marketing, sales, analytics and other areas. Second, it removes overhead and helps focus operations on key objectives and key performance indicators, so you don't have to redo and duplicate privacy controls across multiple departments. And third, it helps with planning, operations and predicting potential problems.

Question: Thanks Ivan. Eric, back to you, in your previous answer you mentioned the de-identification of data. That's something that we're hearing more about these days. What's happening right now in this area and where do you see the potential?


[ERIC] I think it's being increasingly recognized, even beyond the health sphere, that de-identification is a key way to make data useful.

The challenge we have is that many people feel their job is to keep the data private, and that without some form of de-identification, the best way to keep it private is to not give it to anyone. At the same time, we haven't really calibrated what is sufficiently “de-identified”.

Some technologies and tools are being developed in that space and some of the biggest thought-leaders in the world are here in Ontario, such as Dr. Khaled El Emam.

Basically, we’re recognizing that there are different levels and strata of de-identification, which are appropriate given both the type of data you're collecting and the potential uses you're putting it to. For example, if I'm publishing some sort of data on a public website, I want to make sure it's very, very unlikely that anybody will be able to re-identify individuals in it. Whereas if I'm giving it to somebody who's in a trusted circle, then I might apply a less stringent level of de-identification. But really, it's up to the participants in the process and the people who are actually involved, as well as the various other people along the data value chain, to assert what is sufficient in that context.
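The idea of strata of de-identification can be sketched in a few lines of code: the same record is generalized more aggressively for a public release than for a trusted-circle release. This is a toy illustration only; the field names, thresholds and levels are assumptions, and real de-identification (e.g. the risk-based methods Dr. El Emam works on) involves formal re-identification risk measurement, not just field truncation.

```python
def generalize(record, level):
    """Return a de-identified copy of a record at the given release level."""
    out = dict(record)
    if level == "trusted":      # recipient under a data-sharing agreement
        out["postal_code"] = record["postal_code"][:3]  # keep first 3 chars
        out["age"] = (record["age"] // 5) * 5           # 5-year age bands
    elif level == "public":     # open publication: generalize much harder
        out["postal_code"] = record["postal_code"][:1]
        out["age"] = (record["age"] // 20) * 20         # 20-year age bands
        out.pop("admission_date", None)                 # drop dates entirely
    out.pop("name", None)       # direct identifiers are always removed
    return out

patient = {"name": "J. Doe", "postal_code": "M5V 2T6",
           "age": 47, "admission_date": "2019-03-02"}

print(generalize(patient, "trusted"))
print(generalize(patient, "public"))
```

Even this toy version shows the trade-off Eric describes: the public-release copy is much harder to re-identify, but it is also less useful to the person receiving it.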

It ties back to understanding what your data governance models are around de-identification, what your tools and technologies are in order to actually effect that governance, and then also your business practices and processes to make sure that the proper levels of consent, practice, calibration and verification are substantiated in the end.

Question: Eric, in your bio, you say you want to enable a framework to ensure trust. Can you explain what you mean by that?


[ERIC] Trust has two different modes in the data life-cycle. First, is the data a fair representation of what was intended to be created in the first place? For instance, is it timely, is it accurate as was intended, is it at the right level of granularity for the people who are going to be using it? See my prior example of sex and gender.

A second aspect of trust is whether I am sharing this in a way that is appropriate and in line with the intention behind collecting the data in the first place. For example, perhaps I intended the data to be collected for weather predictions, not for building cities.

It's really a reflection of how you're actually sharing that information with others and whether that is in line with the expectations of the people who allowed the data to be provided in the first place. Part of that is ensuring that if I'm sharing identifiable information, people are comfortable with it; and if I’m sharing de-identified information, that it is still in line with ethical considerations.

I think that we are still very much in an exploratory phase about the ethical considerations for the sharing of information. For instance, when should individuals have the absolute right to say, “no, you cannot share that data with that individual or with that organization”? And where are the situations in which individuals actually have an obligation to share their data because it is in the public interest? There are lots of debates starting to emerge in this space.

Question: Dr. Ann Cavoukian talks a lot about the concept of a trust-deficit, i.e., that there's not enough trust in the institutions that hold our data to keep it safe and use it properly. Have you heard of this concept? And if so, do you agree?


[ERIC] I have not yet heard the term trust-deficit, but I do believe we cannot assume that we have trust. To the extent that we can't assume we have trust, I suppose that means there is a trust-deficit. Realistically, though, it's more that we need to build trust in our institutions by treating data responsibly. Part of that goes back to empowering people and listening to what they are actually saying about how they're interested in having their data used.

Also, the trust framework shifts over time. For instance, online banking 20 years ago was a brave new world; people were initially very nervous about logging in. Now, we give our banking credentials to third parties so that we can log into Revenue Canada and other providers. So there are ways for people to get more comfortable with using online interfaces. But we also need to acknowledge that not 100% of the population is totally comfortable with this new digital era, and account for that in whatever we do.

Question: Ivan, I know trust, namely consumer trust, is a topic you are passionate about. With privacy on the minds of a lot of consumers these days, have you seen the trust-deficit that Dr. Ann Cavoukian talks about? If yes, what can we do to address it?


[IVAN] Thank you, Lori. What is definitely happening is that trust and privacy are having a significantly greater impact on the purchasing decisions of consumers. A 2018 study by IBM shows that 75% of consumers will not buy a product from a company, no matter how great the products are, if they do not trust the company to protect their data and their information, which is a significant increase from the years before.

We are also seeing a privacy battle for purchasing decisions being played out, even between the tech giants. For instance, there was a lot of news recently about Apple taking a really firm stance on protecting consumer and user data, and other giants not being as firm as Apple. That is definitely making an impact on consumers' buying decisions.

Question: Eric, if you could wave a magic wand and figure this all out, what would the future look like? What would a perfect data governance allow us to achieve?


[ERIC] From my perspective, as an individual, I want to have some level of control over the data that is collected about me and contributed by me. I would love to see a world where my data is enabled in a way that allows advancements in scientific knowledge to be gathered from my own and other people’s data. And then, based on the insights they gather, they are able to quickly identify specific things that would benefit me as an individual and recommend that I do them. In health care, for instance, they would say: okay, based on your latest blood test, your weight and your height, your age and your diet, and what your smartwatch tells us about your heart rate and your blood pressure and other such things, here is a better diet for you. A personalized prescription that will materially improve your longevity and your ability to interact with your grandchildren.

The capabilities to get to that space are not that far away from a technical perspective. The challenges tend not to be centered around technology. A lot of the barriers to advancement are in how we better use data and information. I think we fundamentally need to focus our efforts in that space, and then we’ll be able to advance.

Conclusion & Key Takeaways

Takeaway 1: A data governance strategy helps you determine how to approach the data you’re collecting, how you're going to use it tomorrow, and how to integrate the changing nature of data into your overall data lifecycle.

Takeaway 2: Data governance is focused on the collection of data in the first place and ensuring the data adheres to standards over time through data governance processes. Information is the point where data becomes useful.

Takeaway 3: De-identification of data is being increasingly recognized as a key way to make data useful. But we haven't really calibrated what is sufficiently “de-identified”.

Takeaway 4: The most practical thing you can do for privacy operations is to have a living document that enables the use of information in a more effective manner.

Takeaway 5: The trust framework shifts over time.

Takeaway 6: There was a 2018 study done by IBM that shows 75% of consumers will not buy a product from a company, no matter how great the products are, if they do not trust the company to protect their data and their information — a significant increase from the years before.

What’s Next

  • Have questions about this article? Feel free to email
  • Are you a privacy expert & want to get interviewed in our series? Email
  • Want to learn more about Feroot and how we can help you operationalize privacy in a day-to-day way? Contact our sales team for a demo!
  • Subscribe to our blog using the form below to get updates on new expert interviews, webinars and more!

Lori Smith
Lori is marketing lead at Feroot Privacy