Brunswick Review: The Predictions Issue

The human cost of data unshared

A data entrepreneur inside the UN, Robert Kirkpatrick talks to the Brunswick Review

At a time of scandals involving misuse of data, Robert Kirkpatrick offers a prediction that many might find surprising. As Director of the United Nations Global Pulse team, Mr. Kirkpatrick believes that data scandals in the future might involve the failure of companies and governments to share and use public data for the good of mankind. The United Nations created Global Pulse in 2009 to bring the power of data-based prediction to policy making, rather than relying solely on surveys and statistics. Since then, Global Pulse has worked with the UN and governments to construct big data solutions to humanitarian problems.


Mr. Kirkpatrick, formerly a founder of the Humanitarian Systems Team at Microsoft and CTO of a nonprofit public health spinoff of Google, has seen his UN team expand from a few employees to 75, with offices in New York; Kampala, Uganda; and Jakarta, Indonesia. Brunswick’s Beatriz Garcia spoke with Mr. Kirkpatrick about how far his team has come, and his dreams for it moving forward.

Five years ago, would you have predicted the kind of growth you’ve had?

I’m a pathological optimist, so probably. But in the early days of this project, there was huge skepticism about all of it. You know: You’ll never get companies to share their data, you’ll never be able to protect privacy, the data won’t yield insights relevant to policy-makers, data scientists won’t want to work for the UN, and even if it does work, governments will refuse to use it. All of that has proven not to be the case.

What has been Global Pulse’s biggest success thus far?

We have played a seminal role in the creation of a global movement around finding safe and responsible ways to use big data and artificial intelligence for the public good.

Some eight and a half years into this, we can say that governments now take it for granted that these data have the potential to improve public services, early warning systems and crisis response.

We have companies actively seeking ways to put the big data they collect to work for the public good. We have regulators who’ve spent years focused on privacy and the risks of misuse suddenly recognizing a whole other set of risks associated with not using this data.

On the whole, has industry cooperated?

The mobile industry is the furthest ahead on this – we’re working not only with mobile operators around the world, but also with their industry trade organization, the GSM Association.

In 2017, the Association established an explicit strategy around putting anonymized mobile big data to work for the public good. There are currently 19 mobile operators committed to the GSMA’s program on big data for social good. They call it BD4SG. And it includes not only developing and piloting analytics solutions for response to natural disasters and disease outbreaks, but also developing data aggregation and anonymization standards, as well as an ethical code of conduct for data handling and use.

This is exciting for us because it shows how data philanthropy could work at an industry-wide scale rather than with just one partner at a time. We think GSMA’s approach should serve as a model for other industries with untapped, high-potential data, such as financial services, retail, e-commerce, transportation, manufacturing, agriculture, pharma and others.

Mobile is important because so many more people around the world these days are able to afford mobile phones. There’s a $40 Android smartphone now. And even basic feature-phone users produce a ton of real-time data, including in poor communities that don’t regularly access the internet. Subscribers move through cities and across national borders, sending and receiving calls and messages, topping off their airtime credit and transferring mobile money. You can take the data produced, aggregate and anonymize it at appropriate levels to protect privacy, and still essentially create real-time information on the patterns of mobility and economic activity of populations in places where you would be lucky to get survey or census data more than once every few years.
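
To picture the aggregation step he describes, here is a minimal Python sketch. The call-detail-record schema (subscriber_id, tower_id, timestamp) and the k-anonymity threshold are hypothetical assumptions for illustration, not an actual operator pipeline.

```python
import pandas as pd

# Assumed threshold: suppress any tower-hour group smaller than this
K_ANONYMITY_THRESHOLD = 50

def aggregate_cdrs(cdrs: pd.DataFrame) -> pd.DataFrame:
    """Roll raw call-detail records up to tower-hour counts.

    Expects hypothetical columns: subscriber_id, tower_id, timestamp.
    Returns counts of distinct subscribers per tower per hour, with
    small groups dropped so no individual can be singled out.
    """
    cdrs = cdrs.assign(hour=pd.to_datetime(cdrs["timestamp"]).dt.floor("h"))
    counts = (
        cdrs.groupby(["tower_id", "hour"])["subscriber_id"]
        .nunique()
        .reset_index(name="subscribers")
    )
    # k-anonymity style suppression before anything leaves the operator
    return counts[counts["subscribers"] >= K_ANONYMITY_THRESHOLD]
```

The point of the suppression step is that only coarse counts, never individual trajectories, are shared downstream.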

Mobility information has incredible predictive power. You can see through the use of this data how many people have been displaced by a natural disaster, hour by hour, and where they’re gathering in need of assistance. You can see when they’re migrating across borders in unusual numbers in response to the impacts of climate change, economic opportunity or conflict.

When you combine mobility with other data such as rainfall and laboratory diagnostics, you can build epidemiological models that predict the time and location of outbreaks of diseases like cholera, dengue, seasonal influenza, malaria, even Ebola, allowing public health departments to improve prevention measures. With a risk map of malaria, you know where to distribute mosquito nets, alert clinics and restock drug supplies in pharmacies.
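
To make the modeling pattern concrete, here is a toy sketch of a district-level outbreak-risk classifier combining mobility, rainfall and diagnostic features. The features, synthetic data and scikit-learn model are illustrative assumptions, not Global Pulse’s actual epidemiological models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-district, per-week features:
# [inbound travelers, rainfall in mm, lab-confirmed cases last week]
X = rng.random((500, 3)) * np.array([10_000.0, 300.0, 50.0])

# Hypothetical label: did an outbreak follow? (synthetic, for illustration)
risk_score = X[:, 0] / 10_000 + X[:, 1] / 300 + X[:, 2] / 50
y = (risk_score + rng.normal(0, 0.2, 500)) > 1.5

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# A "risk map": rank districts by predicted outbreak probability so that
# net distribution, clinic alerts and pharmacy restocking can be targeted.
outbreak_probability = model.predict_proba(X)[:, 1]
highest_risk_districts = np.argsort(outbreak_probability)[::-1][:10]
print(highest_risk_districts)
```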

It turns out that how people consume mobile services has very powerful predictive value. We’ve seen in Uganda that you can generate a nearly perfect proxy for the Multidimensional Poverty Index, which is a standard measure of poverty assessed periodically through the national census, simply by looking at the weekly amounts people pay on prepaid services.

Across much of Africa and the developing world, people prepay for mobile services using scratch cards, where you can buy a dollar’s worth or 10 dollars’ worth of airtime credit. Variables related to airtime consumption, it turns out, predict both overall household consumption and food consumption with close to 90 percent accuracy – an indication of the important role that access to mobile services plays in the lives of rich and poor alike. Because mobile money is spreading all over the developing world now, and people are using text message-based services to buy and sell goods, being able to spot a 10 percent week-over-week drop in a community’s average airtime spending allows you to predict that it is going to be under severe economic stress in the near future. Knowing this, you can better target cash transfers, school feeding programs or other social protections that prevent harm from happening in the first place.
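
The early-warning signal he describes is simple enough to sketch directly. The series of weekly averages below is hypothetical; the 10 percent threshold comes from the interview.

```python
DROP_THRESHOLD = 0.10  # 10 percent week-over-week decline, per the interview

def economic_stress_alerts(weekly_avg_spend: list[float]) -> list[int]:
    """Return the indices of weeks whose average airtime spend fell at
    least DROP_THRESHOLD relative to the previous week."""
    alerts = []
    for week, (prev, curr) in enumerate(
        zip(weekly_avg_spend, weekly_avg_spend[1:]), start=1
    ):
        if prev > 0 and (prev - curr) / prev >= DROP_THRESHOLD:
            alerts.append(week)
    return alerts

# Hypothetical series of a community's average weekly spend, in dollars
print(economic_stress_alerts([4.00, 4.10, 3.60, 3.20, 3.10]))  # -> [2, 3]
```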

We ended up leading development of guidelines on big data privacy, data protection and ethics that have now been adopted by 32 UN agencies

Outside of mobile, are corporations seizing the opportunity?

They’re starting to. We’ve used anonymized debit card transactions to measure the impacts of a natural disaster and how long it takes different communities to recover. We’ve developed an algorithm that uses marine traffic data and deep learning to detect refugee rescues by commercial vessels in the Mediterranean. We’re using Twitter to map xenophobic speech about refugees along migration routes across Europe. We’re now exploring how to combine retail transactions with other data to predict rates of non-communicable diseases. 

For a company to do this, though, is not without risk. There’s reputational risk and there’s regulatory risk. And businesses are still quite hesitant. Customers are understandably uneasy about who is doing what with their data, and there’s no regulatory playbook for anonymization, sharing and usage of the most relevant data sets.

What I can tell you is that companies find the UN a pretty good sandbox for exploring the opportunities. We have international mandates from the very governments that regulate them, not only around supporting progress toward the Sustainable Development Goals and more effective humanitarian action, but also around human rights – which includes privacy. Inaction isn’t an option for us. We genuinely have to find ways to use this data for good, and we have to do it with a first-do-no-harm approach.

What we see today is companies around the world in different industries experimenting with this. It’s piecemeal. It’s very spotty. It’s still fairly embryonic, outside of the mobile industry – a few banks, a few grocery store chains, logistics companies.

It will take leadership to move this forward. If we can get five or 10 of the top financial services providers in the world to join a global alliance to look at this, for example, I think others in the industry would follow. Because at the end of the day, this is about giving back to your customers in a way they didn’t even know was possible – in ways that can change lives and save lives.

There’s actually a business case for doing this. If you offer services in a developing country, for example, and a drought causes affected communities to have to sell their assets, curtail their use of services, and even migrate elsewhere, there goes your market.  What if it turns out that changing patterns in how they use your services show up in your data warehouse six months before the real hardship begins?  Insights from your business data could have been used to help your current and future customers cope with the impacts of that drought.

There’s a sense in which you are actually creating business risk by failing to ensure that insights from your data are used to inform policies that boost the resilience of your market.

In addition to the CEO, whose ear do you seek at any large corporation – the CIO?

It’s still all about the CEO, and here’s why. Companies have their corporate responsibility and philanthropic activities. They have their legal and privacy responsibilities. And then they have teams of data scientists doing analytics around marketing, operations and product development. They think of all those as completely separate. The idea that this most sensitive data that’s the core of your business, and your best competitive asset, could be processed in a secure and privacy-preserving way that has a huge positive impact on society and doesn’t open you up to an attack by your competition is completely disruptive. Until the CEO has that sort of a-ha! moment, nothing else can be set in motion. CEOs are the only ones who can make the call. Once they do, everybody in the company gets so fired up. It’s inspiring. They don’t want to just provide access to data, they want to share their knowledge and the tools they have developed to analyze the data. It’s a serious boost for morale, recruitment, retention. But the CEO has to be the starting point.

Does the public know about lost opportunities involving use of data?

The average person on the street has no idea how much data they produce. They still don’t understand how it’s being bought and sold and used for everything from national security purposes to profiting companies they’ve never heard of. They do know they are largely in the dark, and that they have little say in the matter, and they aren’t too happy about it.  Trust is at a low point.

And the last thing they could possibly know about their data is that there’s also an opportunity cost they’ve been paying every day by not having it used in ways that could impact the health of their families and their communities. The public awareness aspect of this is something that we are very keen to take on.


Do the data privacy scandals of the last year affect your argument for socially beneficial data sharing?

With Cambridge Analytica and the evolution of the European legal landscape around privacy, the issue is back in the news.

A big part of our work here at Global Pulse is actually about privacy. Given the nature of the work we do, we ended up leading development of guidelines on big data privacy, data protection and ethics that have now been adopted by 32 UN agencies.  We founded and co-chair the UN’s Privacy Policy Group, which is working to develop the first set of privacy principles for the entire UN system.


People used to say that big data is the new oil. I think it’s actually a bit like nuclear energy. It’s turned into yet another extractive industry. It’s expensive to work with. Many of its benefits aren’t reaching those who stand to gain the most. And in its raw form – mixed with your personal information – this stuff is dangerous. It can leak, and it can contaminate. But if you have the science to understand it, and the engineering needed to store and process it safely, you can power a whole new approach to early warning, disaster response and crisis resilience. That’s game-changing.

We recognize that nowhere in the world do existing privacy laws adequately protect people from the unique risks posed by big data. This stuff is difficult to anonymize. We don’t think this is simply about more regulation, though. We need to rethink how we manage risks. There are ethical obligations to consider. How can we better protect data privacy without stifling the data innovation needed to help vulnerable people, learn to live sustainably and create a more prosperous future for everyone?

Here at the UN, Secretary-General António Guterres has just appointed a new high-level panel on digital cooperation that has pulled together privacy experts, policy experts, world leaders, technology experts and ethicists to begin looking for a new way forward. I can tell you that right up to the highest levels of this institution, people are looking at the opportunities to transform society with responsible uses of artificial intelligence, and they are thinking about the types of ethical and legal frameworks needed to prevent harm.

Everybody’s excited about the new business models for Silicon Valley, while worrying about people in the US or Europe losing jobs to automation. But we also need to be thinking about what AI and automation in agriculture and healthcare could do to eliminate hunger and disease in the poorest countries, and we need to address the likely impacts of massive job losses in fragile, conflict-prone states.

You have to balance the risk of misuse of big data against the risk of missed use

How do you respond to the argument that sharing data is inconsistent with the protection of privacy?

There’s a growing recognition of a paradox. If we focus only on privacy, then we don’t innovate and find ways to use the data. You have to balance the risk of misuse against the risk of missed use. The opportunity cost of not using this data for the public good is really quite high. We are incurring it every day, and everywhere, largely unbeknownst to anyone.

As citizens become aware that the data they produce has all of this potential to increase the efficiency, effectiveness and accountability of government, you’re going to see political pressure on policymakers not only to better protect people against privacy risks but also to ensure that whenever this data can be safely and responsibly used for the public good, it is. There has to be accountability around non-use, not just accountability around misuse.

Public good applications of predictive analytics won’t ever achieve scale and sustainability, though, if we view this opportunity purely through the lens of philanthropic incentives. Yes, at the extreme humanitarian end of the spectrum, real-time mobility data can help you find people after an earthquake. But nearer to the middle, you find plenty of opportunities to serve the public good with solid business cases for public sector markets: knowing how people move through a city can help municipal governments improve transportation systems, assess the dynamics of tourism, determine the locations for new schools or hospitals, and estimate pollution exposure. And at the other end, you have purely commercial business models, like deciding where to build a new store.

Have you witnessed indignation over the missed use of data?

A few years ago, at a World Economic Forum event, I was running a breakout session with about 15 CEOs. I raised the scenario of a hypothetical industrial accident in which a toxic plume of smoke was drifting toward suburban neighborhoods, and I asked them, “OK, who has data produced by their business model that could help first responders save lives?”

The CEO of a large insurance company raised his hand and said, “We know where all the people in wheelchairs are and all the people with asthma. For that matter, we know where all the elderly people are, and all the children.”

The initial response was along the lines of, “Wow. So you could help get those people out first?” Then someone else said, “What about privacy?”

So they started brainstorming and soon came up with the idea of a mobile app that shows some houses as red, others as orange, others as yellow. It wouldn’t disclose why houses are red. The firemen don’t need to know your medical condition. They just need to know to go to the red houses first, then the orange, etc. They could get the most vulnerable members of the community out of harm’s way first, without compromising privacy.
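
The idea the group landed on can be sketched in a few lines: sensitive risk factors are collapsed into a single color tier, and only the tier is ever shown to responders. The risk factors, weights and cutoffs below are illustrative assumptions, not the design of any real app.

```python
from enum import Enum

class Tier(Enum):
    RED = 3     # evacuate first
    ORANGE = 2
    YELLOW = 1

def triage_tier(wheelchair: bool, asthma: bool, elderly: bool, children: bool) -> Tier:
    """Collapse sensitive risk factors into a single color tier.

    Only the tier leaves the data holder's systems; the reasons stay private.
    """
    score = sum([2 * wheelchair, 2 * asthma, elderly, children])
    if score >= 3:
        return Tier.RED
    if score >= 2:
        return Tier.ORANGE
    return Tier.YELLOW

# First responders would see only a color per address, never a diagnosis
print(triage_tier(wheelchair=True, asthma=False, elderly=True, children=False).name)  # RED
```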

Initially, everyone was nodding and saying, “Yeah. That could work!” Then there was this pause. And they started asking, “Wait a minute: Firemen don’t already have that information? They just start at one end of the street?”

And then they got angry. Once they realized that there was life-saving information out there that wasn’t being used, even though privacy could be protected, the moral dilemma collapsed. What was left was a situation as plainly unacceptable as walking into a hospital with tuberculosis and being denied antibiotics.


Beatriz Garcia is a Director in Brunswick’s New York office. She specializes in global media relations, reputation and brand management, with a focus on financial services.
