
Data Integrity

“Work on the assumption of compromise, either technical or human,” advises Brunswick’s Paddy McGuinness. “Be prepared and expectant without being fearful.”

For most of my career, data integrity was largely a technical matter that IT folk talked about when building and securing databases. Together with process integrity, it remains vital. But increasingly, data integrity is becoming everyday parlance, a term and topic with growing reach and relevance.

Take the Bodleian Library in Oxford, for instance, which was founded in 1602 and is famous as a setting for the library scenes in Harry Potter. I was hosting a US Cabinet member at the library and our fairly traditional-looking guide talked us through the challenge of holding a copy of every book published in the UK. They’d considered digitization, but problems with “data integrity” meant that the digital versions could not replace the hard copies. Those copies still had to be stored—at considerable expense—even though most were never looked at. The printed page, it seems, has more integrity than data on a server.


Then there was Cyprus, a country close to the conflict in Syria. A Syrian air-defense missile recently missed its target and fell on a Cypriot mountainside, bringing the battle close to home. A Cypriot government official told me of increasing concern about the loss of reliable GPS data in Cyprus’s sea area and airspace. It seems that the Russians (and thus the Syrians) distort GPS data to impede reconnaissance and to complicate or even prevent targeting by Western weapon systems. The effect is felt in the positioning, navigation and timing systems integral to so many transport, communications and industrial systems. The official told me “data integrity” had been lost.

It’s a stretch to describe computational propaganda or “Fake News” as a data integrity issue—that presupposes the other “innocent” data we receive, curated via news outlets or social media, has integrity.

There are already plenty of well-known online threats to data integrity, such as links that take us to pages that appear to be from a trusted provider (your bank, say) but are actually fake, while other threats are only starting to surface. We are still learning how to manage “deepfakes”: audio or video content altered so convincingly that it is difficult to tell it is inauthentic.

The vulnerability is greater still if the telephone networks we connect to are not themselves secure. Imagine the surprise in Iran when users accessing web pages through 3,500 switches found that, instead of receiving the results of their search, they saw a fluttering Stars and Stripes and the message, “Hands off our elections.” This is the kind of network vulnerability companies and governments are trying to prevent when they warn that network equipment is not yet resilient. That was a pretty clumsy attack. Consider the effect if the attacker, rather than replacing the whole searched-for page, had altered one or two items in a trusted news source, say the BBC or Reuters, and served the doctored article to your phone.

Many of these emerging threats to data integrity touch global organizations, which is why the term has made its way to the boardroom.

A client recently asked me what I thought of the “integrity” of the data on which their board bases its data and cyber resilience decisions. Like so many executive committees and boards, they have a “data and cyber” agenda item at every meeting and plenty of reporting on performance against controls and on emerging risks. They have RAG-rated (red-amber-green) charts that non-executives dissect, complaining that the risk and mitigation data is presented differently at the other boards on which they sit. There are occasional blood-chilling briefings on threats from former national security officials like me, or sessions in which executives recount what it was like to endure a catastrophic cyber event.


My client complained that while they knew what was happening on their networks, they didn’t really know what was happening elsewhere. They had bought threat intelligence services that scrub the darknet looking for compromised data. They had signed up for government- and industry-run information-sharing partnerships in the jurisdictions where they operate. But still they felt uneasy about what they didn’t know.

As they should: The position is likely worse than they understand. What they know is what their existing controls illuminate—what might be termed the “known ambient threat.” The chance of those controls being ahead of emerging threats and malicious insiders is quite small. Board members typically look for external tests of their internal controls, and cite what happened in company X or what security service provider Y is saying. They are especially influenced by public reporting of major data and cyber events (and the increasingly large regulatory fines).

But this approach falls short; not all incidents are reported or become public. A quick scrub of the many major cyber incidents that Brunswick has handled for clients this year reveals that in the UK and Europe, fewer than 50 percent of clients voluntarily went public with the breach, yet 60 percent of the incidents ended up being made public. In the US, roughly 30 percent wanted to go public, but 80 percent eventually became public.

In the UK and Europe, around 75 percent of clients had to report the incident to some regulatory body, while roughly a third claimed against insurance policies. In the US, roughly 60 percent reported to a regulator (including state attorneys general), and more than 80 percent claimed against insurance policies. In other words, the picture painted by regulators, the media and insurers is incomplete.

Even when an incident becomes public, the full nature of what happens is rarely revealed, either because of investigatory or legal constraints or simple corporate diffidence. This may change if mandatory breach reporting is required by law or if cross-sectoral data sharing at machine speed becomes standard—but that’s nowhere near the case today.

And notice how strikingly the insurance claim figures differ between the US and Europe. The European market is less developed, with the consequence that there are too few claims in Europe to build a reliable actuarial risk model. We just don’t know how great the risk is.

Where does this leave my client? My advice was to work on the assumption of compromise, either technical or human, and build up organizational resilience against the potential fallout—to be prepared and expectant without being fearful. This gap between the reality of the cyber risk and what is planned for will close eventually, just not any time soon.

Paddy McGuinness is a Senior Advisor with Brunswick. He was the UK’s Deputy National Security Adviser for Intelligence, Security and Resilience, where he advised the Prime Minister and National Security Council on policy and decision-making on homeland security issues, leading on the UK’s cyber strategy and programs.

