The truth is under threat from “extreme reality manipulation.” Aviv Ovadya, prophet of a looming “infocalypse,” speaks to the Brunswick Review about how business can fight back
Video tools now on the market allow one person’s face or voice to be replaced by another’s, creating what are known as “deepfakes” that can fool most viewers. Audio tools can replicate a person’s voice from samples. Anyone can be made to say anything.
Such hoaxes could be used to destabilize delicate situations like trade negotiations or criminal investigations, delegitimize reputable sources, or slander celebrities or political candidates. They could also wreak global havoc: Imagine a convincing but fake announcement by the President of the United States of a nuclear missile strike against North Korea.
In the spring of 2016, before a tsunami of “fake news” roiled the US presidential race, only a few saw the impending danger. A young MIT alumnus and Silicon Valley consultant named Aviv Ovadya was one.
“It became clear we were at an inflection point,” says Mr. Ovadya, now Chief Technologist for the University of Michigan’s new Center for Social Media Responsibility and a Knight News Innovation Fellow at Columbia University’s Tow Center for Digital Journalism. While noting that much good can come from these innovations, Mr. Ovadya compares the growing threat to that of nuclear weapons, and sees society’s awareness as myopic – “a one-inch view of the outside through the windshield” of a car careening out of control.
Over the past two years, his warning of a looming “infocalypse” has drawn attention, and Facebook, Twitter, Google and other platforms have put more resources into preventing malicious use of their products. The next step, Mr. Ovadya says, is a commitment to massive investment to develop countermeasures, and to allocate “nimble money” – talent pools and shared resources across organizations that can be deployed quickly as the fast-moving technology creates new threats.
Can you tell me a little about your background?
At MIT, I studied computer science. But a big part of the conversation in the community around me was about the impact of technology on society. During that time, I came to terms with the idea that maybe technology isn’t an unqualified good. It can change the way the world works; it can put you into a better world and it can put you into a worse world.
That was pretty formative – realizing that there’s a trade-off between the efficiency that comes from technology and resilience, which is often lost as a result. Technology can make the world much more fragile. I went on to get my Master’s at MIT and then spent a bunch of time in Silicon Valley, as a software engineer and product design consultant. But on the side, I was working on understanding some of these systems around technology and society.
About two years ago now, it became clear that we were at an inflection point. The means of distributing information were being manipulated, co-opted and optimized in a way that was really harmful for democracy, for public discourse, for health – for all these things that we clearly care about in society.
Not only was it very bad already, but it was going to get much worse very quickly. And nothing was being done that would stop it from getting worse. That was what triggered me into action. That isn’t acceptable. That isn’t a world I want to live in. So, I decided to focus my energies to see what I could do about it.
What kind of reception did you get when you started to spread the word about this in 2016?
Probably the most common response was, “That’s not actually a problem. Prove to me that it’s a problem.” You still hear some of that: “This has always been true. Nothing’s new.” But there’s a lot of evidence to the contrary at this point.
It’s sort of like saying, “Nukes aren’t really a problem because there was always war.” Well, they actually are. They changed the game in a way that wasn’t possible before and as a result, you need to change the entire face of diplomacy, among other things. It’s true, nukes don’t do anything new – you could use a spear to kill someone. But at some level, it’s definitely new – in terms of the scale and scope, for instance.
Do you think the impact of “fake news” in the election helped prove your point?
Yes, there’s a lot more interest – whether or not there’s actually been effective investment. But that’s starting to happen and it’s good to see. It’s still too little, too slow. It’s a big ship, but once you’ve decided you want to move it, it can be moved quickly.
There are organizations that have invested single-digit millions of dollars, where tens of millions actually need to be invested by many different organizations across the board – and billions across the ecosystem – to address these threats as they continue to spiral.
Likewise, it’s good to see some of the platforms taking this seriously. Even people at the very top in some cases are owning up, saying, “Hey, we didn’t do a great job.” The more that happens, the more likely it is that there will be significant progress.