May 15
I have empathy for writers of futuristic science fiction. It’s almost impossible to correctly envision the future.
You may have noticed, for instance, that 2001 came and went without the discovery of a monolith on the moon.
But last week, Google’s I/O conference apparently conjured up visions of 2001: A Space Odyssey’s rogue computer HAL for a few attendees and bloggers.
It began when Google CEO Sundar Pichai played a recording of a phone call to a salon to book a haircut appointment. Normally such a conversation would be unremarkable, and this one would have been, too—had not the caller been an artificial intelligence app called Duplex. Astonishingly, the salon employee who took the call couldn’t tell. Duplex handles conversation adeptly and naturally, even tossing in convincing ums and ahs, hesitations, and tonal variations. Even after being clued in, audience members had a hard time believing the caller wasn’t a real person.
To hear Duplex make the salon appointment, click here. To hear it make a dinner reservation, click here.
It’s a marvelous technological achievement with untold potential to be useful. Yet, perhaps predictably, no small number of people were creeped out. NPR reported:
While Google wowed developers with the realness of the bot’s speech, many observers immediately took issue with how the technology apparently tricked the human on the line.
“Google Assistant making calls pretending to be human not only without disclosing that it’s a bot, but adding ‘ummm’ and ‘aaah’ to deceive the human on the other end with the room cheering it… horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing,” tweeted Zeynep Tufekci, a professor at the University of North Carolina at Chapel Hill who studies the social impacts of technology.
“As digital technologies become better at doing human things, the focus has to be on how to protect humans, how to delineate humans and machines, and how to create reliable signals of each—see 2016. This is straight up, deliberate deception. Not okay.”
Entrepreneur and writer Anil Dash agreed: “This stuff is really, really basic, but: any interaction with technology or the products of tech companies must exist within a context of informed consent. Something like #GoogleDuplex fails this test, _by design_. That’s an unfixable flaw.”
To which I say, oh come on.
Development is in the early stages. If the thought of unknowingly speaking with a machine really is troubling, and I’m not convinced it need be, it’s a simple matter to program Duplex to open a conversation by identifying itself, as the sketch below suggests. Let’s not lose sight of the Wow! by inventing pseudo-moral dilemmas.
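If it helps to see how small a change that would be, here is a minimal sketch in Python. Everything in it is hypothetical and invented for illustration; the `open_conversation` function, the `say` callback, and the disclosure wording reflect nothing about Google’s actual Duplex code.

```python
# A hypothetical sketch, not Google's actual Duplex API. It only shows that
# opening every call with a self-identification line is a trivial addition
# to whatever code starts the conversation.

DISCLOSURE = ("Hi, this is an automated assistant calling on behalf of a "
              "customer. This call may be recorded.")

def open_conversation(say, request):
    """Start every outbound call with a disclosure, then state the task."""
    say(DISCLOSURE)   # identify as a bot before anything else
    say(request)      # then proceed with the actual request

# 'say' would be the speech synthesizer; print stands in for it here.
open_conversation(print, "I'd like to book a haircut for Tuesday at 10 a.m.")
```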
Last year, another AI experiment was blown out of proportion in the media. Perhaps you heard about it: Facebook shut down a pair of AI bots because they had invented their own secret language and, for all we knew, were plotting humankind’s overthrow. At least, that’s what you might have thought from irresponsible headlines and stories. In fact, Facebook was conducting an experiment to see if the AIs could manage a simple negotiation. In the process, the bots improvised their own shorthand, which, when you think about it, is remarkable. Facebook stopped the experiment not because a program whose bots it couldn’t understand was dangerous, but because it was useless.
A few weeks ago I wrote about the Cambridge Analytica situation, a case in which it’s too early to tell if fears and accusations are grounded, hyperbolic, or a bit of both. In the meantime, Mark Zuckerberg appears to be getting serious about privacy. I suppose being called before a Senate panel can do that to a person. As I write, Facebook staffers are busily digging through a mountain of apps—a fishing expedition to find out who has been on a fishing expedition, as it were. Facebook’s Newsroom page just announced,
Facebook will investigate all the apps that had access to large amounts of information before we changed our platform policies in 2014—significantly reducing the data apps could access. [Zuckerberg] also made clear that where we had concerns about individual apps we would audit them—and any app that either refused or failed an audit would be banned from Facebook.
The investigation process is in full swing, and it has two phases. To date thousands of apps have been investigated and around 200 have been suspended—pending a thorough investigation into whether they did in fact misuse any data. Where we find evidence that these or other apps did misuse data, we will ban them and notify people via this website. It will show people if they or their friends installed an app that misused data before 2015—just as we did for Cambridge Analytica.
And not just Facebook. Chances are you, like me, have lately received privacy policy statements or reassurances from search engines, social media sites, financial institutions, and others. Try clicking on a link that takes you to Twitter, and you’ll be asked to click OK on a screen that says, “By playing this video you agree to Twitter’s use of cookies. This may include analytics, personalization, and ads.”
Where will it end? I’m expecting a privacy statement from the pizza delivery dude any day now.
I don’t know whether all of these preventive disclosures are overkill, but if they are, I don’t mind. Something about “safe” scoring a little higher than “sorry” on the Preferred Outcome Scale.