In Terminator, killer robots from the future send one of their number back in time to our present, intending to kill a rebel leader before he can overthrow their mechanical hegemony. The “Terminator” unit they deploy, so memorably played by “Aaaaahnold,” was quickly catapulted into movie villain stardom, joining the likes of Darth Vader and Hannibal Lecter. This wasn’t just because of his memorable accent, but also because of the fear he embodies – the age-old fear of a wolf in sheep’s clothing. A fear of imposters has long permeated the human consciousness, and is speculated to be the reason why so many of us have adverse reactions to things like clowns and zombies: things that almost look human, but aren’t.
We’re halfway into 2017, and though killer robots living among us haven’t quite materialized, society is finding itself up to its ears in mechanical imposters. It started simply – that nice lady on the other end of the telephone when your number is out of service wasn’t really a helpful woman, but a recording automatically tripped by your telephone bungling. Before long, fraudulent emails of dubious repute promising penis engorgement services began clogging our inboxes, courtesy of helpful humans like “Kerry Chan” and “Aron Washington.” These early attempts to fool us were laughable, even a bit adorable in their sheer ineptitude. But computers adapt fast. Soon, innocent grandmas were shelling out their credit card numbers to Nigerian princes. Call a technical support number today, and there’s a good chance that you’ll be redirected through an automated network to talk to a support worker. If you’re really unlucky, that support worker will be nothing more than a few lines of code and some MP3s strung together to create helpful, 100% human worker “Hugh Mann.”
AI algorithms are constantly adapting and mutating, and even now it’s getting hard to separate the phonies from the real humans on the other end. The rapid pace of AI evolution can best be seen in the evolution of their speech. We are well past the days of “Danger, Will Robinson”; modern-day artificial intelligences are canny enough to draw on varied speech patterns, even taking care to insert st-stutters, er, ah, fumbles, and other, um, hallmarks of modern speech that have long eluded silicon-based diction.
Feeling a bit overwhelmed at a potential future of nothing but talking to circuits, wires, nuts, and bolts? Craving some human interaction? Look no further than the recently revealed “Anti AI-AI,” a proof-of-concept design by Australian technology firm DT. The idea is to fight fire with fire: a wearable computer of your own that determines who’s real and who’s not. Just like the programs that aim to bamboozle you, the Anti AI-AI is closely tuned to the very algorithms designed to effortlessly mimic human speech.
If it senses that something is amiss – for instance, if the voice’s patterns are too regular, or there’s not enough of a pause between certain phrases – the computer will use a thermoelectric plate to literally send a chill down your spine, alerting you that something’s up without tipping anyone else off. Already, the device has pulled off an impressive feat: in a test run, it was able to distinguish between a recording of the real Donald Trump and an AI-generated “imposter” created by stringing out-of-context words together. (Perhaps it listened for the word “covfefe.”) In the future, DT hopes to refine the rather crude and cumbersome prototype into something that’s sleek, wearable, and discreet.
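DT hasn’t published how its prototype actually works, but the “too regular” tell described above is easy to illustrate. Here’s a toy sketch in Python – the function name, threshold, and sample numbers are all invented for illustration – that flags a speaker when the pauses between their phrases are suspiciously uniform:

```python
import statistics

def looks_synthetic(pause_durations, min_jitter=0.05):
    """Toy heuristic: human speech pauses vary in length, while
    machine-generated pauses tend to be eerily uniform. Flag the
    speaker if the standard deviation of pause lengths (in seconds)
    falls below a small jitter threshold."""
    if len(pause_durations) < 2:
        return False  # not enough evidence either way
    return statistics.stdev(pause_durations) < min_jitter

# A human caller: pauses of irregular length between phrases
human_pauses = [0.31, 0.58, 0.22, 0.91, 0.40]
# A bot: nearly identical pauses every single time
bot_pauses = [0.50, 0.51, 0.50, 0.49, 0.50]

print(looks_synthetic(human_pauses))  # False
print(looks_synthetic(bot_pauses))    # True
```

A real detector would of course need far subtler cues – intonation, spectral artifacts, phrasing statistics – but the principle is the same: measure where the mimicry is *too* perfect.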
The potential applications of this technology are vast. Imagine living in the future. You call a tech support line, get a friendly voice on the other end, feel a tingle in your vertebrae… and you hang up without any regrets.