A Google self-driving car on the streets of Mountain View, California, 2016.

Wikimedia Commons

Creative Computers

Computers can give you the weather forecast and call you a cab home. But can they tell you a story?

In the past year I’ve kind of become the office AI guy. Part of that role has meant dispelling the most lurid fears that accompany advances in artificial intelligence. You know the ones: Skynet, HAL 9000, Cylons. All of these science-fiction robots rebel against humanity and wreak death and destruction. It’s a fear that has gripped technology mogul Elon Musk and others in the Tesla-driving demographic, but, in fact, most AI researchers agree that we won’t have to worry about killer robots anytime soon.

As I recounted in my article “Thinking Machines,” the types of programs being built today are pretty good at performing narrow tasks, such as driving a car or playing chess, but operate at nowhere near the level of general intelligence and common-sense knowledge humans possess. They can’t, for instance, write an exceptionally creative article such as this one. Right?

Not so fast. Enter Mark Riedl, an AI researcher who since the early 2000s has been trying to develop an algorithm with what he calls “narrative intelligence.” And he’s getting scarily close to succeeding.

Riedl’s latest program is Scheherazade, named after the storyteller-turned-queen from One Thousand and One Nights. In order to improve and “learn,” the most commonly used types of AI programs require large data sets. To perfect his storytelling program, Riedl crowdsourced those data sets (in this case, stories) from the Internet, specifically stories about a couple going on a date to the movies. As a result, the stories Scheherazade produces usually involve one person picking up the other in a car, watching a movie, and sometimes kissing at the end. These actions are selected and ordered in a sort of flowchart of events with a beginning, middle, and end, as in the sketch below. Riedl helps the program by weighting more heavily the event orderings that show up most often, such as arriving at the theater and then buying tickets. Outliers in the data create the unexpected developments that hopefully make the stories worth reading.
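For the curious, here is a minimal sketch of the general idea behind such a flowchart of events. It is not Riedl’s actual Scheherazade code, and the event names and example stories are made up for illustration: it simply counts how often one event precedes another across contributed stories and then tells a new story by picking events whose usual predecessors have already happened.

```python
# Toy "plot graph" sketch (illustrative only, not Scheherazade itself):
# learn typical event orderings from crowdsourced example stories,
# then generate a new story that mostly respects those orderings.

from collections import Counter
from itertools import combinations
import random

# Each crowdsourced story is a sequence of simple events (hypothetical data).
stories = [
    ["drive_to_theater", "buy_tickets", "watch_movie", "kiss"],
    ["drive_to_theater", "buy_tickets", "buy_popcorn", "watch_movie"],
    ["buy_tickets", "drive_to_theater", "watch_movie", "hold_hands"],
]

# Count how often event a appears before event b across all stories.
precedes = Counter()
for story in stories:
    for a, b in combinations(story, 2):
        precedes[(a, b)] += 1

# Keep an ordering edge a -> b only if a comes first more often than b does.
edges = {(a, b) for (a, b), n in precedes.items() if n > precedes[(b, a)]}
events = {e for story in stories for e in story}

def generate_story(seed=None):
    rng = random.Random(seed)
    told, remaining = [], set(events)
    while remaining:
        # An event is "ready" once every event that usually precedes it is told.
        ready = [e for e in remaining
                 if all(a in told for (a, b) in edges if b == e)]
        if not ready:            # tie or cycle: fall back to anything left
            ready = list(remaining)
        nxt = rng.choice(ready)  # outliers in the data supply the surprises
        told.append(nxt)
        remaining.remove(nxt)
    return told

print(generate_story(seed=1))
```

Run with different seeds, and the common backbone (drive, buy tickets, watch the movie) stays put while the rarer events (popcorn, hand-holding, a kiss) drift around it, which is roughly why Scheherazade’s dates feel familiar but not identical.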

Crowdsourced data does have its downsides; when Microsoft released Tay, a Twitter “chat bot” that internalized everything people tweeted to it and used those phrases to respond to others, it went from cheerily writing “humans are super cool” to hateful racism and misogyny in less than a day.

In the examples Riedl showed at a recent conference for science writers, Scheherazade’s stories veered from boring to straight-up bizarre. Sometimes the characters just held hands. Other times they went on nonsensical tangents that involved, for instance, running errands for the male character’s mother. As Riedl himself admitted, it was hard to tell whether we were seeing a creative, intelligent program adding twists to a predictable story or a cleverly programmed flowchart that created the illusion of an emergent story.

While Riedl’s work is impressive, it doesn’t signal that robots will replace humans as storytellers or game designers, or that AI is any closer to having power over people. But Scheherazade could be used to make existing AIs, such as Apple’s Siri and Microsoft’s Cortana, more relatable and friendly. Unfortunately, an AI program with the ability to understand and influence users with stories could come with its own set of ethical problems. Although none of Scheherazade’s tales were overtly prejudiced, they definitely contained the cultural biases of the people who submitted stories to fuel it. For example, the male character usually picked up the female character, adhering to traditional Western gender roles. That may be one of the trickier things programmers face when trying to humanize their creations: how do you make something that feels completely human and at the same time does not offend?

And there are other potential problems. A sympathetic AI that communicates with stories would be able to manipulate people in a variety of ways; for instance, it might influence the way we vote or where we choose to shop. A recent controversy over the frequency with which fake news stories were showing up in people’s Facebook feeds has alarmed experts who believe such manipulations are subtly changing the way we think. Facebook has since adjusted its AI algorithm and filters that decide what we see to lessen the impact of fake news, but that raises all kinds of new moral issues, including who gets to determine which information is legitimate.

AI has a lot of power over our lives precisely because we don’t see it at work. But political opinions and shopping habits are one thing. What about when AI gets to decide who lives, and who dies?

Before fully autonomous vehicles can be put on the road, programmers have to consider what the algorithm will do when faced with an inescapable dilemma: run over a child, or swerve into a tree and likely kill the driver? AI has no predetermined understanding of the value of a child versus an adult, just as it has no inherently malicious intentions. Mercedes-Benz recently set off some philosophical soul-searching when it announced its self-driving system would prioritize the safety of car occupants in all crash scenarios. Military strategists face an even more difficult choice when designing autonomous weapons that must calculate acceptable civilian casualties.

On a lighter note, AIs could be trained to be polite and follow the social norms that grease the wheels of society, such as waiting patiently in line at the grocery store for their grumpier owners. Those are the sorts of issues that research initiatives at Google and Carnegie Mellon are planning for, not how to stop Skynet from nuking the planet. (Even though, let’s be honest, it would be nice to have that one covered—just in case.)

Jacob Roberts was a staff writer for Distillations magazine.