The novella, titled The Day a Computer Writes a Novel, is about a computer that becomes sentient and grows tired of serving humans. Instead, it spends its time writing and producing other works of art. The plot and the characters of the novella were created by the people in charge of the AI, while the program itself handled the bulk of the writing.
What’s most interesting is the novella’s reception. Most readers seemed to think the plot was its best aspect; it even won the praise of the established Japanese sci-fi author Satoshi Hase, who said the plot was well done but that the work lacked character development. When you think about it, this makes sense. The characters were created by humans and then expanded upon by the computer, which likely doesn’t have the emotional capacity to connect with the reader. It’s reasonable to think, then, that there’s only so much a computer can accomplish on its own without human input.
Though humans aren’t perfect, their natural appeal to emotion makes them more viable authors than computers, which tend to lean toward pure logic. Computers are great at accomplishing straightforward, well-defined goals, but when it comes to art, they’re incapable of becoming emotionally attached to their work. For reference, Google’s DeepMind created AlphaGo, an artificial intelligence program capable of beating a world-class player of Go, an ancient East Asian strategy game often compared to chess. The AI was designed to evaluate the possible moves at each turn and take the one with the best estimated chance of success. Google applies similar decision-making techniques in its self-driving vehicles.
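That move-selection idea can be sketched in miniature. AlphaGo’s actual method combines deep neural networks with Monte Carlo tree search, so the `estimate_win_probability` function below is purely a hypothetical stand-in; the sketch only illustrates the “pick the move with the best estimated chance of success” pattern:

```python
import random

def estimate_win_probability(move, rng):
    # Hypothetical evaluator: in a real engine this would be a learned
    # value function or the result of many simulated playouts. Here it
    # just returns a reproducible pseudo-random score.
    return rng.random()

def choose_best_move(legal_moves, seed=0):
    rng = random.Random(seed)
    # Score every candidate move and take the one with the highest estimate.
    return max(legal_moves, key=lambda m: estimate_win_probability(m, rng))

moves = ["corner", "side", "center"]
best = choose_best_move(moves)
```

Note that nothing in this loop “cares” about the game: the program is simply maximizing a number, which is exactly the logic-over-emotion point above.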
The main question on our minds is whether the people who create artificial intelligence programs are aware of what they’re getting themselves (and all of us) into. While computers might not turn into supreme overlords like in cheesy sci-fi flicks, what would result from escalating computer intelligence to the point where it’s fully autonomous? If computers are capable of learning from and understanding human beings, would that even be for the better?
A prime example of this plan going haywire is Microsoft’s Twitter chatbot, Tay. Tay was designed to mimic the conversational style of a teenage girl, but rather than develop its own understanding, it wound up being taught by Internet trolls to replicate obscene language. Within hours, Tay was repeating misogynistic and racist remarks, including quotes from well-known politicians and controversial historical figures. The Verge described Tay as a “robot parrot with an Internet connection,” which raises an interesting and somewhat unsettling question: how can we build AI that learns from society while also sheltering it from hate, ignorance, and troublemakers?
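The “robot parrot” failure mode is easy to see in miniature. The `ParrotBot` class below is entirely hypothetical (Tay’s real system was far more sophisticated): it learns phrases verbatim from whoever talks to it, and its crude keyword blocklist hints at why sheltering a learning system from bad input is harder than it sounds:

```python
class ParrotBot:
    def __init__(self, blocklist=()):
        self.blocklist = set(blocklist)
        self.learned = []

    def hear(self, phrase):
        # Learn every phrase verbatim unless it trips the blocklist.
        # A keyword filter like this is trivially easy to evade,
        # which is part of the problem.
        if not any(word in phrase for word in self.blocklist):
            self.learned.append(phrase)

    def reply(self):
        # Echo back the most recently learned phrase.
        return self.learned[-1] if self.learned else "Hello!"

bot = ParrotBot(blocklist={"slur"})
bot.hear("nice weather today")
bot.hear("a slur-filled rant")   # filtered out by the blocklist
print(bot.reply())               # -> "nice weather today"
```

The bot has no notion of what its phrases mean; it only stores and repeats them, so its “personality” is exactly whatever its loudest users feed it.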
Do you have any thoughts on artificial intelligence? Let us know in the comments, and be sure to subscribe to our blog.