An artificially intelligent computer was programmed to reflect some of the issues present in schizophrenia, and it proceeded to claim to be a terrorist
Researchers at Yale and the University of Texas used a neural network — a computer brain — to test out medical theories of what causes schizophrenia. The result was a computer brain that can’t tell the difference between stories about itself and fanciful stories about gangsters, and claims responsibility for terrorist acts.
In a paper published today in the journal Biological Psychiatry, researchers started with a neural network: a computer program that replicates, as best we know how, an actual human brain. It can take in information, learn from it, combine it, talk back to you.
Then they applied a medical hypothesis about schizophrenia to their computer brain: that memories are encoded too quickly in the brain, something the researchers call “hyper-learning.”
“What the model suggests is if this process is accelerated unduly, that bad things happen and that stuff going into memory gets intermingled, corrupted, kind of like a bad sector in a hard drive,” says Dr. Ralph Hoffman, a psychiatrist at the Yale School of Medicine. The brain takes in too much information, too quickly, “can’t organize it and sift it. And somehow something to do with that process may be running amok.”
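Hoffman’s “bad sector” analogy can be illustrated with a toy example. The sketch below is our own assumption, not the researchers’ actual model: a single weight matrix stores two “story” memories, learned either gradually with small interleaved steps (normal learning) or burned in with oversized steps (hyper-learning), after which cueing one story brings back the other story’s content.

```python
# Toy sketch of the "hyper-learning" hypothesis: two memories share one set
# of connection weights. Slow, interleaved learning keeps them separate;
# oversized learning steps let one memory intermingle with the other.
# All cues, targets, and rates here are illustrative assumptions.

def matvec(W, x):
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def delta_update(W, key, target, lr):
    """Delta-rule step: W += lr * outer(target - W*key, key)."""
    err = [t - o for t, o in zip(target, matvec(W, key))]
    for i in range(len(W)):
        for j in range(len(key)):
            W[i][j] += lr * err[i] * key[j]

# Two correlated story cues and their contents
# (output index 0 = "doctor story", index 1 = "gangster story").
k_doctor, t_doctor = [1.0, 0.0], [1.0, 0.0]
k_gangster, t_gangster = [0.8, 0.6], [0.0, 1.0]

# Normal learning: small steps, the two memories interleaved many times.
W_slow = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(2000):
    delta_update(W_slow, k_doctor, t_doctor, lr=0.1)
    delta_update(W_slow, k_gangster, t_gangster, lr=0.1)

# Hyper-learning: each memory encoded in a single oversized step.
W_fast = [[0.0, 0.0], [0.0, 0.0]]
delta_update(W_fast, k_doctor, t_doctor, lr=1.0)
delta_update(W_fast, k_gangster, t_gangster, lr=1.0)

slow_recall = matvec(W_slow, k_doctor)  # close to [1, 0]: clean recall
fast_recall = matvec(W_fast, k_doctor)  # ≈ [0.36, 0.8]: gangster content intrudes
print(slow_recall, fast_recall)
```

Cueing the “doctor” memory in the hyper-learned network returns mostly “gangster” content — the memories have intermingled, like Hoffman’s bad sector.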
Then Hoffman and his colleagues began telling their computer brain “stories.” Some were meant to be autobiographical, about a young doctor working in a New York City hospital.
Then there was a second set of stories, some of which had parallel narrative structures to the autobiographical ones, involving “crime and crime-fighting, mafia activities and assassination and things of that sort,” Hoffman says.
Next step: Have the neural network — the computer brain — try to retell the stories. Uli Grasemann, a computer scientist at the University of Texas at Austin, says the stories came out much differently than they went in.
“We get stories where the content is changed, but the local structure and the grammar remain intact,” says Grasemann. “So for example there might be a story about one gangster killing another gangster, and the network will basically leave the story intact and replace one of the gangsters with itself.
“There was a story about a terrorist bombing, and the network consistently replaced the terrorist with ‘I.’ Which is an interesting result because these confusions among actors, and inserting oneself into shared cultural stories like movies or legends, that’s something that happens a lot with delusions or schizophrenia.”
The researchers tweaked the neural network in a number of ways they thought might mimic schizophrenia. But this hyper-learning — taking in too much information too fast — emerged as the most promising.
Hoffman says the next step is to keep comparing the results with those of actual patients with schizophrenia. “If there is validation there, then a real exciting possibility is we might be able to use these artificial brains or networks to test out novel treatments that we haven’t really thought of yet.”