Sounds like a science fiction idea, doesn’t it? Well, of course, it is a science fiction idea, and a venerable one at that, with roots reaching back to the early 19th century, when Mary Shelley processed the cultural fears and fascinations of an entire era by writing Frankenstein. The novel was, notably, inspired by a hideous nightmare, and it has in turn inspired an apparently immortal cultural fascination (plays, movies, and more). Its ur-story of a human creation achieving consciousness and then turning on its creator thus stands as an eruption from the unconscious mind.
(“Naturally,” one might say, if one is aware of the deep roots of Western science and religion, which are on open display in the undisguised fact of Ms. Shelley’s direct inspiration by, on the one hand, Paradise Lost, and on the other, modern science’s emergence out of a crucible of quasi-magical and mystical ideas whose cultural roots predate the birth of civilization itself.)
But what happened earlier this year wasn’t fiction — or at least it wasn’t openly so. As reported by The New York Times on Saturday (“Scientists Worry Machines May Outsmart Man,” July 25), a group of computer scientists held a meeting in February, sponsored by the Association for the Advancement of Artificial Intelligence, to express and address authentic fears that “further advances [in AI] could create profound social disruptions and even have dangerous consequences.”
The Times article starts with this:
A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously.
Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.
It goes on to report that most of the assembled researchers — “leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California” — said they don’t expect the creation of “highly centralized superintelligences” or the spontaneous eruption of artificial intelligence through the Internet, but they did agree “that robots that can kill autonomously are either already here or will be soon.”
The good news: We’re not even close to developing something like the HAL 9000 in 2001: A Space Odyssey.
The bad news: There is, right now, “legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.”
Here’s where I would suggest something to all interested parties: If you haven’t read Frankenstein, or haven’t read it in a while, go back and brush up on it. Then read a good deal of the worthwhile literary and cultural criticism that has been produced about it and its legacy. Renowned science fiction author Brian Aldiss called the Frankenstein story “the first great myth of the industrial age.” Philosopher and cultural critic Theodore Roszak, who for 40 years has been so apt at diagnosing many of our cultural ills, has called Frankenstein “the richest (and darkest) literary myth the culture of science has produced.” Joyce Carol Oates has characterized the novel itself as “a parable for our time, an enduring prophecy.”
This all means we may find some necessary guidance, or at least a warning, in the Frankenstein myth.
What I’m saying is simply this, to quote my own words from the concluding paragraph of a paper I wrote a few years ago that offers a reading of Frankenstein as a nihilistic parable about the fate of Western civilization:
We can find in Frankenstein a parable about what it means to commit ourselves to the quest for power over nature through scientific objectivity. One does not have to agree with Mary Shelley’s dire prognosis . . . . But I do think that we cannot afford to ignore “the first great myth of the industrial age,” “the central myth of western culture,” and I suspect that in the future, as we Westerners continue our journey through the dark night of psychic alienation in the urban-industrial technological landscape we have created, we may find ourselves turning more and more to it, in the form of further critical studies and additional literary and cinematic reworkings, as a subject for entertainment and reflection, and even guidance.
That paper won’t appear in my Dark Awakenings collection later this year (although it did appear in Penny Dreadful #14 in 2001), but given the dystopian, SF-like nature of the report about AI scientists convening to share their fears, I think I’ll post the paper at my mattcardin.com website when it’s fully built in the near future, since it looks at the philosophical and spiritual side of such developments.
In the meantime, for a not-so-spiritual but much more entertaining consideration of the same issues (more or less), please watch the following trailer for a movie that I still love after nearly 25 years, no matter how trashy it is:
I saw this report on Fox and I laughed. I have always been worried about that happening someday, but according to Michio Kaku, the world-renowned physics professor, the smartest computer we have has the intelligence of a retarded cockroach. The Predator, like other computers, still has a person behind it. It doesn’t think for itself. Either way, I don’t think we have to worry about it for a long time.
I think the time will come soon enough. What I’m worried about is this: http://news.bbc.co.uk/2/hi/7248875.stm
“Humanity is on the brink of advances that will see tiny robots implanted in people’s brains to make them more intelligent.”
Am I the only one who finds this absurd?