Hot For [Pedagogical Agent]
As a "naturalist", I don't believe in a human soul, mortal or immortal, so, in principle, I can't see any objection to the idea that someone will one day succeed in creating an "artificial intelligence."
Because of my naturalist and individualist bent, I'm really not bothered by the possibility that humankind might one day be destroyed, Terminator- or Matrix-style, by our machine offspring - at least not any more than I'm bothered by the possibility that I'll be bludgeoned to death in a dark alley, or waste away, uncared-for, in a convalescent home.
I wonder, though... Is the Terminator myth really a likely, or even possible, future? We're still not entirely sure what "intelligence" really is, let alone how to create it (aside from growing and interacting with human babies, that is). Is the ability to be introspective and/or self-aware a requirement for intelligence? What about feeling emotions? What about having an instinct for self-preservation? I'm not sure about any of those things - and I'm not sure anyone else is, either (in spite of the attractiveness of the thesis found in the hugely entertaining book, Gödel, Escher, Bach).
However, if some sort of machine revolution is possible, then we are surely doomed. If Congress's reaction to a vegetable that could follow the movements of a balloon is any indication, then, long before our simulated friends (in meatspace or virtual space) have anything approaching human-level intelligence, we will have been completely beguiled.
If I'm conveying the sense that I think any of this is bad, then I apologize, because I don't mean to. I'm not entirely sure how I feel about this stuff yet. Like any technology, this one has both good and bad aspects.