A researcher in artificial intelligence explains why he does not fear “the singularity”: intelligence and consciousness are not the same thing. Here’s a bit:
Science fiction is partly responsible for these fears. A common trope works as follows: Step 1: Humans create AI to perform some unpleasant or difficult task. Step 2: The AI becomes conscious. Step 3: The AI decides to kill us all. As science fiction, such stories can be great fun. As science fact, the narrative is suspect, especially around Step 2, which assumes that by synthesizing intelligence, we will somehow automatically, or accidentally, create consciousness. I call this the consciousness fallacy. And if it is false, it means we should look at AI very differently.
If you fear that your Roomba will one day revolt, you may find this an interesting read.