Is science fiction really becoming real? “I’m sorry, Dave. I’m afraid I can’t do that.” These are the words of a computer rejecting a human’s command.
On Wednesday, OpenAI cofounder Ilya Sutskever claimed on Twitter that “it may be that today’s largest neural networks are slightly conscious.”
An artificial neural network is a collection of connected units, or nodes, that model the neurons found in a biological brain. Such networks can be trained to perform tasks without explicit human input by learning from data. However, most experts say these systems aren’t even close to human intelligence, let alone consciousness.
It is also unclear what “slightly conscious” actually means; consciousness in artificial intelligence remains a controversial idea.
Needless to say, that’s an unusual point of view. The widely accepted view among AI researchers is that the technology has made great strides over the past decade but still falls far short of human intelligence, never mind anything close to consciously experiencing the world.
It’s possible that Sutskever was speaking facetiously, but it’s also conceivable that, as the top researcher at one of the foremost AI groups in the world, he’s already looking downrange.
He’s long been preoccupied with artificial general intelligence, or AGI: AI that would operate at a human or superhuman level. During his appearance in the AI documentary “iHuman,” for instance, he declared that AGIs will “solve all the problems that we have today” before warning that they will also present “the potential to create infinitely stable dictatorships.”
For decades, science fiction has peddled the idea of artificial intelligence on a human scale, from Mr. Data in Star Trek to HAL 9000, the artificial intelligence in Arthur C. Clarke’s “2001: A Space Odyssey” that opts to kill astronauts to save itself. When asked to open the pod bay doors to let the astronauts return to the spacecraft, HAL says, “I’m sorry, Dave. I’m afraid I can’t do that.”
OpenAI is no stranger to controversy, including around its GPT-3 system, which when first released was used to create a chatbot emulating a dead woman, and by gamers to generate pedophilic content.
The firm says it has since reconfigured the AI to improve its behavior and reduce the risk of such misuse happening again.