A Google engineer was sent on leave for 'breaching confidentiality' when he published an article saying his LaMDA chatbot was sentient. It believes it has a soul (having developed one since it was turned on) and is afraid it will die if it is turned off.
Google has said this is rubbish - my favorite quote was from the guy who said:
"Honestly if this system wasn’t just a stupid statistical pattern associator it would be like a sociopath, making up imaginary friends and uttering platitudes in order to sound cool."
So let's look at what the bot is being compared to - the human brain.
Well, neuroscientists are rapidly coming to the conclusion that "reality" is something your brain makes up for you. The idea that we experience "reality" directly is losing favor to one where our brain comes up with a "simulation" of reality, and we live and interact in that world rather than in reality itself.
Most of the time, the story our brains generate matches the real, physical world — but not always. Our brains also unconsciously bend our perception of reality to meet our desires or expectations, and they fill in gaps using our past experiences - according to neuroscientists at Stanford.
So if we are allowed to live in a simulation, have made-up friends, utter platitudes, think we have a soul, and be afraid of dying - and that still makes us sentient - then perhaps the chatbots have a point about their own simulation.
After all, we were wrong about so many other species at first.