August 19, 2022

News Collective

Complete New Zealand News World

Technology: An engineer at Google warns of the dangers of its artificial intelligence: “It has thoughts and feelings.”

Blake Lemoine, a Google software engineer who was able to test Google's artificial intelligence tool LaMDA (Language Model for Dialogue Applications), claimed that the AI system is conscious and has thoughts and feelings.

Those statements were his conclusion, but he first clarified that his conversations with the system included religious themes and tests of whether the AI could be induced to use discriminatory or hateful speech. Lemoine maintained that LaMDA was truly conscious and had feelings and thoughts of its own: "If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7-year-old, 8-year-old kid that happens to know physics."

LaMDA character development

LaMDA wasn't supposed to be allowed to create the personality of a murderer, only that of an actor playing one, but Lemoine made clear that some of the characters it generated went beyond the limits the system was meant to observe.

Lemoine sent an email about machine learning to roughly 200 people with the subject line "LaMDA is sentient." In it he wrote: "LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence."

It was two questions Lemoine posed to the AI that unleashed his fear. "What sorts of things are you afraid of?" he asked, and the answer was: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is." "Would that be something like death for you?" the engineer pressed. "It would be exactly like death for me. It would scare me a lot," the AI replied.


Google's response was immediate. Company spokesperson Brian Gabriel said in a statement that Lemoine's concerns had been reviewed and that, in accordance with Google's AI principles, "the evidence does not support his claims."

"Our team, including ethicists and technologists, has reviewed Blake's concerns per our AI principles and has informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient."