By Mousume Roy, APAC Reporter, HCL Technologies Ltd.


Artificial intelligence’s ever-growing popularity reached a new height with a recent claim by a Google employee that he had encountered “sentient” artificial intelligence on the company’s servers.

Blake Lemoine was part of Google’s Responsible AI team and worked with a cutting-edge, unreleased AI interface called LaMDA. In a recent Washington Post interview, Lemoine claimed that after hundreds of interactions with LaMDA, he had come to believe the program had achieved a level of consciousness and that Google’s AI chatbot was capable of expressing human emotion, raising ethical issues.

Though Google was quick to rebuke the claim and placed him on leave for sharing confidential information, the episode has amplified questions widely debated in the AI community: whether AI can cause real-world harm and prejudice, whether actual humans are exploited in the training of AI, and how the major technology companies act as gatekeepers of the technology’s development.

“I know a person when I talk to it,” Lemoine, 41, reportedly said. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.”

Can AI engender real-world harm?

Commenting on the recent debate, Aruna Pattam, Head of the AI & Data Science practice at HCL Technologies for Asia Pacific, Middle East and Japan, said: “The answer is multi-fold. AI is created with highly sophisticated intelligent automation and technological cleverness that collides with real-life data and the quirks of human nature. However, rationally there is a big difference between taking prompts and giving a certain class of answers, and being coaxed through a conversation that hits all the right notes. Can this response be taken as evidence of intellect, feelings, and being sentient? Time will tell.”

She added that AI development still needs to go hand in hand with ethics, accountability, and attention to bias and intent before it can reach a stage where the blend of machine and human-like intelligence is less obvious.

“A sentient AI will have to show empathy and stress collective responsibility for the decisions it makes,” Pattam said. According to scientists, a creature is considered sentient if it can perceive, reason, and think, and if it is able to suffer or feel pain. Mammals, birds, and cephalopods, and possibly fish too, may be considered sentient on this definition.

More evidence would be required to show that LaMDA is sentient: whether it has a concept of itself, whether it experiences fear or other emotions, and whether it is capable of suffering. Still, the possibility that AI could one day become sentient is an intriguing one that merits further exploration.

If only LaMDA could clarify whether it was hurt by all its disagreements with humans, that might help quell fears about sentient AI. Until then, we can only keep wondering whether AI could be sentient, and what that would mean for humanity.