
Google’s LaMDA engineer got it wrong — and that’s exactly what was meant to happen


An engineer recently claimed that Google’s LaMDA, a conversational AI technology, has become sentient. The statements sparked a heated debate about the capabilities of AI-powered chatbots. It’s a debate that was bound to happen sooner or later, and one that engineers, designers, companies, and consumers alike should learn from, says Joachim Jonkers, chatbot expert and Director of Product - Conversational AI at Sinch.

Let me start by stating the obvious: everyone – except that one engineer – agrees that Google’s LaMDA conversational AI is nowhere near conscious.

However, everyone and their mom has had a reaction to this story (and I’m as guilty as the rest of them). It triggered a broad discussion on what consciousness means in machines and on the ethical implications of conversational AI. It’s clear that the story resonates with people; it hit a nerve. Why is that?

Do you see the face?

This story feeds off the hype created by years of prophetic articles claiming that the AI revolution is upon us. Decades of science fiction and recent advancements in technology have prepared us for it. And now, finally, we have someone on the inside admitting something we all knew was coming sooner or later: the uprising of the machines is here. AI has become self-aware.

People look at the technology with a mix of excitement and fear. This creates the perfect breeding ground for a story like this to go viral. However, there’s still something to be learned here beyond the buzz.  

Why did the engineer claim consciousness for a system that’s nowhere near it? Because humans are social animals! We look at the world around us, trying to find behaviors we recognize. In doing that, we have a strong tendency to assign human traits to things – a tendency known as anthropomorphism.

For example, we’re more than happy to think about the planet we live on as “Mother Earth”. Two small shapes with a bigger shape below them? That’s a face! 

[Image: a stone that looks like a face. Humans have a tendency to see faces everywhere. (Source: Unsplash.com / Harry Grout)]

With chatbots, this effect is even stronger: bots are specifically built to mimic human behavior as closely as possible, making us even more susceptible to projecting human characteristics onto them.

Side effects of conversations that feel real

The engineer’s statements on Google’s LaMDA show that we must carefully consider the side effects of a conversation that feels real. The main advantage of conversations automated by AI is that they make the interaction very intuitive, because they feel so intrinsically human. However, that also opens the door to people attributing more knowledge and feeling to these systems than they actually have, when in reality their scope is limited to very specific tasks.

AI-driven conversations have the power to solve problems in a new way, by helping people faster and better, and providing a personalized experience. In the best cases, this builds trust and loyalty. 

The debate around Google’s LaMDA shows that we need to consider the consequences of the technology: to focus not just on which problems these systems solve, but also on the impact they have on the people using them. The best thing about conversational AI is that it can bring technology and people closer together. Figuring out how to do that ethically is the next big challenge of AI.

Written by: Joachim Jonkers
Director of Product - Conversational AI at Sinch