An engineer recently claimed that LaMDA, a Google conversational AI technology, has become sentient. The statements have sparked a heated debate about the capabilities of AI-powered chatbots. It's a debate that was bound to happen sooner or later, and one that engineers, designers, companies, and consumers alike should learn from, says Joachim Jonkers, chatbot expert and Chief Product Officer at Chatlayer by Sinch.
Let me start by stating the obvious: everyone – except that one engineer – agrees that Google’s LaMDA Conversational AI is nowhere near conscious.
However, everyone and their mom has an opinion on this story (and I’m as guilty as the rest of them). It triggered a broad discussion about what consciousness in machines would mean and about the ethical implications of conversational AI. It’s clear that the story resonates with people; it hit a nerve. Why is that?
Do you see the face?
This story feeds off the hype created by years of prophetic articles claiming that the AI revolution is upon us. Decades of science fiction and recent advancements in technology have prepared us for it. And now finally we have someone on the inside, admitting something we all knew was coming sooner or later: the uprising of the machines is here. AI has become self-aware.
People look at the technology with a mix of excitement and fear. This creates the perfect breeding ground for a story like this to go viral. However, there’s still something to be learned here beyond the buzz.
Why did the engineer claim consciousness for a system that’s nowhere near it? Because humans are social animals! We look at the world around us, trying to find behaviors that we recognize. In doing that, we have a strong tendency to assign human traits to things.
For example, we’re more than happy to think about the planet we live on as “Mother Earth”. Two small shapes with a bigger shape below them? That’s a face!
With chatbots, that effect is even stronger. Bots are specifically built to mimic human behavior as closely as possible, making us even more susceptible to projecting human characteristics onto them.
Side effects of conversations that feel real
The engineer’s statements on Google’s LaMDA show that we must carefully consider the side effects of a conversation that feels real. The main advantage of AI-automated conversations is that they make the interaction very intuitive, because they feel so intrinsically human. However, that same quality opens the door to people assigning more knowledge and feelings to these systems than they deserve, given that their scope is limited to very specific tasks.
AI-driven conversations have the power to solve problems in a new way: helping people faster and better, and providing a personalized experience. In the best cases, this builds trust and loyalty.
The debate around Google’s LaMDA shows that we need to consider the consequences of the technology, focusing not just on which problems these systems solve, but also on the impact they have on the people using them. The best thing about conversational AI is that it can bring technology and people closer together. Figuring out how to do that ethically is the next big challenge of AI.