
Lessons on responsible AI from industry leaders at MWC Barcelona


You may have seen in the news recently that Air Canada faced scrutiny and legal action following a customer’s experience with its AI-powered chatbot. While the airline argued the chatbot was its own legal entity, a court found the airline liable for failing to take reasonable care.

This debacle emphasizes the need for enterprises to understand how to implement AI technology responsibly. As companies increasingly integrate AI into their platforms and services, the risk of misinformation and errors soars.

At Mobile World Congress (MWC) Barcelona 2024, industry leaders in healthcare, finance, and technology gathered for a panel discussion called “Responsible AI in Customer Communications,” shedding light on essential strategies to prevent similar mishaps in the future.

Moderated by Lodema Steinbach, VP of Product and Carrier Partnerships at Sinch, the panel featured Gautam Anand, Senior Vice President at HDFC Bank, Geertina Hamstra, Conversational AI expert at Moet Ik Naar De Dokter (MINDD), Alexis Safarikas, CEO of Campfire, and Joachim Jonkers, Director of Product AI at Sinch. 


Panelists discussing responsible AI in customer communications at MWC Barcelona 2024, from left to right: Joachim Jonkers (Sinch), Gautam Anand (HDFC Bank), Geertina Hamstra (MINDD), Alexis Safarikas (Campfire), and Lodema Steinbach (Sinch).

In this article, we delve into the key topics shared during the panel discussion and cover how responsible AI practices are shaping the future of customer communications at global enterprises. 

Putting boundaries around bots

Air Canada isn’t the only company to find itself wading into uncharted waters. A number of other AI chatbots have made headlines in recent months for their unusual behavior, like DPD’s chatbot that swore at customers and a GM dealer’s chatbot that was talked into selling a Chevy Tahoe for $1.

These recent events fed directly into a crucial point the MWC Barcelona panel discussed: setting appropriate boundaries for AI-powered chatbots to maintain trust and security.

Without boundaries, noted Alexis Safarikas, “AI is a people-pleaser.” An AI bot will give you the answer you want and, without careful prompting, may produce untrue statements or statements that don't align with company values.

In the healthcare sector, Geertina Hamstra further emphasized the importance of boundary-setting, as misinformation from chatbots can be a recipe for disaster. She stressed the importance of confirming all medical responses with the relevant authorities and ensuring they originate from controlled environments.


Takeaway: Responsible AI means setting parameters

These stories make it imperative for AI developers and technology providers to implement robust guardrails so that the businesses using these tools can deploy them correctly. That way, businesses can focus on delivering relevant content that leverages the expert knowledge they already have about their industry.
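To make the idea concrete, here’s a minimal, hypothetical sketch of what a guardrail layer can look like in practice: a topic allow-list in front of the model, a constrained system prompt, and a human hand-off as the fallback. None of the names, topics, or functions below reflect any specific vendor’s API; they’re placeholders for illustration.

```python
# Hypothetical guardrail sketch -- not any specific vendor's API.
# Idea: constrain the bot to approved topics and hand off to a human
# instead of letting the model improvise an answer.

ALLOWED_TOPICS = {"bookings", "baggage", "refunds", "flight status"}

SYSTEM_PROMPT = (
    "You are a customer service assistant. Answer only from the approved "
    "knowledge base. If a request is out of scope or you are unsure, say so "
    "and offer to connect the customer with a human agent. Never invent "
    "policies, prices, or discounts."
)

def classify_topic(message: str) -> str:
    """Placeholder topic check; a real system would use a trained classifier."""
    text = message.lower()
    for topic in ALLOWED_TOPICS:
        if topic.rstrip("s") in text:  # crude keyword match, enough for the sketch
            return topic
    return "out_of_scope"

def answer(message: str, llm_call) -> str:
    """llm_call is any injected function (system_prompt, user_message) -> str."""
    if classify_topic(message) == "out_of_scope":
        return "I can't help with that, but I can connect you with a human agent."
    return llm_call(SYSTEM_PROMPT, message)
```

The point isn’t the specific checks; it’s that the model never answers outside the box the business has drawn for it.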

Handing off from AI to agents

The panel also discussed the evolving landscape of human and bot-driven conversations, particularly in text-based interactions, exploring both the challenges and opportunities this creates for crafting conversational flows for customers.  

Hamstra shared a compelling healthcare use case from the Netherlands, where AI voice systems enhance patient triage in emergency rooms. MINDD built a flow that relies on AI to streamline emergency calls by proactively assessing situations and routing them accordingly, all while attending to patients in the queue.  

In India, Gautam Anand highlighted HDFC’s transformative use of AI on WhatsApp to engage diverse rural communities. This technology enables conversations in various regional languages, overcoming barriers that human agents may encounter.


Takeaway: Keep your customers at the center of your AI strategy

Anand's advice? Prioritize the end user – converse with them the way they prefer to converse. Human interaction may still be necessary, but the aim is to use AI to tailor the conversation to the way your customers prefer to engage.

Ensuring you’re ready for regulation

With all the buzz about the impending EU AI Act, enterprises should be gearing up to ensure compliance and avoid costly system overhauls down the line. It's essential for businesses to understand the ins and outs of the act and fine-tune their AI implementation strategies accordingly.

During the panel, the experts emphasized the need for readiness, particularly in sectors like financial services and healthcare, where sensitive data handling is critical. Often, enterprises will hash or mask data in their AI systems so that only certified individuals can see sensitive information like a social security number.
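As an illustration of that pattern, here’s a hedged sketch using only Python’s standard library; the field format, key handling, and token scheme are assumptions for the example, not a description of any panelist’s system. Identifiers are replaced with keyed hashes before the text ever reaches the AI, and the token-to-value mapping lives in a store that only certified staff can query.

```python
# Sketch: redact sensitive identifiers before they reach the AI, and keep the
# reverse lookup in a store gated to certified staff. Illustrative only.
import hmac
import hashlib
import re

SECRET_KEY = b"rotate-me-and-keep-in-a-vault"       # assumption: managed secret
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US-style SSN, for the example

restricted_lookup: dict[str, str] = {}  # in practice: an access-controlled store

def pseudonymize(text: str) -> str:
    """Replace SSNs with keyed hashes so the AI never sees the raw value."""
    def _replace(match: re.Match) -> str:
        ssn = match.group(0)
        token = hmac.new(SECRET_KEY, ssn.encode(), hashlib.sha256).hexdigest()[:12]
        restricted_lookup[token] = ssn  # retrievable only by certified users
        return f"[SSN:{token}]"
    return SSN_PATTERN.sub(_replace, text)

print(pseudonymize("My SSN is 123-45-6789, please update my account."))
# -> "My SSN is [SSN:<hash>], please update my account."
```

The same keyed-hash approach extends to account numbers or health identifiers; the key point is that the raw value never enters the model’s context or logs.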

And regardless of a business’s industry, Joachim Jonkers said enterprises should take proactive measures to safeguard privacy and security. Let’s face it: legislation will always move more slowly than technology evolves, so following best practices and committing to ethical AI is essential.


Takeaway: Know your “why”

In the discussion, Safarikas stressed that to use AI well, enterprises need to understand why they’re using it in the first place. Companies shouldn’t implement AI for the sake of using AI – they need to know the risks and limitations of the technology, particularly regarding data privacy, and advocate for proactive measures that drive regulatory standards in their respective industries.

Maintaining accessibility and equity

AI systems often carry biases and can be difficult for large segments of the population to use and relate to. The panel discussed what they’re doing to ensure fair, accessible communications that resonate with their entire audience.

In healthcare, Hamstra highlighted the importance of using AI to cater to individuals who may not speak Dutch fluently or at all. Her organization has used AI to develop voice and speech bots that can detect speech impairments or foreign languages so that all patients can communicate with healthcare providers. If a patient isn’t understood, they can switch to text communication and then be redirected to a provider who can assist.  

Additionally, she said she was initially concerned about potential challenges for elderly users but was put at ease when a 78-year-old woman interacted seamlessly with the bot at launch. That reassured both the bot’s creators and the nurses who had been skeptical of the technology.

In parallel, Anand shared financial services insights from rural India, where enterprises are using WhatsApp to help people do business. HDFC Bank has worked to incorporate voice notes, text, and payment features via WhatsApp so their small business users can conduct transactions more seamlessly and in their local languages. 


Takeaway: Responsible AI focuses on customer experience

During the session, Safarikas said it best: “An accessible AI is one that focuses on customer experience.” Enterprises bear the responsibility of providing positive, inclusive experiences for all users, which means prioritizing accessibility in their AI strategies.  

Combatting fraud

Today, fraudsters use a variety of methods to deceive businesses and individuals, posing significant threats. Leading enterprises are fighting back with AI.

In banking, Anand explained how some financial institutions are using AI to strengthen security with voice recognition and one-time password (OTP) authentication, boosting customer trust.

And in insurance, Safarikas mentioned that some fraudsters attempt repetitive conversations, so the solution is a matter of training agents to identify suspicious behavior and know what to look out for.

For some forms of fraud, like Artificially Inflated Traffic (AIT), enterprises can use AI to swiftly identify and thwart scams. It’s a matter of trying new techniques and staying on top of issues to protect systems and users.
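As a rough illustration of what automated AIT detection can look like, here’s a simplified, rule-based sketch that flags a destination prefix when its one-time-password volume suddenly spikes far above its own baseline. The thresholds and structure are assumptions for the example; a production system would typically feed signals like this into a trained anomaly-detection model rather than rely on a fixed rule.

```python
# Simplified AIT (artificially inflated traffic) detection sketch:
# flag a destination prefix whose per-minute OTP volume jumps far above
# its running average. Thresholds are illustrative assumptions.
from collections import defaultdict

SPIKE_FACTOR = 5.0   # flag when a minute's volume exceeds 5x the running average
MIN_BASELINE = 10    # ignore prefixes until they have some history

# per-prefix running stats: [minutes observed, total requests]
baseline: dict[str, list[int]] = defaultdict(lambda: [0, 0])

def check_minute(prefix: str, requests_this_minute: int) -> bool:
    """Return True if this minute's OTP volume for the prefix looks like AIT."""
    minutes, total = baseline[prefix]
    suspicious = False
    if minutes > 0 and total >= MIN_BASELINE:
        average = total / minutes
        suspicious = requests_this_minute > SPIKE_FACTOR * average
    if not suspicious:  # update the baseline only with traffic that looks normal
        baseline[prefix][0] += 1
        baseline[prefix][1] += requests_this_minute
    return suspicious

# Example: a prefix that usually sees ~20 OTP requests per minute, then spikes
for volume in [18, 22, 19, 21, 20, 450]:
    print(volume, "->", "AIT suspected" if check_minute("+4479", volume) else "ok")
```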


Takeaway: AI can help enterprises get ahead of scams

The bottom line? Proactive integration of AI technologies can empower enterprises to stay ahead of the game. With more AI-generated spam and scams than ever, it’s time for businesses to start training and using AI solutions to detect and mitigate fraud.

The discussions at MWC Barcelona 2024 highlighted the critical importance of responsible AI implementation in customer communications across various industries. From setting boundaries around chatbots to preparing for upcoming regulations, industry leaders have set standards for enterprises venturing into this new technology.  

As businesses, we recognize that AI is inevitable. How we move forward with being accountable, transparent, and customer-centric will set the tone for a future of AI-driven interactions that enrich the lives of customers worldwide.  

Ready to explore launching a chatbot or delve into conversational AI? Connect with our team to create a conversational experience that you and your customers will love.