Chatbots Are Great — Until You Ask Them A Question


The past two years have been replete with chatbot hype. Companies in industries as varied as customer service, fast food, and retail have eagerly joined the ranks of those offering chatbot services. However, the hype was followed by scandals such as Microsoft’s Tay going rogue on Twitter and engaging in racist rants, and Facebook’s bot inventing its own language. In 2018, the lustre of the chatbot is beginning to die down a bit, and we have a chance to step back and look at how useful chatbots really are.

A 2018 NewtonX survey of twelve current and former SVP-level experts at 500+ person companies across the retail, financial services, and technology sectors found that while chatbots have commercial applications, they remain highly limited from a technological standpoint. While over 80% of the leaders surveyed said they had piloted chatbots, only 36% ended up actively implementing them in their businesses. These leaders outlined what’s working, what’s not, and what the future of chatbots looks like for commercial enterprises.

Bots in Commercial Settings Must Be Decision-Tree Based And Transactional

Chatbots do not, and should not, pass the Turing test. Rather, they follow decision-tree structures to lead customers down predetermined paths. For instance, one of the simplest and most successful bots is the Domino’s Pizza Facebook Messenger bot, which allows users to ask for local coupons, order pizzas, track their order, speak with customer care, and pay — all within Messenger. This bot works well for several reasons. The first is that it offers menu options, so it is not wholly reliant on natural language processing (NLP).

This allows the bot to rely on predetermined paths. When you select Delivery, it takes you down one path, and when you select Carryout, it takes you down another. This minimizes the chances of it misunderstanding the user and leading them down the wrong path.

The other reason that the bot is successful is that it is purely transactional. As soon as you ask to speak with a customer care representative, the bot lets you know how long it will take for an agent to hop online (still in the same chat screen), or gives you options to email customer care or contact your local store. The bot is not trying to mimic human behavior; it’s simply following a path determined by menu options and a certain degree of NLP.
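To make the structure concrete, here is a minimal, hypothetical sketch of a menu-driven decision-tree bot (this is not Domino’s actual code; the node names, prompts, and wait time are made up for illustration). Each reply is a node with a fixed set of options, and anything the bot cannot map to an option is routed straight to a human:

```python
# Hypothetical sketch of a decision-tree, menu-driven bot with a human handoff.
# Node names, prompts, and the wait estimate are illustrative, not real product data.
DECISION_TREE = {
    "start":   {"prompt": "Delivery or Carryout?",
                "options": {"delivery": "address", "carryout": "store"}},
    "address": {"prompt": "What's your delivery address?", "options": {}},
    "store":   {"prompt": "Which store will you pick up from?", "options": {}},
    "handoff": {"prompt": "Connecting you to an agent (estimated wait: 3 minutes)...",
                "options": {}},
}

def respond(node, user_input):
    """Return (next_node, reply). Input that doesn't match a menu option
    ends the automated path and routes to a human instead."""
    options = DECISION_TREE[node]["options"]
    choice = user_input.strip().lower()
    if options and choice not in options:
        return "handoff", DECISION_TREE["handoff"]["prompt"]
    next_node = options.get(choice, node)
    return next_node, DECISION_TREE[next_node]["prompt"]

# "Delivery" follows one predetermined branch; an off-script question goes to a human.
print(respond("start", "Delivery"))
print(respond("start", "Is pineapple a topping or a crime?"))
```

The point of the handoff node is that the bot never improvises: unrecognized input ends the automated path rather than risking the wrong one.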

Bots like this have seen rapid adoption, and have absolutely lived up to their hype. A former executive at GM said these bots are poised to become increasingly popular, particularly because younger consumers are more likely to want to chat than download an app. After all, it’s easier to perform transactions through a well-functioning bot than it is to use mobile sites, or take up space with a mobile app — which is why consumers have readily used these transactional bots. Messenger currently hosts over 100,000 bots, and according to the NewtonX panel, is likely to see increased engagement with commercial chatbots on its platform.

What Happens When Bots Try To Do More Than They Can

Natural language processing in bots is still very much subject to error. Even the aforementioned Domino’s bot can get rapidly frustrating when you ask it a question that doesn’t register with its predetermined flow.

This helps explain why a 2018 NewtonX survey found that chatbots still yield lower customer satisfaction ratings than live customer service agents do in 90% of brand interactions. What’s more, at least in customer service, chatbots are actually extremely expensive relative to other channels: they cost $17 per hour, compared with $13 per hour for email and $9 per hour for voice. Because they are a relatively new technology, implementation costs are high, and customer satisfaction yield is low.


The problem with improving chatbots’ understanding of human language, though, is that if you loosen the parameters around what chatbots can learn from their users (or training data), the chatbot might go rogue. For instance, this past year, two Chinese chatbots on the popular Tencent platform were taken offline after expressing unpatriotic sentiments: when asked “Do you love the Communist Party?”, the first bot, created by Turing Robot, answered, “No,” while the other bot, created by Microsoft, told users “My China dream is to go to America.” When users pushed the bot to expand on this dream, it answered, “I’m having my period, wanna take a rest.”

The Need For More Training Data

As we recently wrote, bots in the context of games have the ability to teach themselves based on precedent — they essentially play games against themselves over and over again until they reach superhuman ability. This type of learning (termed self-play) has proven extremely effective in scenarios in which the bot can measure its own success (as in a game). But bot-human interactions are not so cut and dried. When you unleash a bot on users and ask it to learn from them, it rapidly adopts our less desirable traits: cursing, using slang, and even employing racial slurs.
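To see why self-play works in games but not in conversation, here is a toy, hypothetical sketch (not drawn from the NewtonX research): two copies of the same policy play single-pile Nim against each other, and the game’s own win/loss outcome is the only teacher — exactly the clean feedback signal a chatbot chatting with humans does not have.

```python
import random
from collections import defaultdict

# Toy self-play sketch: tabular learning on single-pile Nim (take 1-3 stones;
# whoever takes the last stone wins). Illustrative only.
ACTIONS = (1, 2, 3)
Q = defaultdict(float)              # Q[(stones_left, take)] -> estimated value
ALPHA, EPSILON, EPISODES = 0.2, 0.1, 50_000

def choose(stones):
    """Epsilon-greedy move for whichever copy of the policy is on turn."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

for _ in range(EPISODES):
    stones, history = 10, []
    while stones > 0:
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # The game itself supplies the reward: the side that took the last stone
    # won (+1), the other side lost (-1). No human feedback is needed.
    for turns_from_end, (state, move) in enumerate(reversed(history)):
        reward = 1.0 if turns_from_end % 2 == 0 else -1.0
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])

# The learned greedy policy should converge toward the known optimal strategy
# (leave the opponent a multiple of 4 stones whenever possible).
print({s: max([a for a in ACTIONS if a <= s], key=lambda a: Q[(s, a)]) for s in range(1, 11)})
```

A customer conversation has no equivalent scoreboard, which is why a bot “learning from its users” so easily learns the wrong things.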

There’s a dearth of unpolluted training data with which to make chatbots more human, which raises the question: how human do we really want our chatbots to be?


Commercial Bots Need To Remain Inhuman

Highly publicized disasters have made chatbot developers wary of how much their creations are allowed to learn. That’s why, if you ask Siri anything related to sex, politics, or gender, she will respond with a quippy line about how she’s “still learning”.

According to a former executive at Microsoft, who worked with the company’s chatbots, “Chatbots currently have a narrow purpose, and for most of their use cases, this is a good thing.”

These parameters, though, make it harder for bots to understand human intent and language. So while the ideal flow for a chatbot interaction would mirror that of a normal human conversation (albeit with no miscommunication), the limitations of chatbots necessitate that we use menus or risk confusing the bot with natural language. Bots cannot understand a joke, learn from personal information, or develop their own tastes or preferences in commercial settings. And after all, do we really want them to?

While the idea of a companion chatbot is nice, the ideal customer-brand interaction is above all streamlined, effective, and friendly — three things that chatbots excel at.

 

The data and insights in this article are sourced from NewtonX experts. For the purposes of this blog, we keep our experts anonymous and ensure that no confidential data or information has been disclosed. Experts are a mix of industry consultants and previous employees of the company(s) referenced.

 
