Most successful customer experience (CX) programmes use some form of text analytics to surface insights from huge volumes of unstructured data, such as verbatim comments from customers. The comments come from a variety of sources, such as structured periodic survey feedback, event-driven reviews, customer service calls, emails, chats, social media engagements and online reviews.
Using machines to derive meaning and sentiment from such data isn’t easy. However, the benefits of developing and using text analytics, such as finding out what matters most to customers and what you can do about it, are worth exploring.
At the same time, while there has been huge growth in the use of AI to learn and discover what customers are talking about, there is still a place for humans when analysing customer comments.
AI, ML or NLP?
There are many buzzwords being thrown around in the text-analytics arena right now. The technology is variously referred to as artificial intelligence (AI), machine learning (ML) or natural-language processing (NLP), so it’s not surprising that even experienced CX buyers are confused when confronted with such terminology.
We know that clients want to deliver great customer experiences, driven by actionable insights. We know that good business practice is aided by understanding what organisations should be doing more of and what they should be doing less of. What they don’t need is to be experts in the underlying technology that delivers those insights.
How text analytics has evolved
In the early days of text-analysis products, the concepts of AI/ML/NLP were still at the theoretical stage, with academics devising complex mathematical models which needed huge and expensive supercomputers to develop and execute.
There is a much simpler way – using humans to read customer comments and label them manually, creating a repository of what people say, how they say it and what it means. This human-labelled repository is then used to teach the machines to interpret language: categorising it against a myriad of topics, allowing for misspellings, slang, colloquialisms, idioms and a host of other linguistic nuances, and then scoring it for ‘sentiment’.
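To make the idea concrete, here is a minimal sketch of what one human-labelled record in such a repository might look like. The field names, topics and sentiment scale are invented for illustration; they are not Feedback Ferret’s actual schema.

```python
# Illustrative example of a single human-labelled customer comment.
# The schema (field names, topics, sentiment values) is assumed for this sketch.
labelled_comment = {
    "text": "Staff were lovley but the waiting time was a joke",
    # Topics assigned by a human reader, despite the misspelling and sarcasm
    "topics": ["staff attitude", "waiting time"],
    # Per-topic sentiment: +1 positive, -1 negative
    "sentiment": {"staff attitude": +1, "waiting time": -1},
}
```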
Learning models
Machine learning is now within the reach of every business. However, it’s not some sort of miracle that allows computer systems to think for themselves. Without getting too technical, it works by feeding them relevant data that is ‘tagged’ with useful information.
Using this information, the systems construct learning models which can be used for subsequent analytical tasks, such as text analysis, image recognition and voice recognition. This can be thought of as a training process and these models can be continuously re-trained with new data as it becomes available to enhance the models’ capabilities. Training the models is the key to making them capable of understanding customer comments and other unstructured data, and the better the training data, the better the results.
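As a rough illustration of that training process, the sketch below uses the open-source scikit-learn library to build a simple text classifier from tagged comments. The example comments, topic labels and choice of model are assumptions made for this sketch, not a description of any production CX pipeline.

```python
# A minimal sketch of 'training' a model on tagged data using scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tagged training data: each comment has been labelled with a topic by a human.
comments = [
    "The delivery arrived two days late",
    "Driver was rude when he dropped the parcel",
    "Great price, much cheaper than the competition",
    "Checkout kept crashing on my phone",
]
topics = ["delivery speed", "staff attitude", "price", "website"]

# Training builds a learning model from the tagged examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, topics)

# The trained model can then categorise new, unseen comments.
print(model.predict(["My order turned up a week after it was promised"]))

# Re-training is simply a matter of calling fit() again on an enlarged,
# freshly labelled data set as new comments become available.
```

The better and broader the labelled examples fed in at this stage, the better the model’s subsequent categorisation, which is the point made above about training data quality.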
For text-analytics companies such as Feedback Ferret that use human labelling, that means training the ML models with an enormous repository of textual data (such as customer comments) that has already been correctly labelled and validated. This wealth of human-labelled words gives a significant advantage when developing ML models because there is a strong base from which to start.
Text analytics in practice
The best scenario is a hybrid model in which people and machines work together to identify gaps and improvements in coding frameworks. Around 80 per cent of the code base is repeatable and reusable across any business sector, owing to the original human-based efforts. In addition, sector-specific coding to handle different terminologies and topics can be added quickly.
The advantage of a hybrid model is that the human element can respond quickly to new phrases and terminologies as language and situations evolve. For example, we saw the first mentions of coronavirus in February 2020; our team began coding against it and, just a few weeks later, we offered our customers a code framework refresh. Today we see more than 90 different ways of referring to coronavirus alone, such as corona, covid, covid-19 and kovid.
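A minimal sketch of that kind of human-driven framework refresh is shown below: analysts add newly observed spellings of a term to a curated pattern list so the machine keeps pace with changing language. The variant list and topic name are illustrative assumptions, not the real coding framework.

```python
# Sketch of topic tagging driven by a human-curated list of term variants.
import re

# Analysts extend this list as new spellings appear in customer comments.
coronavirus_variants = ["coronavirus", "corona", "covid-19", "covid", "kovid"]
pattern = re.compile(
    r"\b(" + "|".join(map(re.escape, coronavirus_variants)) + r")\b",
    re.IGNORECASE,
)

def mentions_coronavirus(comment: str) -> bool:
    """Return True if the comment mentions coronavirus under any known spelling."""
    return bool(pattern.search(comment))

print(mentions_coronavirus("Cancelled my booking because of Kovid restrictions"))  # True
```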
Does AI really help?
Assuming the ML models are trained thoroughly, the outcomes can be powerful. We’ve found that, with proper training, an AI system can unlock insights well beyond the original human training it was given. This is perhaps one of the most surprising and rewarding aspects of AI and highlights the technology’s potential.
A degree of scepticism
Our advice to anyone considering text analytics would be to have a degree of scepticism around AI, ML and NLP unless the methods and sources of the iterative model training are transparent. AI is often sold as a ‘black box’ solution, which may work in some cases but can prove too lightweight and inaccurate for handling real-time customer feedback. For better accuracy, we need real human ingenuity, interpretation and creativity to harness the true value of artificial intelligence.
Mark Spicer is the chief technical officer at Feedback Ferret.