My first interaction with speech recognition technology came in the form of the dictation feature in Microsoft Encarta – the multimedia encyclopedia that used to be standard fare on every PC manufactured in the late ’90s and early ’00s. Needless to say, it was pretty underwhelming. My dreams of dictating a random train of thought to the computer and having it spit out a perfect history paper for school were crushed. Luckily, speech recognition and analytics technologies have come a long way since then.
The goal of this series of blog posts is to educate those who are just getting started with speech analytics. According to ContactBabel, speech analytics was the second fastest growing call center tool in 2012 – adoption grew by 59%. I will be covering the basics of what speech analytics is, the different technologies that are available, and some interesting case studies. Today, we start with the basics:
So what is speech analytics?
At its core, speech analytics is a tool that automates the process of listening to customer interactions. Delivered as an enterprise software solution, speech analytics extracts information from customer conversations that might otherwise be lost. In addition to using speech recognition technology to identify spoken words or phrases, many speech analytics solutions can analyze the emotional character of the speech and the amount of silence in the conversation.
How does speech analytics work?
Turning the unstructured data trapped in the audio of recorded calls into structured data that can be searched and analyzed is a multi-step process. The first step is ingesting conversations from the source system (call recorder, VoIP stream) along with the associated metadata, such as which agent handled the interaction, when it occurred, and who the customer was.
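To make that first step concrete, here is a minimal sketch of what an ingested interaction record might look like. The field names, file path, and IDs are all hypothetical, not from any particular product; the point is simply that the audio is paired with its source-system metadata from the start.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CallRecord:
    """One recorded interaction plus the metadata pulled from the source system."""
    audio_path: str        # location of the recorded audio
    agent_id: str          # which agent handled the interaction
    customer_id: str       # who the customer was
    started_at: datetime   # when the interaction occurred

# Ingest one call from a (hypothetical) recorder export
call = CallRecord(
    audio_path="calls/0001.wav",
    agent_id="agent-42",
    customer_id="cust-9001",
    started_at=datetime(2019, 12, 5, 9, 30),
)
```

Keeping the metadata attached to the audio is what later allows results to be sliced by agent, customer, or time period.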
Next, the audio undergoes the speech recognition process, where sounds are turned into text. At the same time, acoustic signals such as agitation and silence are extracted, and text transcripts are normalized into a consistent form. If multiple channels are used for customer contacts (email, chat, etc.), the differences between those formats need to be reconciled so that a single system and process can analyze all of the contacts. The end result is a unified data view for all types of customer interactions.
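The normalization step can be sketched as a function that maps each channel's native fields onto one shared schema. This is an illustrative toy, not a vendor API: the channel names, payload keys, and `silence_pct` field are assumptions made up for the example.

```python
def normalize_contact(channel: str, payload: dict) -> dict:
    """Map channel-specific fields onto one shared schema so every
    contact can be searched and analyzed the same way."""
    if channel == "call":
        text = payload["transcript"]               # output of speech recognition
        silence = payload.get("silence_pct", 0.0)  # acoustic signal from the audio
    elif channel == "chat":
        text = " ".join(payload["messages"])
        silence = 0.0                              # no acoustic signals for text channels
    elif channel == "email":
        text = payload["body"]
        silence = 0.0
    else:
        raise ValueError(f"unknown channel: {channel}")
    return {"channel": channel, "text": text.lower().strip(), "silence_pct": silence}

# A chat and a call end up in the same unified shape
chat = normalize_contact("chat", {"messages": ["Hi,", "I need help"]})
voice = normalize_contact("call", {"transcript": "Thanks for calling", "silence_pct": 12.0})
```

However the real implementation looks, the payoff is the same: downstream search and analysis only ever sees one contact format.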
Finally, the system automatically analyzes the interactions for certain language patterns to categorize or tag contacts as containing certain language or characteristics. Advanced speech analytics solutions such as CallMiner Eureka also support automatic scoring. This combines the presence of certain language and other key metrics into an index that measures various performance indicators such as agent quality, customer satisfaction, emotion, and first contact resolution.
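The categorization and scoring ideas above can be illustrated with a deliberately simple sketch: tag a contact when known phrases appear in its transcript, then combine tags and a silence metric into a 0–100 index. The category names, phrases, and weights are invented for illustration and bear no relation to how any commercial product actually scores calls.

```python
# Hypothetical category definitions: a contact gets a tag when any listed phrase appears.
CATEGORIES = {
    "cancellation_risk": ["cancel my account", "switch providers"],
    "positive_sentiment": ["thank you so much", "great service"],
}

def tag_contact(transcript: str) -> list:
    """Return every category whose language patterns appear in the transcript."""
    text = transcript.lower()
    return [name for name, phrases in CATEGORIES.items()
            if any(phrase in text for phrase in phrases)]

def score_contact(transcript: str, silence_pct: float) -> float:
    """Toy quality index: reward positive language, penalize churn
    language and dead air, then clamp to a 0-100 scale."""
    tags = tag_contact(transcript)
    score = 50.0
    if "positive_sentiment" in tags:
        score += 25.0
    if "cancellation_risk" in tags:
        score -= 25.0
    score -= silence_pct * 0.5   # long silences drag the score down
    return max(0.0, min(100.0, score))
```

Real solutions use far richer language models and many more inputs, but the shape is the same: language patterns become tags, and tags plus metrics become an index a supervisor can act on.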
Discovery, category analysis, and score analysis are delivered through a web interface that allows users to search for contacts using any criteria, visualize data in any number of ways, and conduct automatic topic analysis. All of this data can be put into action by providing direct feedback to analysts, supervisors, and agents through notifications and reports.
What’s the bottom line?
Speech analytics can extract valuable business intelligence that would otherwise be lost in random call sampling. Traditionally, the most powerful returns are realized in the contact center, where speech analytics can be used to identify the reasons why customers call the company and what causes dissatisfaction. It also helps contact centers improve compliance, operational efficiency, and agent performance. Today, some companies are implementing speech analytics as part of a greater Customer Relationship Management (CRM) strategy, using the intelligence mined from customer interactions to continuously improve processes throughout the entire business.
Learn more about adding speech analytics to your contact center strategy.
By John Kullmann | December 5th, 2019 | CallMiner