This paper is the second installment of our series on the impact of call fidelity on automated speech analytics, specifically on transcription and scorecard accuracy.
In the first part, we discussed the importance of high fidelity audio recordings for fully utilizing CallMiner features. We also covered the use of aliases, which are especially important when call fidelity is poor. Finally, we defined an initial framework for quantifying the impact of fidelity on the accuracy of the results.
In this paper, we go one step further and delve into the details of the impact quantification process for call fidelity improvements. The quantified results are shown in multiple tables, with analysis illustrating the direct positive relationship between call fidelity and transcription accuracy.
Based on our experience engaging with clients, traditional QA as implemented in contact centers has several shortcomings:
- Highly subjective: While most contact centers have scoresheets, the actual review of calls remains highly subjective and unquantified.
- Sample too small: Sample sets used for QA analysis are random and very small. Before engaging with us on CallMiner, one of our clients held a weekly QA review in which they randomly picked 10 calls from the thousands of calls that came in that week. Clearly, such a sample is far too small and too random to represent the dynamics of the whole contact center.
- Low efficiency: The company often needs to staff a QA specialist or have contact center supervisors spend extra hours sitting down with agents to listen to a select set of calls, hoping to find some coachable moments. And of course, since the samples evaluated are so small, the agent retort is often: "I had 100 good calls this week and the one we reviewed happens to be the worst one."
All three of these major issues can be solved proactively with the CallMiner Speech Analytics platform, where the system takes in 100% of your customer interactions. In addition to call recordings, the system can also ingest chat sessions, emails, texts, and social conversations and bring them all together to analyze all of your customer interactions. Your call recordings (plus chat, email, text, social) are an unmined treasure of customer and agent information, and automated speech analytics is the key that unlocks the valuable business insights contained within this trove.
Call fidelity is by far the most important success factor for speech analytics with CallMiner: most insights come from the transcriptions of the audio, and without good fidelity the transcriptions will be problematic. The more accurate your transcriptions are, the more benefit you will derive from your speech analytics platform, so it is imperative that call fidelity be good.
Our first fidelity paper showed a simple case study to illustrate how important fidelity is. This paper will provide additional case studies showing the impact of call fidelity on the accuracy of speech analytics.
Call Fidelity and Transcription Quality
To recap, here is the list of factors that may affect transcription accuracy, along with each factor's potential impact:
Table 1 – Factors Affecting Transcription Accuracy
| Factor | Potential Impact on Accuracy |
| --- | --- |
| Recording channel (mono vs. stereo) | Stereo improves transcript quality and allows precise speaker separation. |
| Audio file quality (mp3 vs. wav) | High-quality uncompressed audio (such as wav) is preferred over compressed audio files (such as mp3). |
| Many speakers; over-talk | Over-talk is much more of an issue in mono; more over-talk generally leads to lower transcription accuracy. |
| Cell phone vs. landline | Calls made on/to cell phones tend to have more disruptions and lower fidelity. |
| Headset quality and usage | Replace bad headsets with good-quality noise-canceling headsets, and keep the headset/handset at a good distance from the mouth when talking. |
| Scripted vs. free-flowing conversation | Consistent language for an issue helps in defining scorecards. |
| Agent or customer accent | Strong accents can be difficult to transcribe. |
| Type of call | For example, service vs. sales calls. |
| Silence on the call | Longer periods of silence are indicative of lower call fidelity. |
| Language of the call | Some languages are harder to transcribe. |
| Background noise | Background noise is always a detriment to call fidelity. |
Following the framework defined in our previous post, we sampled 212 calls for one agent. Half of those calls (106) were low fidelity audio files (24 kbps mp3) while the other half were high fidelity (128 kbps mp3). To maintain language consistency in this comparative analysis, we selected the Agent Introduction score, which captures whether an agent properly introduced himself/herself on the call. The table below summarizes the distribution of the calls based on a manual transcription review:
Table 2 – Sample Calls’ Distribution
Calls' Review Summary

| Metric | Low Fidelity Calls | High Fidelity Calls |
| --- | --- | --- |
| No agent identification occurred | 30 | 35 |
| Incorrectly transcribed identification audio | 39 | 7 |
| Correctly transcribed identification audio | 37 | 64 |
After carefully reviewing the input audio files, generated transcripts, and calculated scores within CallMiner, we were able to quantify the following three quality metrics for both the high and low fidelity segments:
- Transcription Accuracy Rate: the proportion of calls that were correctly transcribed for the selected score.
- False positives due to aliases: the number of false score hits driven by the use of aliases.
- Alias coverage: the share of score hits driven by the use of aliases. The less we depend on aliases, the better.
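Conceptually, all three metrics reduce to simple ratios over reviewed calls and score hits. A minimal Python sketch of the calculations (the function names are ours, and the example counts at the end are hypothetical, not taken from the study):

```python
def transcription_accuracy(correctly_transcribed: int, calls_with_language: int) -> float:
    """Share of reviewed calls containing the score language that were transcribed correctly."""
    return correctly_transcribed / calls_with_language

def false_positive_rate(false_alias_hits: int, total_score_hits: int) -> float:
    """Share of score hits that are false positives caused by alias matches."""
    return false_alias_hits / total_score_hits

def alias_coverage(alias_driven_hits: int, total_score_hits: int) -> float:
    """Share of score hits driven by aliases (lower means less alias dependency)."""
    return alias_driven_hits / total_score_hits

# Hypothetical counts, for illustration only:
print(round(false_positive_rate(10, 50), 2))  # -> 0.2
print(round(alias_coverage(5, 50), 2))        # -> 0.1
```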
The below table summarizes the results we obtained from our analysis:
Table 3 – Fidelity Impact Indicators
Score: Agent Introduction

| Metric | Low Fidelity Calls | High Fidelity Calls |
| --- | --- | --- |
| False Positive Rate (due to aliases) | 20% | 14% |
| Alias Coverage % | 49% | 10% |
Comparing the two groups, we witnessed a huge increase (41 percentage points) in transcription accuracy from improving the fidelity of the audio. At the same time, the false positive rate dropped by 6 percentage points from the level we initially observed with the low fidelity calls.
Lastly, with the better fidelity audio files, transcription quality significantly improved, and in turn we can rely less on aliases to help identify relevant text for the given score. Lower alias usage is good for two reasons: (1) we don't have to spend hours extracting aliases in the first place, and (2) we avoid the false positives they cause. It's worth pointing out that the reason behind such significant improvements in these three quality metrics is that, in this case study, the fidelity of the recordings takes a huge jump from 24 kbps mp3 to 128 kbps mp3.
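As a sanity check, the 41-point accuracy gain follows directly from Table 2's review counts, assuming accuracy is measured as correctly transcribed calls divided by all calls where the identification language actually occurred (our reading of the metric; variable names are ours):

```python
# Review counts from Table 2 (calls where identification language occurred)
low_fidelity = {"incorrect": 39, "correct": 37}   # 24 kbps mp3 group
high_fidelity = {"incorrect": 7, "correct": 64}   # 128 kbps mp3 group

def accuracy(group: dict) -> float:
    # Calls with no identification at all are excluded from the denominator.
    return group["correct"] / (group["correct"] + group["incorrect"])

low = accuracy(low_fidelity)    # ~0.49
high = accuracy(high_fidelity)  # ~0.90
print(f"{(high - low) * 100:.0f} percentage point gain")  # -> 41 percentage point gain
```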
To further validate our results, we chose another sample call set consisting of language around order inquiries from inbound calls. Using a similar approach, we selected a total of 165 calls, with 63 of poor quality (24 kbps mp3) and 102 of better quality (32 kbps mp3).
The below table summarizes the review of these calls with respect to ‘order’ language:
Table 4 – Sample Calls’ Distribution
Calls' Review Summary

| Metric | Low Fidelity Calls | Better Fidelity Calls |
| --- | --- | --- |
| No order language occurred | 6 | 11 |
| Incorrectly transcribed order audio | 30 | 43 |
| Correctly transcribed order audio | 27 | 48 |
It's important to point out that for this second call set, both the 'low' and 'better' fidelity groups fall below CallMiner's recommended minimum bitrate for achieving good transcriptions, which is 64 kbps stereo mp3. Even in the 'better' fidelity group, the calls are recorded at only 32 kbps, half the recommended minimum. Still, we see a 7% increase in transcription accuracy from this slight improvement in audio fidelity.
Note that the increase in audio fidelity in this second case study is much smaller than in the first: the bitrate difference between the call sets in the first experiment (24 kbps vs. 128 kbps) is much bigger than in the second (24 kbps vs. 32 kbps). This result further bolsters our overall view that fidelity has a significant, positive impact on transcription accuracy: the better your recording fidelity, the better your transcription accuracy.
As our two case studies show, call recording fidelity is a critically important factor when engaging with speech analytics. We recommend you check with your internal infrastructure organization or contact your call recording vendor to determine your contact center's current level of call fidelity. This really must be step one in any automated speech analytics work program.
If the fidelity is not good enough, you need to consider upgrading the quality before beginning in earnest with your automated speech analytics work program. Please refer to the fidelity vs transcription accuracy chart in our first paper on fidelity research to see the recommended bitrate of recordings for achieving accurate transcriptions.
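For uncompressed wav recordings, a rough first-pass fidelity check of channel count and sample rate can be scripted with Python's standard library. The thresholds below are illustrative assumptions of ours, not CallMiner requirements (the one figure cited above is a minimum of 64 kbps stereo mp3):

```python
import wave

def check_wav_fidelity(path: str, min_rate: int = 8000, want_stereo: bool = True) -> list:
    """Rough pre-check of a wav recording: channel count and sample rate.

    Thresholds are illustrative; confirm real minimums with your
    speech analytics vendor. Returns a list of issues (empty = passed).
    """
    with wave.open(path, "rb") as w:
        channels = w.getnchannels()
        rate = w.getframerate()
    issues = []
    if want_stereo and channels < 2:
        issues.append("mono recording: no per-channel speaker separation")
    if rate < min_rate:
        issues.append(f"sample rate {rate} Hz is below {min_rate} Hz")
    return issues
```

Running this over a sample of your recordings gives a quick inventory of which files are likely to transcribe poorly before any analytics work begins.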
As part of our customer onboarding process, Macrosoft will work with you to examine your call fidelity and make sure it is sufficient to get the most out of CallMiner's speech analytics platform. Not sure whether your recording quality is ready for speech analytics? Contact us and we will be happy to work with you to find the answer.
By Muhammad Saqib, Mingren Xiang, Allen Shapiro, Ronald Mueller | December 1st, 2020 | CallMiner