Industry

Why product teams at top call tracking solutions are turning to AI

This article looks at the top Speech AI models for product teams to integrate into call tracking solutions, including what they are, how they work, and real-world use cases.

Eighty-four percent of marketing teams agree that phone calls have higher conversion rates than any other form of engagement. But the manual tasks associated with high call volumes—note taking, logging CRM data, QA reviews—can be labor intensive.

Call tracking solutions ease this burden for marketing and sales teams with suites of AI-powered automation tools.

In this article, we’ll cover what call tracking solutions are, as well as the AI models behind call tracking tools. We’ll also explore some of the top call tracking companies that have already built AI-powered call tracking tools for their platforms—and achieved impressive results.

What are call tracking solutions?

Call tracking solutions offer suites of tools for more effective lead tracking, lead management, and call analytics for companies that process large volumes of phone calls.

The solutions use Dynamic Number Insertion (DNI) to track the online activity of leads across emails, social media posts, PPC ads, and more. When a lead calls the company, the call tracking solution can identify which prior actions led to the call, informing both the conversation and future campaigns.

Many call tracking tools also offer form tracking, lead valuing, call flows, lead searching/filtering, lead qualifications, custom reporting, and more.

These solutions then aggregate the collected data into targeted call analytics for end users.

Speech AI for call tracking 

Thanks to recent advances in Artificial Intelligence (AI) research, Automatic Speech Recognition, or ASR, models today are more accessible, affordable, and accurate. The best Speech-to-Text APIs can transcribe real-time and asynchronous audio and video streams at near-human-level accuracy. This makes highly accurate call transcription a natural starting point for most product teams at call tracking solutions.
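Most Speech-to-Text APIs for asynchronous audio follow the same pattern: submit a file, then poll a job until the transcript is ready. Here is a minimal sketch of that polling loop; the endpoint behavior and field names ("status", "text") are illustrative assumptions, not any specific vendor's schema:

```python
import time

def poll_transcript(fetch_status, transcript_id, interval=1.0, max_attempts=50):
    """Poll a transcription job until it completes or errors.

    `fetch_status` is any callable that returns a dict with a "status"
    field ("queued", "processing", "completed", or "error") and, once
    completed, a "text" field. These names are illustrative, not a
    specific vendor's schema.
    """
    for _ in range(max_attempts):
        job = fetch_status(transcript_id)
        if job["status"] == "completed":
            return job["text"]
        if job["status"] == "error":
            raise RuntimeError(job.get("error", "transcription failed"))
        time.sleep(interval)
    raise TimeoutError("transcript not ready after polling")
```

Passing the fetcher in as a callable keeps the loop testable and independent of any one HTTP client or SDK.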

Once the solution has accurate, readable transcription, product teams can also build conversational AI analytics tools on top of this call transcription data with the help of Audio Intelligence models and Large Language Models (LLMs). 

Audio Intelligence models include a host of models for analyzing conversational data, such as Text Summarization, Content Moderation, Sentiment Analysis, Topic Detection, Entity Detection, PII Redaction, and more.

LLMs are machine learning models that understand, generate, and interact with human language. For example, LeMUR, a framework for applying LLMs to spoken data, lets users answer specific questions, create custom summaries, and perform other specified tasks on audio data. 

Product teams at call tracking solutions can integrate these advanced AI models into their platforms to build high-utility call tracking solutions on top of the call transcription data.

Now, let’s dive deeper into how industry-leading call tracking solutions are successfully incorporating Speech AI models into their tools today.

Use case 1: Amplifying conversational intelligence

CallRail is a Software as a Service (SaaS) solution that offers call tracking, form tracking, conversation intelligence, and other marketing analytics products. To gain a greater competitive advantage, CallRail leadership wanted to incorporate AI models that would help them build out their product roadmap and scale their offering.

The call tracking solution started by integrating highly accurate call transcription into its platform. 

The product team then identified two key Audio Intelligence areas to expand into first:

  • Detecting Important Words and Phrases
  • Text Summarization
Source: CallRail

Detecting Important Words and Phrases

To make each call transcription more digestible at a glance, CallRail needed an API that could detect important words and phrases. The Detect Important Words and Phrases model automatically identifies key moments in the transcription text, helping CallRail surface important content for its end users, such as commonly asked questions, frequently used keywords, and customer requests.

Text Summarization

Text Summarization models automatically generate key highlights and summaries from phone call conversations. Some even provide time-stamped summaries to make matching the text summary to the appropriate point in the audio or video stream easier for end users. For example, AssemblyAI’s Auto Chapters Summarization API segments phone calls into logical, time-stamped chapters when the conversation changes topic. Then, the API outputs a summary for each chapter, similar to what YouTube displays beneath its videos.
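To illustrate, a time-stamped chapter list like the one described above can be rendered for display in a few lines. The chapter fields below (`start` in milliseconds, `headline`) mirror the general shape of such a response and are assumptions, not an exact schema:

```python
def format_ms(ms):
    """Convert a millisecond offset to an M:SS display timestamp."""
    seconds = ms // 1000
    return f"{seconds // 60}:{seconds % 60:02d}"

def render_chapters(chapters):
    """Render chapter dicts (assumed keys: 'start', 'headline') as
    display lines like '1:23  Pricing discussion'."""
    return [f"{format_ms(c['start'])}  {c['headline']}" for c in chapters]
```

Each rendered line then doubles as a clickable anchor back into the audio, the same way YouTube chapter timestamps work.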

Text Summarization models make transcripts easier to process, speeding up QA processes. They can also help automate the CRM data transfer process.

With speech transcription and Audio Intelligence, CallRail has seen call transcription accuracy improve by 23% and doubled the number of Conversation Intelligence customers using its platform.

Use case 2: Complying with regulations and protecting callers

WhatConverts is a lead tracking solution for marketing agencies and clients. To facilitate easier, smarter call tracking for its customers, WhatConverts wanted to integrate automatic call transcription into its platform. The company also needed to apply PII Redaction to each transcript to meet the compliance and regulatory requirements of its customers.

Accurate Call Transcription

Extremely accurate call transcription was the most important first step for WhatConverts.

“A quick glance at the transcript can tell you if the lead is quotable and should be passed on to sales, or if it’s junk,” explains Mac Mischke at WhatConverts. “That’s a huge piece of information that you need to know right away whenever a new lead comes in.”

In addition, call transcripts let WhatConverts' customers quickly assign value to new leads, review best practices, and identify areas of improvement for their customer service and sales teams.

To make the transcripts even easier to process at a glance, WhatConverts displays each conversation as message bubbles between callers and recipients. With the message bubbles, users can more quickly pinpoint key moments in the conversation.

If the user wants to listen to a specific moment in the conversation, they simply need to click the text in the transcript to be automatically taken to that point in the audio file.

PII Redaction

WhatConverts also works with many end users who have security and compliance needs, so the company wanted to be able to detect and redact PII, or Personally Identifiable Information, from each transcription.

PII Redaction models automatically detect and remove sensitive or personal content from a transcription text, replacing it with a series of “#” characters corresponding to the number of redacted digits. Redactable PII includes social security numbers, credit card numbers, driver’s license numbers, home addresses, phone numbers, and more.
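The digit-masking behavior described above can be sketched in a few lines. Real PII Redaction models use machine learning to detect sensitive entities in context; the two regex patterns below are only a toy stand-in to show the "#"-per-digit replacement:

```python
import re

# Toy patterns for two PII formats. Real models detect many more
# categories, and do so from context rather than fixed patterns.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US social security numbers
    re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),  # US phone numbers
]

def redact(text):
    """Replace each digit of a detected PII match with '#',
    preserving separators so the redacted shape stays readable."""
    for pattern in PII_PATTERNS:
        text = pattern.sub(lambda m: re.sub(r"\d", "#", m.group()), text)
    return text
```

Keeping the separators intact means a reviewer can still tell an SSN-shaped redaction from a phone-number-shaped one without seeing the digits.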

For call tracking solutions, PII Redaction is an important addition to call transcription to meet privacy compliance requirements, laws, and/or regulations.

With accurate, automatic call transcription and PII Redaction, WhatConverts improved its transcription accuracy by 10% and can offer an industry-leading service to its customers.

Use case 3: Identifying key insights from conversations

In addition to call transcription, Detecting Important Words and Phrases, Text Summarization, and PII Redaction, product teams at call tracking solutions need to build tools that can intelligently identify key areas of conversations.

Three main Audio Intelligence tools can help these teams meet this need: Sentiment Analysis, Topic Detection, and Entity Detection, along with Large Language Models.

Sentiment Analysis models automatically label speech segments in a transcription text as positive, negative, or neutral. For example, a Sentiment Analysis model would label the statement “I love this product” as positive and “I’m getting frustrated” as negative.

Sentiment Analysis can track how agent and customer sentiment is trending throughout each section of the conversation. It can even be combined with other Audio Intelligence models, like Text Summarization, to tie a sentiment to each summary. Then, managers can use this data to quickly flag areas of conversation for further review, identify key buy indicators, or note potential churn risks to follow up on in future conversations.
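As a sketch of that workflow, a reviewer-facing tool might flag whether a call's sentiment improved or declined between its first and second half, given per-segment labels like those a Sentiment Analysis model returns. The segment shape here is an assumption for illustration:

```python
def sentiment_trend(segments):
    """Score each half of a call (+1 positive, -1 negative, 0 neutral)
    and report whether sentiment improved, declined, or held steady.
    `segments` is a list of dicts with a "sentiment" key (assumed shape)."""
    scores = {"POSITIVE": 1, "NEGATIVE": -1, "NEUTRAL": 0}
    mid = len(segments) // 2
    first = sum(scores[s["sentiment"]] for s in segments[:mid])
    second = sum(scores[s["sentiment"]] for s in segments[mid:])
    if second > first:
        return "improved"
    if second < first:
        return "declined"
    return "steady"
```

Calls that return "improved" are exactly the ones worth replaying during coaching, since something the rep said turned the conversation around.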

Sentiment Analysis models can also be used to coach sales representatives during onboarding, training, or review processes. For example, customer sentiment could be identified as tracking negatively at the beginning of a call but as tracking positively by the end. By reviewing the sales rep’s talk track, managers could then point to key turns of phrase or approaches that other agents could incorporate into their conversations as well.

Here’s an example of how Aloware, a contact center software, uses Sentiment Analysis to label sentiments as conversations occur between agents and contacts:

Source: Aloware

Topic Detection and Entity Detection models can be useful here as well. Entity Detection, or Named Entity Recognition, models identify and classify important information in the transcription text. For example, “doctor” is an entity classified as an “occupation”. Topic Detection models identify and label a broader range of topics in the transcription text, such as “baseball” or “women’s fashion”.

With Entity Detection and Topic Detection, product teams can build tools to identify commonly recurring entities and topics and compile them for further analysis. These entities and topics can also be tied to Sentiment Analysis to determine customer feelings and opinions towards products, campaigns, or even agents.
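Compiling recurring entities across calls can be as simple as counting (text, type) pairs. The entity dict shape below is assumed for illustration, not a specific API's response format:

```python
from collections import Counter

def top_entities(entities, n=3):
    """Count recurring (text, entity_type) pairs across transcripts
    and return the n most common with their counts. Each entity is a
    dict with assumed keys 'text' and 'entity_type'."""
    counts = Counter((e["text"].lower(), e["entity_type"]) for e in entities)
    return counts.most_common(n)
```

Lowercasing the entity text merges surface variants ("Doctor" and "doctor") so the counts reflect how often a concept actually recurs.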

Large Language Models (LLMs) can also be used to flag key insights from call data. For example, LLMs can be used to generate lists of action items following a sales or customer call, suggest a follow-up email post-call, or surface additional actionable insights. LLMs can also be fed additional context about the conversational data being processed to tailor the responses further.
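For example, a product team might assemble a transcript-plus-context prompt asking an LLM for action items. The template below is a hypothetical sketch of that prompt construction, not LeMUR's or any other framework's actual API:

```python
def action_item_prompt(transcript, context=""):
    """Build an LLM prompt asking for action items from a call transcript.
    Optional `context` (e.g., 'outbound sales call') steers the response.
    The wording is an illustrative template, not a vendor API."""
    parts = [
        "You are reviewing a phone call transcript.",
        f"Context: {context}" if context else "",
        "List every action item as a short bullet, with an owner if one is named.",
        "Transcript:",
        transcript,
    ]
    return "\n".join(p for p in parts if p)
```

The same pattern extends to follow-up email drafts or custom summaries: only the instruction line changes, while the transcript and context stay fixed.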

Try it: Analyze a conversation with AI

Follow along in this tutorial to learn how to get multiple insights from your call transcription:

Additional Reads: AI for call tracking