Newsletter

Improved Hold Music Detection + Build LLM Audio Apps with LeMUR

Improved hold music detection for more accurate transcripts, plus new cookbooks on building LLM apps for voice data with LeMUR.

Hey 👋, this weekly update covers the latest on our product features, tutorials, and community.

🚀 Improved Hold Music Detection

We have implemented a new heuristic to detect and remove hold music hallucinations. It's already live, and you should see fewer music hallucinations in your transcripts. Check it out here.

Stay up to date on the latest product releases with our changelog.

LeMUR: Build LLM apps on voice data

LeMUR is the easiest way to build applications that apply LLMs to speech. In just a few lines of code, you can search, summarize, ask questions, and generate text across your audio and video data.
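As a rough illustration, here's a minimal sketch using the AssemblyAI Python SDK: transcribe a file, then send LeMUR a free-form prompt over the transcript. The API key, audio URL, and prompt are placeholders you'd replace with your own.

```python
import assemblyai as aai

# Sketch only: assumes the AssemblyAI Python SDK (`pip install assemblyai`)
# and a LeMUR-enabled API key. Replace the placeholders with your own values.
aai.settings.api_key = "YOUR_API_KEY"

# Transcribe an audio file (local path or publicly accessible URL).
transcript = aai.Transcriber().transcribe("https://example.com/customer-call.mp3")

# Ask LeMUR to reason over the transcript with a single prompt.
result = transcript.lemur.task(
    "Summarize this customer call and list any action items."
)
print(result.response)
```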

Check out the following popular LeMUR cookbooks:

Fresh From Our Blog

Extract phone call insights with LLMs in Python: Learn how to automatically extract insights from customer calls with Large Language Models (LLMs) and Python. Read more>>

How to integrate spoken audio into LangChain.js using AssemblyAI: Learn how to apply LLMs to spoken audio with AssemblyAI's new integration for LangChain.js, using TypeScript and Node.js. Read more>>

How to use audio data in LlamaIndex with Python: Learn how to incorporate audio files into LlamaIndex and build an LLM-powered query engine in this step-by-step tutorial. Read more>>

Run LLMs locally - 5 Must-Know Frameworks!: Learn how to run LLMs locally with Ollama, GPT4All, PrivateGPT, llama.cpp, and LangChain.

Convert Hindi Speech to Text (Python Tutorial): Learn how to convert spoken Hindi into text with Python and AssemblyAI's Speech-to-Text library (a minimal code sketch follows this list).

Analyze a Conversation with AI for Free on the Playground: Learn to analyze customer calls with AssemblyAI's speech-to-text API.
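If you want a quick taste of the Hindi tutorial above, here is a minimal sketch assuming the AssemblyAI Python SDK, where a transcription config requests Hindi via its language code; the file name is a placeholder.

```python
import assemblyai as aai

# Sketch only: assumes the AssemblyAI Python SDK and an API key with
# speech-to-text access. "hindi_speech.mp3" is a placeholder file.
aai.settings.api_key = "YOUR_API_KEY"

# Request Hindi transcription via the language code "hi".
config = aai.TranscriptionConfig(language_code="hi")
transcript = aai.Transcriber().transcribe("hindi_speech.mp3", config=config)
print(transcript.text)
```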