Speaker Separation
Written by Erol Toker

Speaker separation is essential to high-quality analytics. If you can't connect who said what, you risk putting your own reps' words in the customer's mouth and getting dramatically different results for qualification, product feedback, and more.

Speaker Attribution (Web Meetings)

Truly uses its AI bot to capture metadata during each meeting about who is speaking at any given moment, then uses that record to attribute speech to each participant on the call.

It also measures 'cross talk' to identify periods where different people may be speaking over each other, and labels these ambiguous spans so they can be excluded from advanced analytics in your data pipeline.
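To make the exclusion step concrete, here is a minimal sketch of filtering cross-talk segments out of attributed transcript data before computing a talk-time metric. The field names (`speaker`, `start`, `end`, `cross_talk`) are illustrative assumptions, not Truly's actual schema:

```python
# Hypothetical attributed transcript segments; times are in seconds.
# Segments flagged as cross talk have no reliable speaker attribution.
segments = [
    {"speaker": "rep", "start": 0.0, "end": 4.2, "cross_talk": False},
    {"speaker": "customer", "start": 4.2, "end": 9.8, "cross_talk": False},
    {"speaker": None, "start": 9.8, "end": 11.0, "cross_talk": True},
]

# Exclude the ambiguous spans before running analytics.
clean = [s for s in segments if not s["cross_talk"]]

# Example metric: total talk time per speaker, on clean segments only.
talk_time = {}
for s in clean:
    talk_time[s["speaker"]] = talk_time.get(s["speaker"], 0.0) + (s["end"] - s["start"])
```

Any downstream metric (talk ratio, longest monologue, interruption counts) would run on `clean` rather than the raw segment list.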

Channel Separation (Phone Calls)

On phone calls, Truly separates audio into different channels (internal speakers in one, external in the other). This ensures that every transcribed word can be accurately tied back to its source based on the raw audio.
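The idea behind channel separation can be sketched with a few lines of Python. This is an illustration of the general technique (de-interleaving 16-bit stereo PCM into two mono streams), not Truly's actual pipeline:

```python
import struct

def split_channels(stereo_frames: bytes) -> tuple[bytes, bytes]:
    """De-interleave 16-bit little-endian stereo PCM into two mono streams.

    In a two-channel call recording, channel 0 might hold the internal
    speaker and channel 1 the external party, so each stream can be
    transcribed and attributed independently.
    """
    n = len(stereo_frames) // 2  # total 16-bit samples across both channels
    samples = struct.unpack("<%dh" % n, stereo_frames)
    internal = struct.pack("<%dh" % (n // 2), *samples[0::2])  # channel 0
    external = struct.pack("<%dh" % (n // 2), *samples[1::2])  # channel 1
    return internal, external
```

Because each mono stream contains exactly one side of the conversation, attribution comes directly from the recording itself rather than from any guess about who is speaking.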

Many transcription vendors use an alternative approach called 'diarization', which separates speakers by best-guessing who is talking from a 'voice print' kept on file. The problem with this is that accuracy can be degraded by environmental factors like your microphone, background noise, or even having a cold.
