System Scalability
Written by Erol Toker
Updated over a week ago

Truly's global telephony back end has been battle-tested over the past 8 years by organizations with hundreds of users. It achieved 99.999% uptime in 2021 thanks to a sophisticated system with heavy redundancy built in. This article describes that redundancy.

Data Center Footprint

  • 3 AWS regions (Virginia, Frankfurt, Sydney).

  • Multiple Availability Zones in each region

Load Balancing

  • Requests are globally load balanced via Route53 to the nearest available data center

  • Within each region, requests are load balanced across several redundant SBCs (Session Border Controllers)

  • Within each AZ, requests are load balanced across several App/Telephony Servers

  • Rate limiting at the perimeter at the User/App level, the API/IP Address level, and the SBC unique-phone-number level
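Perimeter rate limiting of the kind described above is commonly implemented as a per-key token bucket, where the key is a user ID, a source IP address, or a calling phone number. A minimal sketch (the class and parameter names are illustrative, not Truly's actual implementation):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-key token-bucket rate limiter. The key can be a user ID,
    an API client's IP address, or a phone number."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = defaultdict(lambda: capacity)
        self.updated = defaultdict(time.monotonic)

    def allow(self, key):
        now = time.monotonic()
        elapsed = now - self.updated[key]
        self.updated[key] = now
        # refill proportionally to elapsed time, capped at capacity
        self.tokens[key] = min(self.capacity,
                               self.tokens[key] + elapsed * self.rate)
        if self.tokens[key] >= 1:
            self.tokens[key] -= 1
            return True
        return False
```

Each key gets its own bucket, so a single abusive IP or phone number can be throttled without affecting other traffic.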

Database

  • Core database on AWS-managed RDS, with failover to a hot standby within 5 minutes

  • Requests load balanced across read and write replicas (10:1 read:write provisioning)

  • Application-level redundancy that allows traffic to pass through during partial database failures
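The read/write split and degradation behavior above can be sketched as a small router: writes always go to the primary, reads are spread across healthy replicas, and reads fall back to the primary if every replica is down. This is a hypothetical illustration; the connection names are placeholders:

```python
import random

class ReplicaRouter:
    """Route writes to the primary and reads across healthy replicas,
    degrading reads to the primary when no replica is available."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = list(replicas)
        self.healthy = set(replicas)

    def mark_down(self, replica):
        # health checks would call this on connection failures
        self.healthy.discard(replica)

    def mark_up(self, replica):
        if replica in self.replicas:
            self.healthy.add(replica)

    def route(self, is_write):
        if is_write or not self.healthy:
            # writes always hit the primary; reads degrade to the
            # primary only if every replica is marked unhealthy
            return self.primary
        return random.choice(sorted(self.healthy))
```

With roughly 10 read replicas per writer, spreading reads this way keeps the primary free to serve writes and failover traffic.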

Carrier Redundancy

  • At least 3 redundant termination providers in each market, with at least one Tier-1 provider
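Carrier redundancy of this sort typically means trying termination providers in priority order and failing over when one rejects or times out. A minimal sketch, assuming a `dial` callable supplied by the telephony layer and hypothetical provider names:

```python
class CarrierFailover:
    """Attempt call termination through providers in priority order
    (Tier-1 first), falling back to the next provider on failure."""

    def __init__(self, providers):
        self.providers = list(providers)  # ordered by preference

    def place_call(self, number, dial):
        errors = []
        for provider in self.providers:
            try:
                # dial(provider, number) is assumed to raise
                # ConnectionError when the carrier cannot complete
                return dial(provider, number)
            except ConnectionError as exc:
                errors.append((provider, str(exc)))
        raise RuntimeError(f"all termination providers failed: {errors}")
```

Because each market has at least three providers, a single carrier outage degrades to a retry rather than a failed call.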
