Throughput vs. latency

If throughput were the width of a drinking straw in a glass of orange juice (how much juice you can pull through per second), latency would be how long it takes the juice to get from the glass to your mouth through that straw. It's basically a metric of delay: how long do you have to wait before enjoying the juice?

In other words: throughput is the number of data packets successfully sent per second, and latency is the time it actually takes those packets to get there. So the two terms are related – they both describe data transfer and speed – and they're basically two sides of the same coin, but still different metrics. You want to maximize throughput but minimize latency.

Drink more juice, and drink it faster. Got it? 
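
If you prefer numbers to juice, here's a minimal, hypothetical Python sketch (the `send` function and its 20 ms delay are made up purely for illustration) that measures both metrics from the same batch of messages:

```python
import time

def send(message: str) -> None:
    """Stand-in for a real network send; pretend each message takes ~20 ms."""
    time.sleep(0.02)

messages = [f"message {i}" for i in range(50)]

latencies = []
batch_start = time.monotonic()
for message in messages:
    start = time.monotonic()
    send(message)
    latencies.append(time.monotonic() - start)   # latency: delay for each message
elapsed = time.monotonic() - batch_start

throughput = len(messages) / elapsed             # throughput: messages per second
avg_latency_ms = 1000 * sum(latencies) / len(latencies)

print(f"throughput: {throughput:.1f} messages/second")
print(f"average latency: {avg_latency_ms:.1f} ms")
```

In this toy setup, higher throughput means more messages go out per second, while lower latency means each individual message spends less time in transit.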

Why should you care about throughput vs. latency?  

Throughput and latency are both metrics you should watch if you want to make sure you have a healthy network – whether you're sending VoIP, SMS, or other types of data packets.

High latency can be a red flag that there’s something wrong with your network.  

How does Sinch work with throughput and latency?  

For messages that need to be delivered immediately, like two-factor authentication, banking, or ticketing applications, low latency is crucial – and we get it! Here at Sinch, we aim to deliver every message as quickly as possible (while respecting the requested send time, of course).

When you sign an agreement with Sinch, it might include a specific throughput level. Depending on the interface you use, Sinch will either slow down responses to your submission requests (SMPP) or accept your submissions and queue them. In both scenarios, the messages enter a queueing environment that is managed as a first-in, first-out (FIFO) queue per customer.
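
The details depend on your contract and interface, but the client-side pattern is easy to reason about. Here's a minimal, hypothetical Python sketch (not Sinch's API – `ThrottledFifoSender` and its names are invented for illustration) of a sender that queues submissions in first-in, first-out order and stays under an agreed throughput:

```python
import time
from collections import deque


class ThrottledFifoSender:
    """Client-side sketch: queue messages in arrival order and send them
    no faster than an agreed throughput (messages per second)."""

    def __init__(self, max_per_second: float):
        self.max_per_second = max_per_second
        self.queue = deque()      # first in, first out
        self._next_slot = 0.0     # earliest time the next send is allowed

    def submit(self, message: str) -> None:
        self.queue.append(message)

    def flush(self) -> None:
        interval = 1.0 / self.max_per_second
        while self.queue:
            now = time.monotonic()
            if now < self._next_slot:
                time.sleep(self._next_slot - now)   # stay under the agreed rate
            self._next_slot = max(now, self._next_slot) + interval
            self._send(self.queue.popleft())        # FIFO: oldest message first

    def _send(self, message: str) -> None:
        # Placeholder for the real submission call (HTTP API, SMPP bind, etc.).
        print(f"sent: {message}")


# Example: an agreement that allows 10 messages per second.
sender = ThrottledFifoSender(max_per_second=10)
for i in range(25):
    sender.submit(f"message {i}")
sender.flush()    # takes roughly 2.5 seconds at 10 messages/second
```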

Max your throughput – get in touch!
