What do we mean when we talk about the speed of a function, or how fast a database is? “Speed” is an ambiguous term, and we could build better systems if we stopped using it in favor of more specific concepts like bandwidth and latency.

Bandwidth
measures the volume of data moved in a given period of time: 300 bytes per second or 1 million records per day.
Latency
measures how long it takes data to travel or be processed. It’s expressed in units of time: a delay of 300 milliseconds or 500 seconds to process a record.
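
To make the distinction concrete, here’s a minimal Python sketch that measures both for a hypothetical `process` function (a stand-in for whatever work your system actually does). Latency is how long one record takes; bandwidth (throughput) is how many records complete per unit of time.

```python
import time

def process(record):
    # Hypothetical work: parsing, a database write, a network call, etc.
    time.sleep(0.001)
    return record

records = list(range(1000))

latencies = []
start = time.perf_counter()
for record in records:
    t0 = time.perf_counter()
    process(record)
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

# Latency: time per record, in units of time.
print(f"average latency: {sum(latencies) / len(latencies) * 1000:.2f} ms")
# Bandwidth: volume of data per period of time.
print(f"throughput: {len(records) / elapsed:.0f} records/second")
```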

There’s a trade-off between bandwidth and latency. In the simplest case, bandwidth can be (artificially) increased with a buffer, at the expense of increased latency: the system handles more records overall, but each one may sit around waiting to be processed.
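
As a rough illustration of that trade-off, here’s a small Python sketch with a made-up `flush` function whose cost is fixed regardless of batch size (think one network round trip per call). Buffering records into batches raises records-per-second considerably, while each individual record waits longer before it is written.

```python
import time

FLUSH_OVERHEAD = 0.005  # hypothetical fixed cost per flush, e.g. one round trip

def flush(batch):
    # Stand-in for a real sink: one write call no matter how large the batch is.
    time.sleep(FLUSH_OVERHEAD)

def run(records, batch_size):
    buffer, start = [], time.perf_counter()
    for record in records:
        buffer.append(record)
        if len(buffer) >= batch_size:
            flush(buffer)
            buffer.clear()
    if buffer:
        flush(buffer)
    elapsed = time.perf_counter() - start
    return len(records) / elapsed

records = list(range(2000))
# Unbuffered: every record is flushed immediately -- low latency, low bandwidth.
print(f"batch size 1:   {run(records, 1):.0f} records/second")
# Buffered: records wait for a full batch -- higher bandwidth, higher per-record latency.
print(f"batch size 100: {run(records, 100):.0f} records/second")
```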

Reducing latency can increase bandwidth utilization, since more records become available in a given time period. However, low-latency systems often have lower peak bandwidth capacity, since processing power and network resources are spent moving each record as fast as possible instead of maximizing overall efficiency.

A car can zoom down an empty highway at 60 miles per hour and arrive at its destination quickly, but relatively few vehicles travel over that stretch of road (low latency / low bandwidth). Conversely, at rush hour the highway is full of cars and traffic moves slowly. More cars get through overall, but the travel time for any individual vehicle is longer (high latency / high bandwidth).

The relationship between bandwidth and latency can be complex in practice, but it’s important to keep the difference in mind and not fall back on ambiguous concepts like “speed”.

Never underestimate the bandwidth of a station wagon full of tapes hurtling down the highway. — Andrew S. Tanenbaum

