You’ve probably seen the bandwidth advertised if you’ve ever shopped for a new internet connection. You’ve probably seen your actual measured bandwidth if you’ve performed an internet speed test. While it’s evident that higher numbers are better, it can be a little unclear precisely what bandwidth is if you’re not all that familiar with the terminology.
What Is Bandwidth?
Bandwidth is a measure of the maximum possible transmission rate of a connection. In some edge cases, such as internet connections, the advertised bandwidth may not be a hard limit: some countries have legislation requiring ISPs to meet the advertised speeds for a specific, generally large, proportion of the customer base. Where this is the case, ISPs typically provide a little more than advertised to avoid potential lawsuits. For the actual bandwidth of a cable or wireless transmission technology, however, the bandwidth is the absolute upper limit on the rate at which data can be transmitted.
As with any measure of data, bandwidth is measured in bits or bytes. A bit is a single unit of binary data, either 1 or 0. A byte is a group of eight bits, the standard grouping. Thankfully, modern bandwidth is very high, in the millions or billions of bits per second. This is generally displayed as megabits or gigabits per second, or megabytes or gigabytes per second. The standard unit contractions for these are Mbps, Gbps, MBps, and GBps, respectively. However, these sometimes have the “p” replaced with a “/” as in other units over time, such as Mb/s.
Note: Units measured in bits always use a lowercase “b,” i.e., Gb/s. Bytes are always represented with a capital “B,” i.e., MB/s.
It’s key to remember that anything measured in bytes will look eight times smaller than the same thing measured in bits. For example, a 1 gigabit per second fiber connection provides 125 megabytes per second. This conversion is essential because bandwidths, such as your internet speed, are typically listed in multiples of bits per second, while file sizes are generally listed in multiples of bytes.
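The bits-to-bytes conversion above can be sketched in a couple of lines of Python (the helper names here are just for illustration):

```python
def bits_to_bytes(bits: float) -> float:
    """A byte is eight bits, so divide by eight."""
    return bits / 8

def gbps_to_mbytes_per_s(gbps: float) -> float:
    """Convert gigabits per second to megabytes per second.

    1 Gb/s = 1000 Mb/s, and there are 8 bits per byte,
    so 1 Gb/s works out to 125 MB/s.
    """
    return gbps * 1000 / 8

# A 1 Gb/s fiber connection delivers at most 125 MB/s.
print(gbps_to_mbytes_per_s(1))  # 125.0
```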
While bandwidth is the best-known measure of connection speed, it’s far from the only one. Latency is another important measure to take into account. Latency often isn’t noticeable for connections internal to your computer or local network, but that doesn’t mean it has no effect. Its impact is most often felt on the internet, where it is commonly referred to as “ping.”
Latency is the measure of delay between a request being sent and the recipient starting to receive it. On the internet, latency can vary with the distance to the server you’re communicating with. For example, a standard ping to the US from the UK is around 100 milliseconds. In some cases, if you live near the server location, you might get as low as 10 or even 8 milliseconds. Latency below this doesn’t really happen on the internet, though, as your signal does still have to travel through multiple networks. On local networks, you can get millisecond or sub-millisecond pings. On locally connected memory devices, latency can be low enough to be measured in nanoseconds.
It doesn’t matter how good your bandwidth is: if your latency is high, you will have a poor experience. Take Mars, for example. Even if you had a gigabit internet connection between Mars and Earth, the signal would still take around three minutes to reach Earth at closest approach, and over twenty minutes when the planets are furthest apart, plus the same again to get a response. This isn’t great for browsing the web or trying to drive Mars rovers.
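The Mars delay comes straight from the speed of light; a quick sketch (the distance figures are approximate, since the Earth-Mars separation varies continuously with orbital position):

```python
SPEED_OF_LIGHT_KM_S = 299_792  # speed of light in vacuum, km/s

# Approximate Earth-Mars distances (vary with orbital positions)
MARS_MIN_KM = 54_600_000    # closest approach
MARS_MAX_KM = 401_000_000   # furthest separation

def one_way_delay_minutes(distance_km: float) -> float:
    """Minimum one-way signal delay at the speed of light."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

print(one_way_delay_minutes(MARS_MIN_KM))  # roughly 3 minutes
print(one_way_delay_minutes(MARS_MAX_KM))  # over 22 minutes
```

No amount of extra bandwidth changes this number; only the distance and the speed of light matter.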
Throughput is another measure. It’s very similar to bandwidth but measures the bandwidth actually being used for useful data. It accounts for any signaling overhead and for the fact that some devices may not be able to saturate a high-bandwidth connection.
For example, take a SATA cable. SATA III has a raw signaling rate of 6Gb/s, but its 8b/10b encoding sends ten bits on the wire for every eight bits of data, leaving roughly 600MB/s of usable bandwidth. SATA is traditionally used to connect HDDs, yet an HDD can typically only read data at around 230MB/s. That figure is the throughput: the real measure of data transmitted rather than the theoretical peak bandwidth of the connection. Throughput is critical when the bandwidth of a connection is not the limiting factor.
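A quick sketch of how encoding overhead eats into a raw signaling rate (the helper name is mine; 0.8 is the efficiency of 8b/10b encoding, which carries 8 data bits per 10 wire bits):

```python
def usable_bandwidth_mb_s(line_rate_gbps: float, encoding_efficiency: float) -> float:
    """Usable data bandwidth after line-encoding overhead, in MB/s.

    encoding_efficiency is the fraction of wire bits that carry data,
    e.g. 0.8 for 8b/10b encoding.
    """
    return line_rate_gbps * 1000 * encoding_efficiency / 8

sata3 = usable_bandwidth_mb_s(6, 0.8)  # 600.0 MB/s usable
hdd_read = 230                         # typical HDD sequential read, MB/s

# The drive, not the cable, is the bottleneck here.
print(sata3, hdd_read)
```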
A Classic Example
A big issue with bandwidth can be seen when trying to do large transfers. Imagine a company struck by a disaster that destroyed several critical hard drives; say, a power surge fried them. Thankfully, they had spare drives on hand that they could just swap in, and backups from which they could restore.
Now, however, is when they realize the bandwidth problem. They store data on speedy PCIe Gen3 SSDs, but the backup is stored remotely. The remote site has a gigabit ethernet connection. This sounds great to home users, but 1Gb/s is only 125MB/s, which is slower than an HDD can transfer data. With a backup in the range of 100TB, and even utilizing the total bandwidth of the connection, it will take more than nine days to complete the transfer. This, of course, is bad.
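The nine-day figure is easy to verify with some back-of-the-envelope arithmetic (the helper name is just for illustration):

```python
def transfer_time_days(data_tb: float, rate_mb_s: float) -> float:
    """Days needed to move data_tb terabytes at rate_mb_s megabytes per second."""
    seconds = data_tb * 1_000_000 / rate_mb_s  # 1 TB = 1,000,000 MB
    return seconds / 86_400                    # 86,400 seconds per day

# 100 TB over a fully saturated gigabit link (125 MB/s):
print(transfer_time_days(100, 125))  # just over nine days
```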
This is where an engineer offers a solution. They’ll drive the three-hour trip to the other data center, collect and carefully label all the necessary drives, then drive back with them in their car. The plan is that once they’ve completed the round trip, they can plug the drives in locally and complete the restore process at much faster local transfer speeds.
While this plan may have a terrible three-hour latency and a minimum six-hour round trip time, moving the drives manually offers excellent bandwidth, allowing the whole process to complete in less than a day. This leads to the classic phrase in disaster recovery planning scenarios: “never underestimate the bandwidth of a truck full of hard drives.”
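You can put a number on the car’s “bandwidth” by dividing the data carried by the trip time (the helper name is mine, and the 100TB/six-hour figures come from the scenario above):

```python
def effective_bandwidth_gb_s(data_tb: float, trip_hours: float) -> float:
    """Effective bandwidth of physically moving drives, in GB/s."""
    return data_tb * 1000 / (trip_hours * 3600)  # 1 TB = 1000 GB

car = effective_bandwidth_gb_s(100, 6)  # ~4.6 GB/s
gigabit_link = 0.125                    # 1 Gb/s = 0.125 GB/s

# The car outruns the network link by a factor of roughly 37.
print(car, car / gigabit_link)
```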
Note: The “truck full of hard drives” method is often used to transfer substantial scientific data sets from the location where the data was collected to the supercomputer that will process it.
Bandwidth is a measure of the peak possible transfer speed of a connection. It’s an important measure of connection speed, but generally only if it is the limiting factor. It’s important to be aware of when bandwidth is an important limiting factor and when it isn’t. Other connection speed measures, such as latency and throughput, can also be important limiting factors. Ideally, you want no single large bottleneck, with transfer speeds matching up throughout while providing a useful connection for your use cases.
Some server systems, mainly cloud server usage dashboards, often refer to bandwidth. In this case, they generally don’t mean peak transfer rate. Instead, they refer to the total amount of data transferred over time, typically a day, week, month, or year. Technically, this shouldn’t be referred to as bandwidth. A better name would be “monthly data transferred” or similar, as this is a measure of actual use, not theoretical peak data transfer.
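The difference is easy to see when you convert a sustained rate into a monthly total (the helper name and the 100 Mb/s example rate are assumptions for illustration):

```python
def monthly_transfer_tb(avg_rate_mb_s: float, days: int = 30) -> float:
    """Total data moved over a month at a sustained average rate, in TB."""
    return avg_rate_mb_s * 86_400 * days / 1_000_000  # 86,400 s/day, 1 TB = 1,000,000 MB

# A server averaging 100 Mb/s (12.5 MB/s) moves roughly 32 TB in 30 days --
# that 32 TB is "data transferred", while 100 Mb/s is its actual bandwidth use.
print(monthly_transfer_tb(12.5))
```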
Did this help? Let us know!