Bandwidth and throughput are two networking concepts that are commonly misunderstood. System administrators regularly use these two concepts to help plan, design, and build new networks. Networking exams also include a few bandwidth and throughput questions, so brushing up on these two subjects is a good idea before exam day.
What is Bandwidth?
You probably already have a fairly good idea of what bandwidth is. It is technically defined as the amount of information that can flow through a network in a given period of time. This figure is theoretical, however; the actual capacity available to a particular device on the network is referred to as throughput (which we'll discuss later in this section).
Bandwidth can be compared to a highway in many respects. A highway can only carry a certain number of vehicles before traffic becomes congested. Likewise, bandwidth is finite: there is a limit to its capacity. If we add more lanes to the highway, more traffic can get through. The same applies to networks: upgrading from a 56K modem to a DSL modem, for example, yields much higher transfer rates.
Bandwidth is measured in bits per second (bps). This basic unit is fairly small, however, and you'll more often see bandwidth expressed in kilobits per second (kbps), megabits per second (Mbps), or gigabits per second (Gbps).
Make sure you distinguish between bits and bytes. A megabyte (MB) is certainly not the same as a megabit (Mb), although the two are abbreviated quite similarly. Since there are 8 bits in a byte, simply divide the number of bits by 8 to find the byte equivalent (or, to convert from bytes to bits, multiply by 8).
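The bit/byte conversion rule above can be sketched in a few lines of Python (the function names here are just illustrative):

```python
# There are 8 bits in a byte.
BITS_PER_BYTE = 8

def bits_to_bytes(bits):
    """Divide by 8 to convert bits to bytes."""
    return bits / BITS_PER_BYTE

def bytes_to_bits(num_bytes):
    """Multiply by 8 to convert bytes to bits."""
    return num_bytes * BITS_PER_BYTE

# A megabit is 1,000,000 bits, while a megabyte is 1,000,000 bytes,
# so 8 megabits of data is only 1 megabyte.
print(bits_to_bytes(8_000_000))  # 1,000,000 bytes = 1 megabyte
```

This is why an "8 Mbps" connection moves at most about 1 megabyte of data per second.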
Lastly, it's important to distinguish between speed and bandwidth. Bandwidth is simply how many bits we can transmit per second, not the speed at which they travel. The water pipe analogy helps here: a larger pipe carries more water, but the speed at which the water flows is largely unchanged.
The Difference between Throughput and Bandwidth
Although bandwidth tells us how much information a network can move in a given period of time, you'll find that actual network transfer rates are much lower. We use the term throughput to refer to the bandwidth actually available on a network, as opposed to its theoretical bandwidth.
Several factors may affect the actual throughput a device gets: the number of users accessing the network, the physical media, the network topology, hardware capability, and many others.
To calculate data transfer speeds, we use the equation Time = Size / Theoretical Bandwidth.
Keep in mind that the above equation actually gives the best-case download time. It assumes optimal network conditions, since it uses theoretical bandwidth. To get a better idea of the typical download time, we use a different equation: Time = Size / Actual Throughput.
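Both equations can be worked through in Python; the file size and link rates below are made-up numbers chosen for illustration:

```python
def transfer_time(size_bits, rate_bps):
    """Time = Size / Rate, with size in bits and rate in bits per
    second; returns the transfer time in seconds."""
    return size_bits / rate_bps

# A 10-megabyte file is 10 * 1,000,000 * 8 = 80,000,000 bits.
size = 10 * 1_000_000 * 8

# Best case: divide by the theoretical bandwidth of a 100 Mbps link.
best_case = transfer_time(size, 100_000_000)  # 0.8 seconds

# Typical case: divide by an assumed actual throughput of 40 Mbps.
typical = transfer_time(size, 40_000_000)     # 2.0 seconds

print(best_case, typical)
```

Notice that the same file takes more than twice as long once realistic throughput, rather than theoretical bandwidth, is plugged into the denominator.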
For most exams, you only need to know the basics of bandwidth and throughput. Commit the units of bandwidth to memory; you'll see them on exam day. Cisco also expects CCNA 1 students to know what determines throughput. (As noted above, network topology, the number of users on the network, and other factors will indeed bring throughput down.)
Lastly, remember the simple equation T = S / P to calculate file transfer time, where T is time, S is file size, and P is actual throughput.