Concept of Network Bandwidth



Bandwidth is a measurement of how much data a network medium is capable of carrying at any given time. Bandwidth can be measured in cycles per second, or Hertz (Hz), or in bits per second (bps). Generally, the higher the bandwidth, the more data can be transmitted in a given period.


Throughput vs. Bandwidth:

While bandwidth is a theoretical rating of the capabilities of a network channel, throughput is a measurement of how much data can actually pass through the channel in a given time period. You can think of bandwidth as the number of lanes on a highway, and throughput, or data rate, as the maximum number of cars that can pass a certain point on the highway in a certain length of time. If there are too many cars on the road, traffic slows and throughput is reduced. By adding lanes, or increasing bandwidth, you can improve the throughput under peak conditions; however, much of the bandwidth will be wasted at non-peak times.
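To make the distinction concrete, the sketch below (with hypothetical numbers: a 100 Mbps link, a 250 MB file, and an assumed 10% protocol overhead) compares the ideal transfer time implied by the rated bandwidth with a more realistic estimate based on effective throughput:

```python
# Hypothetical illustration: rated bandwidth vs. effective throughput.
BANDWIDTH_BPS = 100_000_000      # rated link bandwidth: 100 Mbps (assumed)
FILE_BYTES = 250_000_000         # file size: 250 MB (assumed)
OVERHEAD = 0.10                  # assume ~10% lost to headers, ACKs, retransmits

ideal_seconds = (FILE_BYTES * 8) / BANDWIDTH_BPS          # bandwidth alone
effective_bps = BANDWIDTH_BPS * (1 - OVERHEAD)            # achievable throughput
actual_seconds = (FILE_BYTES * 8) / effective_bps

print(f"Ideal transfer time:   {ideal_seconds:.1f} s")
print(f"Realistic estimate:    {actual_seconds:.1f} s")
```

The rated bandwidth sets a ceiling; real throughput always falls somewhat below it, since some of the channel's capacity is consumed by protocol overhead and congestion.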


Latency:

Latency is the time it takes from when an action is initiated to when a result becomes observable. In networking, latency is the time it takes for a transmission to reach its destination. Often, network latency is measured as a round trip: the time it takes a request to be transmitted from a client to a server, plus the time for the result to be transmitted back to the client. The processing time on the server end is not usually included in round-trip latency. A ping is an excellent way to measure latency between two nodes on a network, as only cursory processing is performed by the target node—virtually as soon as a ping is received, a response is sent.
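The round-trip measurement described above can be sketched in a few lines of Python. This is not a real ICMP ping (which requires raw-socket privileges); instead, it measures the round trip to a loopback TCP echo server, which, like a ping target, does only cursory processing before replying:

```python
# Ping-like round-trip latency measurement over loopback,
# using TCP rather than ICMP (raw ICMP needs elevated privileges).
import socket
import threading
import time

def echo_server(sock):
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(64)
        conn.sendall(data)          # echo back immediately: cursory processing only

server = socket.socket()
server.bind(("127.0.0.1", 0))       # let the OS pick a free port
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

with socket.create_connection(server.getsockname()) as client:
    start = time.perf_counter()
    client.sendall(b"ping")
    client.recv(64)                 # block until the echoed reply arrives
    rtt_ms = (time.perf_counter() - start) * 1000

print(f"Round-trip latency: {rtt_ms:.3f} ms")
server.close()
```

Over loopback the result will be a fraction of a millisecond; against a remote host, the same request/response timing reflects real network latency.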


