2008-Aug-20, Wednesday

Suppose you have a [connection X] for your business, and you want to know whether you need to upgrade to a faster [connection Y]. How do you define what bandwidth you need? Do you focus on the average 8am-5pm usage, or do you focus on the spikes of activity? Do you care about total bytes transferred, or do you run a constant ping-test between your server and the outside world to see what latency appears?

I have access to the Sonicwall data archives for a T1 connection. The web interface that they provide can easily show average byte transmission over the course of an hour, but that seems far too coarse a view. So, I went about some data mining to see what other numbers I could produce.

[chart: bandwidth example]

I successfully imported one of the archive files into an Access database. This particular file happened to have nearly 20k records covering a total of 83 minutes of data. Although the data points are identified down to the second, unfortunately an entire file transfer is reduced to just a byte count and a timestamp for the close of the connection; I can't tell how long the connection was active while transferring those bytes. I aggregated the data into per-minute totals. I could break it into smaller chunks of time if I wanted to, but a minute seemed like a nice compromise between too broad and too fine a measure.
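The aggregation step can be sketched like this (a minimal example, not the actual Access query; the sample records and their timestamps are hypothetical stand-ins for the Sonicwall archive rows, which give only a byte count and a connection-close timestamp):

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical records: (timestamp of connection close, bytes transferred),
# mimicking the shape of the Sonicwall archive data.
records = [
    ("2008-08-20 09:00:05", 120_000),
    ("2008-08-20 09:00:47", 450_000),
    ("2008-08-20 09:01:12", 2_300_000),
]

# Sum bytes into per-minute buckets by truncating each timestamp to the minute.
per_minute = defaultdict(int)
for stamp, nbytes in records:
    dt = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
    per_minute[dt.replace(second=0)] += nbytes

for minute, total in sorted(per_minute.items()):
    print(minute, total)
```

Truncating to the minute rather than binning by elapsed transfer time is forced by the data: the logs record only when each connection closed, not how long it was open.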

I calculated the theoretical maximum bandwidth of a T1 connection (using decimal MB instead of binary MB here, since MB transferred were counted decimally).
1536000 bit/sec * 1/8 B/bit * 1/1000 KB/B * 1/1000 MB/KB * 60 sec/min = 11.52 MB/min
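The same unit conversion, written out as a quick sanity check:

```python
# T1 payload capacity: 24 channels x 64,000 bit/s = 1,536,000 bit/s.
BITS_PER_SEC = 1_536_000

bytes_per_sec = BITS_PER_SEC / 8               # 192,000 B/s
mb_per_min = bytes_per_sec * 60 / 1_000_000    # decimal MB per minute
print(mb_per_min)  # 11.52
```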

So there's my chart.  I know from the one spike that during at least one minute of my 83-minute sample, the bandwidth was insufficient to transmit the amount of data that was requested.  But then the average is still far below the maximum capacity. 
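Flagging the saturated minutes is then a simple comparison against that 11.52 MB/min ceiling (a sketch; the `minute_totals` values here are made up, standing in for the 83 per-minute sums from the sample):

```python
T1_MAX_MB_PER_MIN = 11.52  # theoretical ceiling computed above

# Hypothetical per-minute totals in decimal MB.
minute_totals = [2.1, 0.8, 14.3, 3.0]

# A minute whose requested bytes exceed the ceiling could not all be
# transmitted within that minute: the link was saturated.
saturated = [i for i, mb in enumerate(minute_totals) if mb > T1_MAX_MB_PER_MIN]
print(saturated)  # [2]
```

One caveat: because bytes are attributed to the minute the connection closed, a spike above the ceiling really means the traffic spilled over from adjacent minutes, not that physics was violated.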

Should I just throw all this data away though?  Should I instead be using a constant ping test between "inside" servers and "outside" servers to see what kind of latency (delay) is appearing?  Should I be using charts of latency rise and fall instead of total byte transmission?

I know what bandwidth saturation "feels like".  (And it does not feel like this T1 is being taxed that hard.)  But how do I define it?
