Expect actual broadband only when you're asleep or at work. When you're home and awake, expect something more like dialup. You remember dialup, don't you? Welcome back!

In networking, high-quality throughput requires building to peak demand, or at least devising a combination of infrastructure and compensating strategies that adequately approximates that ideal. Windstream doesn't even come close. The problem is that if your ISP doesn't care to give you what they promise, and is disinclined to be honest about it, they can usually dismiss speed-test data as anecdotal or blame it on internal wiring.

Before dealing with Windstream, I didn't think a large telecom would have a systematic policy of trying to deceive its customers, but call 'tech support' and you'll get an amazing runaround: you'll be told to unplug things and restart things by people who obviously have no idea what they're doing, and who assume you're too clueless to notice. You might be told that from 'there' your speed is fine, although if they are actually testing something and not just faking it, their test packets must be getting much better QoS priority than their customers'. A sign that they might actually be testing something: increasingly I got "Hmmm, yeah, that's not good..." A sign that they're faking: I've been told to unplug my ethernet cable and then been told "Ah, see, the speed JUMPED up!" while wireshark on the NIC showed that NO packets were going ANYWHERE. You might be told that they'll send someone out, even though you say that's a waste of a field tech's time because there is nothing on-site to fix. You can point out that your gateway's diagnostics report a Signal/Noise Margin of 25.1 dB and Attenuation of 5.8 dB between you and the DSLAM, that these are excellent numbers, and that MLab's NDT probe always produces variations on the following.
To produce this page, I wrote a Java client/server socket pair that attempts to be as 'generous' as possible to the connection. Its results tend to be about equal to Ookla's, and much more generous than 2wire's. It doesn't include initial latency and socket set-up in its calculations; it only starts the clock after the first (8K) test packet is received, not sent. Its throughput on localhost is ~120 Mbps, on a slow (1.4 GHz) single-core Intel P4, and since the load is normally split between the client and the server, we can pretty safely ignore CPU load as a significant contributor to slow throughput in the sub-3 Mbps range (and indeed CPU usage is empirically negligible). The client runs as a Linux cron job or a Windows scheduled task, but can be invoked manually any time. The server runs on a low-load backbone-connected Linux server. Each run produces a log entry on the client that is then sent to the server. The server's log is rendered out to this page as a PNG by a PHP page that invokes Google's pretty sweet Chart API (although a built-in regression line would have been nice).
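The core of the measurement idea described above can be sketched in a few lines of Java. This is not the author's actual client/server pair, just a minimal, self-contained illustration of the key design choice: the client starts its clock only after the first 8K test packet has been received, so connection setup and initial latency are excluded from the throughput figure. The class name, payload size, and use of an ephemeral localhost port are all illustrative assumptions.

```java
import java.io.*;
import java.net.*;

public class ThroughputSketch {
    static final int CHUNK = 8 * 1024;   // 8K test packet, as in the text
    static final int CHUNKS = 512;       // ~4 MB total payload (assumed size)

    // Streams CHUNKS packets from a server thread to a client socket and
    // returns measured throughput in Mbps. The clock starts only once the
    // first 8K has arrived; bytes before that point are not counted.
    static double measure() throws Exception {
        ServerSocket listener = new ServerSocket(0);  // ephemeral localhost port

        Thread server = new Thread(() -> {
            try (Socket s = listener.accept();
                 OutputStream out = s.getOutputStream()) {
                byte[] buf = new byte[CHUNK];
                for (int i = 0; i < CHUNKS; i++) out.write(buf);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        server.start();

        double mbps;
        try (Socket s = new Socket("localhost", listener.getLocalPort());
             InputStream in = s.getInputStream()) {
            byte[] buf = new byte[CHUNK];
            long received = 0, timed = 0, start = 0;
            int n;
            while ((n = in.read(buf)) != -1) {
                received += n;
                if (start == 0) {
                    // Clock starts AFTER the first 8K is fully received,
                    // excluding socket setup and initial latency.
                    if (received >= CHUNK) start = System.nanoTime();
                } else {
                    timed += n;  // only post-clock-start bytes count
                }
            }
            double secs = (System.nanoTime() - start) / 1e9;
            mbps = timed * 8 / 1e6 / secs;
        }
        server.join();
        listener.close();
        return mbps;
    }

    public static void main(String[] args) throws Exception {
        System.out.printf("%.1f Mbps%n", measure());
    }
}
```

On localhost this mostly measures loopback and JVM overhead, which is the point of the ~120 Mbps calibration figure mentioned above: it bounds how much the tool itself can depress a sub-3 Mbps result.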
100313.2330 - Added a regression line. We seem to be trending upward! Let's see if it's just the weekend...
100314.1207 - Flattened out.
100316.0100 - Changed the auto-check interval to every hour, then took a few hours off.
100316.2250 - Added the ability to split charts and scroll through them (look for side arrows).
100320.1714 - Today, for the first time, the single-chart mean is above 1 Mbps.
100322.1204 - The client was down for 3 hours last night, so Windstream got a free pass at what looks like a time of abysmal performance.
100326.1538 - Got an email from Mollie at Windstream, offering help...
100329.1640 - ...I said great, as long as it's a fix and not just preferential treatment for my packets.
100330.1400 - Got a call from a 'level 2' person at Windstream; he said it was a 'known issue' that they were working on.
100410.0021 - It still all looks pretty much the same to me.
100420.1501 - Notice the big data gap on 4/18? Lines went completely dead throughout the region at about 00:45 on the 18th. Interestingly, DSL was back up before voice, about 9 hours later at 10:00. Local-only voice was back by 22:00. Long distance was up the next morning. I hope nobody in the area needed 911 for those 21 hours. BTW, the mean is back solidly below 1 Mbps, and that is ignoring the dead time on the 18th, which, strictly speaking, should have counted as 9 zeros.
100507.1632 - Well...it's been over a month since Windstream offered to help me with 'my' problem, and nothing has changed. I guess we can safely conclude that they have no answer to my challenge. Have you noticed that every bill is higher than the last? This is the first time I have ever seen a utility do this: inexorably raise the bill every single month, without delivering what they promise, and without explanation. Perhaps it's time to turn over my results to the FTC or PUC, or whoever else is responsible for protecting citizens from this sort of behavior.
100521.1407 - Starting yesterday morning we look fixed! Is it time to retire this page? I'll let the test run a little longer, then, as I promised Mollie on 100329, run it from a neighbor's. If the fix looks real, we might be done! I feel like Ralph Nader or Upton Sinclair. :)
110305.2111 - My neighbor's results were always suspicious-looking, as if the fix had not been real, as if Windstream had somehow been giving special priority to my packets. Now—suddenly—it looks as if Windstream has decided to stop doing that, so we're back! Here, for comparison, are my results in May 2010, when 'the fix was in'—apparently just to shut me up.
110724.1931 - I was in Seattle for a few weeks, and when I came back on July 6 Windstream was completely down for all but 5 or so of the next 72 hours. Their downstream throughput seems to have improved, though. We'll see...
120227.2320 - That big gap starting 120206 wasn't Windstream's fault; I was away on the beach!
120626.2120 - The gap at the end of May WAS Windstream's fault: four whole days completely down!
131215.1800 - After being reasonable (though not great) for a while, peak-time throughput has degraded radically, down to nearly zero, since December 2, 2013.