We have implemented Tom Kelly's scalable TCP algorithm in ns-2, in our TCP-over-UDP (atou), and as a per-flow tunable option in the Net100 kernel. Scalable TCP changes TCP's congestion-window update as follows: each ACK increases cwnd by a fixed amount a (rather than by 1/cwnd), and each loss event decreases cwnd by b*cwnd (Kelly suggests a = 0.01 and b = 0.125, versus standard TCP's effective a = 1/cwnd and b = 0.5).
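As a rough sketch (illustrative only, not the actual ns-2, atou, or Net100 kernel code), the per-ACK and per-loss updates with tunable AI and MD parameters can be written in C as:

    /* Illustrative sketch of the scalable TCP AIMD update; not the
     * actual Net100, atou, or ns-2 implementation.  'ai' is the fixed
     * per-ACK additive increase and 'md' the multiplicative decrease,
     * e.g. Kelly's (md, ai) = (0.125, 0.02) used in the tests below. */
    #include <stdio.h>

    struct aimd {
        double cwnd;   /* congestion window, in segments */
        double ai;     /* added per ACK (a constant, not 1/cwnd) */
        double md;     /* fraction of cwnd dropped on a loss event */
    };

    static void on_ack(struct aimd *s)
    {
        s->cwnd += s->ai;               /* fixed increase per ACK */
    }

    static void on_loss(struct aimd *s)
    {
        s->cwnd -= s->md * s->cwnd;     /* fixed fractional decrease */
        if (s->cwnd < 2.0)
            s->cwnd = 2.0;              /* keep a minimal window */
    }

    int main(void)
    {
        struct aimd s = { 3500.0, 0.02, 0.125 };   /* example values */
        on_loss(&s);
        on_ack(&s);
        printf("cwnd after one loss and one ACK: %.2f\n", s.cwnd);
        return 0;
    }

Because the increase per ACK is a constant rather than 1/cwnd, the window grows by a fixed fraction per RTT, which is what lets the scheme scale to large windows.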
The following graphs plot the TCP congestion window over time.
Here are results from ns-2 with a max cwnd of 3.5 MB (roughly 3500 packets), an RTT of 140 ms,
delayed ACKs, and Kelly (MD, AI) values of (0.125, 0.02).
For comparison we include Floyd's HighSpeed TCP (HS TCP) and standard TCP.
A single packet is dropped in each test.
For the cwnd value in the experiment above, Floyd's HS TCP selects
an MD of 0.26 and an AI of 17.
Standard TCP uses an MD of 0.5 and an AI of 1.
For this cwnd, Kelly's per-ACK increase of 0.02 is equivalent to an AI of 70.
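These equivalences follow because scalable TCP adds a fixed amount per ACK, so its per-RTT growth scales with the window: roughly 0.02 * cwnd segments per RTT (assuming one ACK per segment). A small illustration:

    /* Illustrative: effective per-RTT additive increase of scalable TCP
     * (0.02 per ACK, one ACK per segment assumed) at several window
     * sizes, versus standard TCP's fixed AI of 1 segment per RTT. */
    #include <stdio.h>

    int main(void)
    {
        double a = 0.02;                          /* Kelly per-ACK increase */
        int cwnd[] = { 100, 1000, 3500, 10000 };  /* window sizes, segments */
        for (int i = 0; i < 4; i++)
            printf("cwnd %5d: scalable AI ~ %.0f, standard AI = 1\n",
                   cwnd[i], a * cwnd[i]);
        return 0;
    }

At a window of about 3500 segments this gives the equivalent AI of 70 quoted above.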
For atou, here are tests over the real Internet from ORNL to LBL
using the different AIMD algorithms, again with a single packet drop.
Using our Net100 Linux kernel, here is an iperf
test from ORNL to CERN (RTT 150 ms),
with a UDP burst used to induce loss after the TCP session has started.
The linear recovery phase takes under 3 s, as predicted.
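As a back-of-the-envelope check of that prediction (not taken from the trace): after a single loss cwnd drops by the factor (1 - 0.125) and then grows by roughly (1 + a) per RTT, so recovery takes about ln(1/(1 - 0.125)) / ln(1 + a) round trips regardless of the window size. A minimal sketch, assuming an effective per-RTT growth rate of 0.01 or 0.02 depending on how delayed ACKs are counted:

    /* Rough recovery-time estimate for scalable TCP after a single loss
     * (illustrative only): with multiplicative decrease b and per-RTT
     * growth factor (1 + a), recovery takes ln(1/(1-b)) / ln(1+a) RTTs,
     * independent of cwnd.  Compile with -lm. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double b = 0.125;                 /* multiplicative decrease */
        double rtt = 0.150;               /* ORNL-CERN round-trip time, s */
        double rates[] = { 0.01, 0.02 };  /* assumed per-RTT growth rates */

        for (int i = 0; i < 2; i++) {
            double rtts = log(1.0 / (1.0 - b)) / log(1.0 + rates[i]);
            printf("a = %.2f: about %.1f RTTs = %.1f s\n",
                   rates[i], rtts, rtts * rtt);
        }
        return 0;
    }

Either assumption puts the recovery time under 3 s, consistent with the measurement.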
Kelly's scalable TCP Linux kernel patch includes several other "fixes";
our Net100 scalable TCP includes only our implementation
of Kelly's AIMD modifications.
The data above was collected by a tracer daemon and the Net100 kernel. The instrumented iperf also reported the following TCP variables at the end of the transfer:
Here are three independent tests, with standard TCP (blue),
HS TCP (red), and scalable
TCP (green), between ORNL and CERN using the Net100 Linux kernel.
A UDP blast causes loss and timeout.
Note the effects of the different AIMD algorithms.