David X. Wei (Netlab @ Caltech) | Prof. Pei Cao (CS @ Stanford)
May 2006
This is a patch that runs Linux TCP congestion control algorithms in NS-2, with simulation speed and memory usage similar to other NS-2 TCP implementations (e.g. Sack1). The implementation loosely follows the Linux TCP implementation and can produce results comparable to Linux experimental results.
The patch can be downloaded from here. It is written for NS-2.29 and is also confirmed to be compatible with NS-2.30; you may need some modifications to install it for other versions. To install the patch, take the following steps, assuming you have successfully installed and run NS-2.29 on your computer:
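As a rough sketch, applying a source patch to an NS-2.29 tree usually looks like the following; the patch file name and its location here are placeholders, not confirmed by this page, so substitute the actual names from the download:

```shell
# Hypothetical installation sketch -- file names are placeholders.
cd ns-allinone-2.29/ns-2.29          # your NS-2.29 source directory
patch -p1 < ../../tcp-linux.patch    # apply the TCP-Linux patch
./configure && make clean && make    # rebuild NS-2 with the new module
```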
The patch changes the following files:
This section serves as a quick reference for users who want to run different Linux TCP algorithms with the TCP-Linux patch.
There is a mini-tutorial for TCP-Linux. Please read it for details if you want to design your own algorithms or port new algorithms from Linux to NS-2.
If you observe a performance problem with a Linux algorithm, please check the known Linux bugs page to make sure it is really a problem of the algorithm itself, not a bug in the Linux implementation.
The TCP-Linux module is called "Agent/TCP/Linux". If you have an
existing script that runs TCP and want to switch to TCP-Linux, what you
need to do is:
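As a minimal sketch, the switch in an existing OTcl script might look like the following. The `select_ca` command for choosing an algorithm by name is described in the TCP-Linux mini-tutorial; treat the exact syntax there as authoritative:

```tcl
# Sketch: replace the old TCP agent with TCP-Linux and pick an algorithm.
set tcp [new Agent/TCP/Linux]
$ns attach-agent $n0 $tcp
set sink [new Agent/TCPSink/Sack1]   ;# TCP-Linux works with a SACK-capable sink
$ns attach-agent $n1 $sink
$ns connect $tcp $sink
$ns at 0 "$tcp select_ca cubic"      ;# choose the congestion control module
```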
$TCP_Name | Congestion Control Algorithm
bic       | BIC-TCP (Binary Increase Congestion control for TCP)
cubic     | TCP CUBIC (Binary Increase Congestion control for TCP v2.0), an extension of BIC-TCP
highspeed | Sally Floyd's HighSpeed TCP (HS-TCP, RFC 3649) congestion control
htcp      | Hamilton TCP (H-TCP) congestion control
hybla     | TCP-Hybla congestion control algorithm
reno      | TCP NewReno
scalable  | Tom Kelly's Scalable TCP
vegas     | TCP Vegas congestion control
westwood  | TCP Westwood+
veno      | TCP Veno
compound  | Compound TCP (C-TCP)
lp        | TCP Low-Priority (TCP-LP)
Once you have developed your own congestion control module in Linux, you can add it to TCP-Linux. To do so, take the following steps:
You might encounter one of the following problems in the last step:
Here is a list of known problems:
NS-2 TCP-Linux: An NS-2 TCP Implementation with Congestion Control Algorithms from Linux;
D. X. Wei and P. Cao; in Proceedings of ValueTools '06 (Workshop of NS-2), Oct. 2006.
PDF
Bibtex
Setup of Linux experiments |
Setup of NS2 simulations |
A single flow runs from the sender side (right) to the receiver side (left). The congestion window size and the sending rate are recorded every 0.5 seconds.
In the experiments, the application is Iperf with a large enough buffer (see the kernel tuning script). We read the /proc/net/tcp file every 0.5 seconds to get the congestion window, and measure throughput from the iperf output at the receiver side. (The Dummynet setup script used in the experiments can be found here. Also note that the e1000 card's driver sets the RxDescriptors number to 4096.)
In the simulations, the application is an infinite FTP flow with a large enough buffer (see the TCL script and CSH scripts).
The following figures report the congestion window trajectory
and the rate trajectory over time. Red curves are the results
of NS-2 simulations. Green curves are the results of Linux-Dummynet
experiments. For comparison, we also show the NS-2 Sack1 results
(in blue curves) for the cases of Reno and HighSpeed TCP.
(See TCL script
and CSH scripts for NS-2 Sack1 simulations.)
You can click on a figure to get the full-size version.
TCP Option | Congestion Window Trajectory (Y axis: packets; X axis: sec) | Rate Trajectory (Y axis: bps; X axis: sec) | Remark |
bic | |||
cubic | The difference between simulation and experiment for cubic is the most significant among all the algorithms; we still need to understand why. |
||
highspeed | The blue curve is the NS-2 TCP/Sack1 result, with TCPSink/Sack1/DelAck. We also have the TCP/Sack1 result with TCPSink/Sack1 (cwnd, rate), which is less close to the Linux results. |
||
htcp | |||
hybla | Hybla sets its AI parameter based on the minimum observed RTT. NS-2 presents a cleaner environment with a smaller minimum observed RTT, so the AI parameter in simulation is smaller than the one in the Dummynet experiment. This may explain why the simulation result has a longer congestion epoch than the Dummynet result. |
||
reno | The blue curve is the NS-2 TCP/Sack1 result, with TCPSink/Sack1. We also have the TCP/Sack1 result with TCPSink/Sack1/DelAck (cwnd, rate), which is less close to the Linux results. |
||
scalable | |||
vegas |
Interestingly, we found that NS-2's own Vegas implementation performs much better than TCP-Linux, which is very close to the Linux results: both Linux and TCP-Linux fail to fully utilize the bottleneck bandwidth. We studied the problem, and it turned out to be an issue in the Linux implementation's interaction with delayed ACKs; alpha needs to be set larger than 1 to eliminate the problem when delayed ACKs are enabled. | ||
westwood |
The whole module consists of four parts, corresponding to the four white blocks in the figure; the yellow blocks come from external source code such as NS-2 or Linux:
The Linux+Dummynet experiments were carried out with the WAN-in-Lab (WIL) facilities and greatly helped by Dr. Lachlan Andrew at Caltech.
This work was inspired and greatly helped by Prof. Pei Cao at Stanford and by Prof. Steven Low at Caltech.
Many thanks to them!