Simulating WAN links with WANem and Linux VMs

Simulate a WAN link with traffic shaping in a local virtual environment, with WANem.

Introduction

Recently, while browsing the web, I stumbled upon a project called WANem, which simulates WAN links: you just boot a VM, configure the network interfaces, and set the WAN link parameters in a nice web UI. It can be very useful for studying Computer Networks topics, such as transport protocols, TCP congestion-control algorithms, and resiliency against delays and losses. It can also help with QA testing, especially when you need to check how an application behaves in certain WAN scenarios.

So, I decided to try it and have some fun.

Simulated scenario

Since the WANem ISO is ancient by now (it’s based on Knoppix 5.3.1, with kernel 2.6.24), you need to make sure the VM you create for it is fully compatible, especially regarding the NICs. My scenario is as follows:

  • Two virtual switches, one being reachable by the host (switch-01), and another one isolated (switch-02).
  • WANem VM (wanem-vm): connected to both switches.
  • Linux VM 1 (linux-vm-01): connected to switch-01 only.
  • Linux VM 2 (linux-vm-02): connected to switch-02 only.

The idea is to have wanem-vm route packets between linux-vm-01 and linux-vm-02, applying whatever traffic shaping we want. wanem-vm must also be reachable by the host, since we need to access its web UI to configure it; hence its connection to switch-01.

The diagram below summarizes this scenario:

Creating and configuring the VMs

Virtual Switches

Create the two virtual switches, as described before. Make sure one of them is reachable by the host, and the other one is fully isolated. You can enable a DHCP server or not; in my case, I used DHCP only for the first switch (switch-01).

WANem VM

Create a VM with 1 vCPU and 1 GB of RAM (512 MB may suffice). Use the WANem ISO as the boot device (no need to create a virtual disk). Make sure you attach two compatible NICs, one to each virtual switch.

Linux VMs

I used Oracle Linux 8, but any distro should work. I gave 2 vCPUs and 1 GB of RAM to each. The first Linux VM (linux-vm-01) should be attached to only one of the switches, while linux-vm-02 should be attached only to the other one. Some important remarks:

  • Install whatever testing utilities you may want beforehand (while the VMs can still reach the Internet), such as iPerf, IPTraf, tcpdump etc.
  • After configuring the IP addresses, disable anything that may interfere with the networking, such as NetworkManager, firewalld, systemd-networkd etc. We don’t want this stuff messing with our configuration.
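For reference, on Oracle Linux 8 the preparation might look like the following sketch (package and service names are assumptions; adjust for your distro):

```shell
# Install testing utilities while the VM still has Internet access
# (package names assumed for Oracle Linux 8; adjust as needed).
sudo dnf install -y iperf3 iptraf-ng tcpdump

# After assigning static IPs, stop and disable services that could
# rewrite the network configuration behind our back.
sudo systemctl disable --now NetworkManager firewalld
```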

Setting up the IPs and routes

wanem-vm

After the first boot, we’ll be presented with the WANem shell:

Type reset to reconfigure the NICs: eth0 should be attached to switch-01, getting an IP through DHCP; and eth1 should be attached to switch-02, needing manual configuration. Refer to the following images:

You can then confirm the status by typing status at the WANem shell:

linux-vm-01

This VM will be attached to switch-01, and as such it will get an IP address through DHCP. We now need to configure the route to linux-vm-02 via wanem-vm:

linux-vm-01# ping 192.168.91.7 # should reach eth0 of the wanem-vm
linux-vm-01# ip route add 192.168.200.0/24 via 192.168.91.7 dev eth0

linux-vm-02

This VM will be attached to switch-02, and as such we’ll need to manually assign an IP address to it:

linux-vm-02# ip addr add 192.168.200.2/24 dev eth0

We now need to configure the route to linux-vm-01 via wanem-vm:

linux-vm-02# ping 192.168.200.1 # should reach eth1 of the wanem-vm
linux-vm-02# ip route add 192.168.80.0/20 via 192.168.200.1 dev eth0

At this point we should be able to ping linux-vm-02 from linux-vm-01 and vice versa.
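Before moving on, it’s worth sanity-checking the routing from linux-vm-01 (the commands below are illustrative; the addresses match the scenario above):

```shell
# Confirm which next hop the kernel picks for linux-vm-02:
# the output should include "via 192.168.91.7 dev eth0".
ip route get 192.168.200.2

# End-to-end check through wanem-vm
ping -c 3 192.168.200.2
```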

Configuring the WAN link

Access the WANem web UI at http://192.168.91.7/WANem, and go to Advanced Mode. From that page, we can configure multiple parameters for each NIC (eth0 and eth1, in this case). For eth0, I’ve set a bandwidth of 60000 Kbps, 5 ms of delay, and 2% loss:

And for eth1, I’ve set a bandwidth of 10000 Kbps, 120 ms of delay, 10 ms of jitter, and 15% loss. Yeah, a crappy link. :) This asymmetry between eth0 and eth1 will let us observe some interesting things.

Testing the simulated link

I chose iPerf 3 and flood pings to test the WAN link simulated by WANem, as they are pretty straightforward to use and can give us some useful insights.

First, before configuring the WAN link, with just plain routing between linux-vm-01 and linux-vm-02, we got about 50 Mbps between the VMs, with some losses along the way; the ancient Knoppix with its 2.6 kernel probably couldn’t push more, since the hardware resources weren’t constrained:

[root@linux-vm-01 ~]# iperf3 -c 192.168.200.2 -i 5 -t 60
Connecting to host 192.168.200.2, port 5201
[  5] local 192.168.81.233 port 41994 connected to 192.168.200.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-5.00   sec  31.5 MBytes  52.9 Mbits/sec    0   1.47 MBytes
[  5]   5.00-10.00  sec  30.0 MBytes  50.3 Mbits/sec  369   1.05 MBytes
[  5]  10.00-15.00  sec  30.0 MBytes  50.3 Mbits/sec    0   1.09 MBytes
[  5]  15.00-20.00  sec  30.0 MBytes  50.3 Mbits/sec    0   1.20 MBytes
[  5]  20.00-25.00  sec  28.8 MBytes  48.2 Mbits/sec  128    816 KBytes
[  5]  25.00-30.00  sec  30.0 MBytes  50.3 Mbits/sec    0    959 KBytes
[  5]  30.00-35.00  sec  30.0 MBytes  50.3 Mbits/sec    0    986 KBytes
[  5]  35.00-40.00  sec  30.0 MBytes  50.3 Mbits/sec    0   1.29 MBytes
[  5]  40.00-45.00  sec  28.8 MBytes  48.2 Mbits/sec  249    947 KBytes
[  5]  45.00-50.00  sec  30.0 MBytes  50.3 Mbits/sec    0   1.01 MBytes
[  5]  50.00-55.00  sec  30.0 MBytes  50.3 Mbits/sec    0   1.06 MBytes
[  5]  55.00-60.00  sec  30.0 MBytes  50.3 Mbits/sec    0   1.48 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec   359 MBytes  50.2 Mbits/sec  746             sender
[  5]   0.00-60.12  sec   357 MBytes  49.8 Mbits/sec                  receiver

iperf Done.

The flood ping also went fine, with virtually no loss (a single packet out of almost 14,000) and an average RTT of about 0.5 ms:

[root@linux-vm-01 ~]# ping -f 192.168.200.2
PING 192.168.200.2 (192.168.200.2) 56(84) bytes of data.
.^C
--- 192.168.200.2 ping statistics ---
13829 packets transmitted, 13828 received, 0.00723118% packet loss, time 7625ms
rtt min/avg/max/mdev = 0.398/0.504/6.682/0.096 ms, ipg/ewma 0.551/0.472 ms

Now, with the WAN link configured as described in the previous section, we got a very different result:

[root@linux-vm-01 ~]# iperf3 -c 192.168.200.2 -i 5 -t 60
Connecting to host 192.168.200.2, port 5201
[  5] local 192.168.81.233 port 41982 connected to 192.168.200.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-5.00   sec   219 KBytes   359 Kbits/sec   24   1.41 KBytes
[  5]   5.00-10.00  sec   151 KBytes   248 Kbits/sec   11   2.83 KBytes
[  5]  10.00-15.00  sec  72.1 KBytes   118 Kbits/sec   18   2.83 KBytes
[  5]  15.00-20.00  sec  73.5 KBytes   120 Kbits/sec   10   5.66 KBytes
[  5]  20.00-25.00  sec   151 KBytes   248 Kbits/sec   15   1.41 KBytes
[  5]  25.00-30.01  sec  0.00 Bytes  0.00 bits/sec   11   2.83 KBytes
[  5]  30.01-35.00  sec  74.9 KBytes   123 Kbits/sec   12   2.83 KBytes
[  5]  35.00-40.00  sec   146 KBytes   239 Kbits/sec   15   2.83 KBytes
[  5]  40.00-45.00  sec  76.4 KBytes   125 Kbits/sec   13   5.66 KBytes
[  5]  45.00-50.00  sec  77.8 KBytes   127 Kbits/sec   14   2.83 KBytes
[  5]  50.00-55.00  sec   156 KBytes   255 Kbits/sec   10   5.66 KBytes
[  5]  55.00-60.00  sec   153 KBytes   250 Kbits/sec   13   4.24 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  1.32 MBytes   184 Kbits/sec  166             sender
[  5]   0.00-60.24  sec  1.21 MBytes   168 Kbits/sec                  receiver

iperf Done.

[root@linux-vm-01 ~]# ping -f 192.168.200.2
PING 192.168.200.2 (192.168.200.2) 56(84) bytes of data.
..................................................................................................^C
--- 192.168.200.2 ping statistics ---
481 packets transmitted, 383 received, 20.3742% packet loss, time 6374ms
rtt min/avg/max/mdev = 115.731/126.823/137.424/5.862 ms, pipe 13, ipg/ewma 13.279/126.367 ms

Notice how many losses we had along the way, with an average RTT of 126 ms. In the iPerf output, this translates into a very low bitrate and a very small TCP congestion window. Also, the bitrate stayed well below the 10000 Kbps configured on eth1: remember that loss is in play in this scenario (the way the bitrate and congestion window regularly grow and shrink is also telling).
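This matches what TCP theory predicts. A rough upper bound on steady-state TCP throughput is given by the Mathis formula, throughput ≤ MSS / (RTT × √loss). Plugging in an assumed MSS of 1460 bytes (typical for Ethernet) together with the 120 ms RTT and 15% loss configured in WANem:

```shell
# Mathis et al. upper bound on TCP throughput: MSS / (RTT * sqrt(p)).
# MSS = 1460 bytes is an assumption; RTT and loss come from the
# WANem settings above.
awk 'BEGIN {
  mss_bits = 1460 * 8
  rtt_s    = 0.120
  loss     = 0.15
  printf "%.0f Kbits/sec\n", mss_bits / (rtt_s * sqrt(loss)) / 1000
}'
# prints "251 Kbits/sec"
```

About 250 Kbits/sec: right in the ballpark of the 120–360 Kbits/sec intervals iPerf reported, which suggests the loss, not the 10000 Kbps shaper, is the bottleneck here.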

Don’t believe me? Well, let’s remove the loss configured in WANem, leaving the rest untouched:

[root@linux-vm-01 ~]# iperf3 -c 192.168.200.2 -i 5 -t 60
Connecting to host 192.168.200.2, port 5201
[  5] local 192.168.81.233 port 41986 connected to 192.168.200.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-5.00   sec  7.65 MBytes  12.8 Mbits/sec    0    713 KBytes
[  5]   5.00-10.00  sec  6.25 MBytes  10.5 Mbits/sec    0   1008 KBytes
[  5]  10.00-15.00  sec  5.00 MBytes  8.39 Mbits/sec   23   1.57 MBytes
[  5]  15.00-20.00  sec  6.25 MBytes  10.5 Mbits/sec  173   1.43 MBytes
[  5]  20.00-25.00  sec  6.25 MBytes  10.5 Mbits/sec   67   1.16 MBytes
[  5]  25.00-30.00  sec  5.00 MBytes  8.39 Mbits/sec    0   1.26 MBytes
[  5]  30.00-35.00  sec  6.25 MBytes  10.5 Mbits/sec    0   1.30 MBytes
[  5]  35.00-40.00  sec  6.25 MBytes  10.5 Mbits/sec   64   1.04 MBytes
[  5]  40.00-45.00  sec  5.00 MBytes  8.39 Mbits/sec    0   1.39 MBytes
[  5]  45.00-50.00  sec  6.25 MBytes  10.5 Mbits/sec   32   1.09 MBytes
[  5]  50.00-55.00  sec  5.00 MBytes  8.39 Mbits/sec    0   1.21 MBytes
[  5]  55.00-60.00  sec  6.25 MBytes  10.5 Mbits/sec    0   1.25 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  71.4 MBytes  9.98 Mbits/sec  359             sender
[  5]   0.00-61.08  sec  69.9 MBytes  9.60 Mbits/sec                  receiver

iperf Done.

Ah-ha, now we can reach the wire speed of wanem-vm’s eth1. Well, sort of: there are still some TCP retransmissions, probably because of bufferbloat in WANem, especially given the asymmetry of the two links (eth0 and eth1).
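The bandwidth-delay product (BDP) helps explain this: on a 10000 Kbps link with 120 ms of RTT, only about 150 KB can be in flight at any moment, yet the congestion window in the output above grows well past 1 MB, so the excess data has to queue somewhere inside WANem. A quick back-of-the-envelope calculation:

```shell
# Bandwidth-delay product of the simulated eth1 link:
# bytes in flight = (bandwidth in bytes/sec) * (RTT in seconds)
bw_kbps=10000
rtt_ms=120
bdp_bytes=$(( bw_kbps * 1000 / 8 * rtt_ms / 1000 ))
echo "BDP: ${bdp_bytes} bytes"   # prints "BDP: 150000 bytes"
```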

Conclusions

This is a pretty straightforward way to simulate WAN links, which may be very useful when we need to replicate certain scenarios to understand how applications behave (and how to fine-tune them accordingly).

As shown in the screenshots, WANem can simulate bandwidth (symmetric or asymmetric), delay (with jitter), loss, duplication, packet reordering, corruption, disconnects (including random ones), MTTF and MTTR; and we can even create multiple rules for different subnets. Awesome job by the team behind WANem.

I also plan to set up a similar scenario using a plain Linux VM as a router, instead of WANem. In that case, I’ll leverage iproute2’s Traffic Control (tc) to apply traffic shaping policies. Perhaps I’ll even experiment with iptables’ hashlimit extension. Stay tuned!
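As a preview, something roughly equivalent to the eth1 profile above could be sketched with tc’s netem qdisc (assuming the router’s WAN-facing interface is eth1 and a kernel recent enough for netem’s rate option):

```shell
# Shape eth1 roughly like the WANem profile above:
# 10 Mbit/s, 120 ms delay with 10 ms jitter, 15% random loss.
tc qdisc add dev eth1 root netem rate 10mbit delay 120ms 10ms loss 15%

# Inspect, and later remove, the qdisc:
tc qdisc show dev eth1
tc qdisc del dev eth1 root
```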
