
Docker networking on macOS: How to reach gigabit


At Codavel, we believe that software should be as easy to use as possible. As we are building a new transfer protocol, Bolina, we have been making an effort to ensure that anyone can install it and see how fast it is in less than 30 minutes, irrespective of the operating system or device they’re using.

For that, we decided to provide a Docker container to run our server on the three major operating systems (Linux, macOS and Windows).

Network performance on a container

Since Bolina uses both TCP and UDP at its core, our first step to ensure that this was feasible was to prove that the network would not be a bottleneck when running the server inside a container. While this proved true on Linux, where we could get almost 2 Gbits/second of throughput, on Windows and macOS the results were not as encouraging.

This happened for different reasons on the two operating systems: on Windows, the client was not able to generate as much UDP traffic as TCP, while on macOS, UDP was hugely affected by packet loss.

        Linux   Windows   macOS
  TCP   1930    533       657
  UDP   1740    259       61

Max throughput, in Mbits/second

Next, I’ll explain how these values were obtained and guide you through an alternative setup that improves the network throughput to a Docker container.

Setup

To isolate the network benchmark from our product development, I decided to use well-known third-party software to generate TCP and UDP traffic and estimate the network throughput: iperf3. To unify both protocols’ results, I used an MTU of 1400 for the data packets.
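As a sanity check on the numbers in the rest of this post, it helps to know what packet rate a given throughput implies at a fixed payload size. The sketch below is plain Python written for illustration (it is not part of iperf3); it just does the arithmetic:

```python
# Back-of-the-envelope: how many 1400-byte datagrams per second a sender
# must push to sustain a given application-level throughput.

PAYLOAD_BYTES = 1400  # the iperf3 -l / --length value used in all tests

def packets_per_second(throughput_bps: float, payload_bytes: int = PAYLOAD_BYTES) -> float:
    """Packet rate needed to carry `throughput_bps` bits/s of payload."""
    return throughput_bps / (payload_bytes * 8)

if __name__ == "__main__":
    # At the ~1.75 Gbit/s measured on Linux, the sender pushes ~156k packets/s,
    # i.e. roughly 1.56M datagrams over a 10-second run, which matches the
    # 1558358 datagrams iperf3 reports below.
    rate = packets_per_second(1.75e9)
    print(f"{rate:.0f} packets/s, {rate * 10:.0f} datagrams in 10 s")
```

This also makes clear why UDP tests stress the sender: each of those packets is an independent syscall-level send, with no coalescing by the kernel as in TCP.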

Linux

My baseline was a Linux server running Ubuntu 18.04 on an Intel Core i5-2500 CPU @ 3.30GHz with 8 GB of RAM, with iperf3 running in server mode inside a container (provided by https://hub.docker.com/r/mlabbe/iperf3) and the iperf3 client running directly on the host machine.

First, we tested the TCP connection:

TCP

Commands

Server: docker run --name=iperf3 --rm -p 5201:5201/tcp -p 5201:5201/udp mlabbe/iperf3

Client: iperf3 -c 192.168.1.2 -i 1 -t 10 -l 1400

Results
  [ ID] Interval           Transfer     Bandwidth       Retr
  [  4] 0.00-10.00 sec     2.25 GBytes  1.93 Gbits/sec  0    sender
  [  4] 0.00-10.00 sec     2.25 GBytes  1.93 Gbits/sec       receiver

This shows that we are able to communicate at 1.93 Gbits per second through TCP. The next step was to check if we were able to reach the same values through UDP:

UDP (2Gbits/second)

Commands

Server: docker run --name=iperf3 --rm -p 5201:5201/tcp -p 5201:5201/udp mlabbe/iperf3

Client: iperf3 -c 192.168.1.2 -i 1 -t 10 -u -b 2Gb --length 1400

 
Results
  [ ID] Interval           Transfer     Bandwidth       Jitter    Lost/Total Datagrams
  [  4] 0.00-10.00 sec     2.03 GBytes  1.75 Gbits/sec  0.001 ms  0/1558358 (0%)  sender
  [  4] Sent 1558358 datagrams

  [ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
  [  5] 0.00-10.04 sec     2.03 GBytes  1.74 Gbits/sec  0.001 ms  0/1558358 (0%)  receiver

The results show that we are able to reach values similar to those obtained with TCP, and with no packet loss: perfect! Time to move on to the next OS, Windows.

Windows

The tests on Windows were performed on a laptop with an Intel Core i7-6500U CPU @ 2.50GHz and 8 GB of RAM, running Windows 10 Home 64-bit. When trying to install Docker Desktop for Windows, I noticed that this machine did not meet the minimum requirements: only the Pro, Enterprise and Education editions of Windows are supported. As an alternative, I had to run the container with my iperf3 server on Docker’s legacy solution, Docker Toolbox, which uses Oracle VirtualBox instead of Hyper-V.

After completing the installation process, I launched a Docker Quickstart Terminal and started the container. The results were the following:

TCP

Commands

Server: docker run --name=iperf3 --rm -p 5201:5201/tcp -p 5201:5201/udp mlabbe/iperf3

Client: iperf3.exe -c 192.168.99.100 -i 1 -t 10 -l 1400

Results
  [ ID] Interval           Transfer    Bandwidth
  [  4] 0.00-10.00 sec     636 MBytes  533 Mbits/sec  sender
  [  4] 0.00-10.00 sec     635 MBytes  533 Mbits/sec  receiver

The results already showed a significant decrease in performance compared to the Linux tests, going from almost 2 Gbits/second to around 500 Mbits/second. But since the values were consistent, I decided to continue the tests, this time using UDP.

UDP (500Mbits/second)

Commands

Server: docker run --name=iperf3 --rm -p 5201:5201/tcp -p 5201:5201/udp mlabbe/iperf3

Client: iperf3.exe -c 192.168.99.100 -i 1 -t 10 -u -b 500Mb -l 1400

Results
  [ ID] Interval           Transfer    Bitrate        Jitter    Lost/Total Datagrams
  [  5] 0.00-10.04 sec     311 MBytes  259 Mbits/sec  0.063 ms  1944/234533 (0.83%)  receiver

I was only able to reach 260 Mbits/second through UDP, only half of the TCP throughput for the same scenario. However, packet loss was negligible (0.83%), which suggests that the sender was unable to generate traffic above that threshold, rather than a network issue.

macOS

Finally, we performed the tests on a MacBook Pro with an Intel Core i5 @ 2GHz and 8 GB of RAM, running macOS Mojave. To run the server container I installed Docker Desktop for Mac, while the iperf3 client was installed through Homebrew.

Starting with the TCP throughput test:

TCP

Commands

Server: docker run --name=iperf3 --rm -p 5201:5201/tcp -p 5201:5201/udp mlabbe/iperf3

Client: iperf3 -c 192.168.1.2 -i 1 -t 10 -l 1400

Results
  [ ID] Interval           Transfer    Bitrate
  [  5] 0.00-10.00 sec     783 MBytes  657 Mbits/sec  sender
  [  5] 0.00-10.05 sec     782 MBytes  653 Mbits/sec  receiver

Although the results are slightly higher than those on the Windows machine, they are still considerably lower than the Ubuntu results, only reaching about 650 Mbits/second.

When trying to reach those rates through UDP, the results were the following:

UDP (700Mbits/second)

Commands

Server: docker run --name=iperf3 --rm -p 5201:5201/tcp -p 5201:5201/udp mlabbe/iperf3

Client: iperf3 -c 192.168.13.34 -i 1 -t 10 -u -b 700Mb --length 1400

Results
  [ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
  [  5] 0.00-10.00 sec     704 MBytes   590 Mbits/sec   0.000 ms  0/527011 (0%)        sender
  [  5] 0.00-10.04 sec     73.0 MBytes  61.0 Mbits/sec  0.196 ms  465750/520391 (90%)  receiver

Compared with the Windows results, the client is now able to generate the configured amount of UDP traffic. However, the results show a huge performance decrease due to high levels of packet loss, which would make any network service unusable. My first guess was that we were overshooting the network capacity, so I tried lowering the sending throughput:

UDP (100Mbits/sec)

Commands

Server: docker run --name=iperf3 --rm -p 5201:5201/tcp -p 5201:5201/udp mlabbe/iperf3

Client: iperf3 -c 192.168.13.34 -i 1 -t 10 -u -b 100Mb --length 1400

Results
  [ ID] Interval           Transfer    Bitrate         Jitter    Lost/Total Datagrams
  [  5] 0.00-10.00 sec     119 MBytes  100 Mbits/sec   0.000 ms  0/89279 (0%)       sender
  [  5] 0.00-10.05 sec     116 MBytes  97.0 Mbits/sec  0.121 ms  2202/89243 (2.5%)  receiver

Much better: only 2.5% packet loss. But 100 Mbits/second is not acceptable as a maximum throughput, so I tried increasing it:

UDP (200Mbits/sec)

Commands

Server: docker run --name=iperf3 --rm -p 5201:5201/tcp -p 5201:5201/udp mlabbe/iperf3

Client: iperf3 -c 192.168.13.34 -i 1 -t 10 -u -b 200Mb --length 1400

Results
  [ ID] Interval           Transfer    Bitrate        Jitter    Lost/Total Datagrams
  [  5] 0.00-10.00 sec     238 MBytes  200 Mbits/sec  0.000 ms  0/178559 (0%)       sender
  [  5] 0.00-10.07 sec     137 MBytes  114 Mbits/sec  0.168 ms  75085/177875 (42%)  receiver
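Stepping the target bitrate by hand like this gets tedious. A hypothetical way to automate the search is to drive iperf3’s --json output from a small script. This is a sketch written for this post, not something I actually used: the host address is a placeholder, and the JSON field layout is assumed to match recent iperf3 releases.

```python
#!/usr/bin/env python3
"""Sketch: sweep iperf3 UDP bitrates to find the highest rate with low loss.

Assumes an iperf3 server is already running (e.g. in the Docker container)
and that the local iperf3 supports --json.
"""
import json
import subprocess

def loss_percent(report: dict) -> float:
    """Extract receiver-side datagram loss from an iperf3 --json report."""
    udp = report["end"]["sum"]
    return 100.0 * udp["lost_packets"] / max(udp["packets"], 1)

def sweep(host: str, rates_mbit=(100, 200, 400, 700), max_loss=1.0):
    """Test each rate in ascending order; return the highest one whose
    receiver-side loss stays at or below max_loss percent."""
    best = None
    for rate in rates_mbit:
        out = subprocess.run(
            ["iperf3", "-c", host, "-u", "-b", f"{rate}M", "-t", "5",
             "-l", "1400", "--json"],
            capture_output=True, text=True, check=True,
        )
        loss = loss_percent(json.loads(out.stdout))
        print(f"{rate} Mbit/s -> {loss:.1f}% loss")
        if loss <= max_loss:
            best = rate
    return best

if __name__ == "__main__":
    print("max usable rate:", sweep("192.168.13.34"))  # placeholder host
```

On the setups above, such a sweep would have converged on roughly 100 Mbits/second for Docker Desktop for Mac, matching the manual results.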


We can see from the above that anything higher than 100 Mbits/second results in significant packet loss, which is nowhere near good enough if you want to test a network protocol. While trying to understand the issue, and after failing to find similar reports online, I found the following statement in Docker’s documentation:

“Docker Desktop for Mac uses HyperKit instead of Virtual Box. Hyperkit is a lightweight macOS virtualization solution built on top of Hypervisor.framework in macOS 10.10 Yosemite and higher.”

Since I was using Docker Desktop for Mac, the problem might be related to the use of HyperKit, so I decided to repeat the tests on macOS using VirtualBox to run a Linux VM hosting the container, and check whether the problem persisted. Instead of installing Docker Toolbox (by now Docker’s legacy solution) as I did for the Windows tests, this time I used Vagrant to run a Linux VM inside VirtualBox and installed Docker as I had previously done on my Linux server.

The Vagrant image used is available here, and it was based on this Ubuntu image.
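A setup equivalent to that image can be sketched in a few lines of Vagrant configuration; the box name, memory size and provisioning below are illustrative assumptions, not the exact published image:

```ruby
# Minimal Vagrantfile sketch: Ubuntu VM on VirtualBox with Docker installed.
# Box name, network and memory are illustrative assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"          # Ubuntu 18.04, matching the Linux baseline
  config.vm.network "private_network", type: "dhcp"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048
  end
  # Vagrant's built-in Docker provisioner installs Docker inside the VM
  config.vm.provision "docker"
end
```

After vagrant up, vagrant ssh drops you inside the VM, where the same docker run command as on the Linux server applies.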

After starting the Vagrant machine and running the iperf3 server inside the container, the results were the following:

TCP

Commands

Server: docker run --name=iperf3 --rm -p 5201:5201/tcp -p 5201:5201/udp mlabbe/iperf3

Client: iperf3 -c 192.168.13.93 -i 1 -t 10 -l 1400

Results
  [ ID] Interval           Transfer     Bitrate
  [  5] 0.00-10.00 sec     1.13 GBytes  974 Mbits/sec  sender
  [  5] 0.00-10.01 sec     1.13 GBytes  972 Mbits/sec  receiver

TCP performed much better than with the Docker Desktop solution, gaining almost 300 Mbits/second. Time to check UDP’s behaviour:

UDP (1Gbit/second)

Commands

Server: docker run --name=iperf3 --rm -p 5201:5201/tcp -p 5201:5201/udp mlabbe/iperf3

Client: iperf3 -c 192.168.13.93 -i 1 -t 10 -u -b 1Gb -l 1400

Results
  [ ID] Interval           Transfer     Bitrate          Jitter    Lost/Total Datagrams
  [  5] 0.00-10.00 sec     1.16 GBytes  1000 Mbits/sec   0.000 ms  0/892826 (0%)        sender
  [  5] 0.00-10.01 sec     1.11 GBytes  956 Mbits/sec    0.017 ms  38594/892750 (4.3%)  receiver


And the UDP results were also much better than before, almost reaching 1 Gbit/second of UDP throughput with low packet loss.

Conclusion

The tests above were a first step towards understanding whether the network would be a bottleneck for a service running inside a container, and whether it would be possible to run that same container on different operating systems. While I was able to reach satisfactory values when running the container on Linux, running Docker’s suggested solution on macOS was nowhere near good enough performance-wise, as UDP would not go beyond 100 Mbits/second without a significant increase in packet loss.

This problem seems to be related to the use of HyperKit by Docker Desktop, which is why I suggest running your container on a Linux virtual machine inside VirtualBox as a workaround if you really must test your container’s network performance on macOS. But even with this workaround, the maximum network throughput reached in both the macOS and Windows tests is much lower than my results on Linux, which is why I would recommend the latter for your test environment.