
TCP is limiting the use of bandwidth

asked 2018-11-27 22:27:26 +0000

Malloy

I am trying to determine why a Windows 2008 R2 server is throttling the bandwidth for file transfers. I am trying to send files from the USA to Malaysia. We have a 50 Mbit/s MPLS circuit that typically shows only about 30%-35% usage. When I send a file from server A, in the US, to server B, in Malaysia, I get a transfer speed of about 130-300 KB/s. However, if I send multiple files simultaneously they will all go at the same speed, until I am consuming about 3,500-4,000 KB/s in total.

Server A is Windows 2012 R2, Server B is Windows 2008 R2.

Using Wireshark, I can see that the initial handshake started with a window size of 8192 and a scaling factor of 256, and quickly negotiated it down to a window size of 513, and then it never has more than 82 KB in flight. I am trying to determine what, in the settings on the two servers, is driving this behavior, and whether I can adjust it so that a single file transfer uses closer to the total available bandwidth.


Comments

Hello,

Analyzing TCP performance issues like this demands looking at PCAPs, preferably sender-side ones. Could you please share them if possible?

Packet_vlad ( 2018-11-28 05:33:20 +0000 )

https://www.dropbox.com/s/wizk8jjx265... Here you go. I started a transfer, and recorded it on both ends. I let it run long enough to settle into a steady state, and then cancelled it.

Malloy ( 2018-11-28 22:05:16 +0000 )

2 Answers


answered 2018-11-30 10:43:15 +0000

updated 2018-11-30 10:49:55 +0000

Below are some observations and thoughts about the case:

  • The receive window was not "negotiated down from 8192 to 513" but scaled up from 8192 to 131,328 bytes. The window field is never scaled in the SYN and SYN,ACK packets; after the handshake, the raw value of 513 must be multiplied by the scaling factor of 256, giving 131,328 bytes (worked through in the sketch after these observations).
  • RTT between the endpoints is 200+ ms, so this is a high-BDP link. The sender-side capture was recorded on the sender itself, therefore we see large packets (from the offload engine) of up to 65,930 bytes, repeated throughout the trace. Every large packet carries the PSH flag.

  • There is a long gap (up to 33 seconds) of total transmission pause caused by a Zero Window condition: [screenshot: Zero Window gap]

I don't know whether this is normal or not; we have to consider what the application is (this is an SSL data transfer).

  • There's one episode of packet loss (sender-side trace, starting at packet no. 332), but its effect is small compared to the total transfer time.
  • Now to the most interesting part. The transfer happens in a "send-wait-send" manner:

[screenshot: send-wait-send transfer pattern]

The interesting thing is that the sender never exceeds 82,345 bytes in flight, which is LESS than the available RWIN.

[screenshot: bytes in flight capped at 82,345 bytes]
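
To make the first observation concrete, here is a minimal Python sketch of the window arithmetic, using only the values visible in this trace:

    # The window field in the SYN/SYN,ACK is never scaled; afterwards the
    # raw value must be multiplied by the negotiated scale factor.
    syn_window = 8192      # raw window field in the SYN (unscaled)
    raw_window = 513       # window field advertised after the handshake
    scale_shift = 8        # WScale shift count: factor 2**8 = 256

    effective_rwin = raw_window << scale_shift
    print(effective_rwin)  # 131328 bytes -- not a "window of 513"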

So the questions are:

  • Why doesn't the receiver increase its RWIN? You can try to influence RWIN by playing with the auto-tuning settings (see the netsh commands after this list).

    • It may or may not help, because the sender doesn't fully use the advertised RWIN as it is; a larger advertised RWIN could still potentially cause it to send more. It looks like the software used for the data transfer on the sender side has predefined buffers of two sizes: 64k+ and 82k+. It would be a good idea to investigate the sender settings further (what software it is and which settings you can influence).
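
For reference, receive-window auto-tuning on Windows 2008 R2 / 2012 R2 is controlled via netsh. Checking the current level, and raising it if it turns out to be restricted, would look like this (whether it helps here depends on the sender, as noted above):

    netsh interface tcp show global
    netsh interface tcp set global autotuninglevel=normal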

Comments

@Packet_vlad: If we ignore the Zero Window condition, we can look at the start of the real transfer at around 35 s after the session setup.

The CWND, or something else at the sender side, is the limiter. Seq 3571475914 (absolute) is retransmitted twice: first as a fast retransmission, then as an RTO. This causes the ramping down. So the cwnd is reduced and then ramps up again until the cwnd limit on the server side is reached (which takes around 9 s). This ramp-up, combined with the high iRTT (200 ms) and the limited cwnd (which may be tunable in Win2008), results in limited goodput.

It would be better if we had a trace from outside the sender, as TCP offloading is enabled, which makes reliable packet timing analysis hard.



Bandwidth-delay product and buffer size:
BDP (50 Mbit/s, 200.0 ms) = 1.25 MByte
required TCP buffer to reach 50 Mbps with RTT of 200 ...(more)
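
A quick check of that arithmetic in Python (sketch; 50 Mbit/s and 200 ms are the figures from this thread):

    # Bandwidth-delay product: the buffer needed to keep the path full
    # with a single TCP connection.
    link_bps = 50_000_000            # 50 Mbit/s circuit
    rtt_s = 0.200                    # ~200 ms RTT

    bdp_bytes = link_bps / 8 * rtt_s
    print(bdp_bytes)                 # 1250000.0 -> 1.25 MB, ~10x the 131 KB RWIN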

Christian_R ( 2018-11-30 16:04:25 +0000 )

@Christian_R: good observations!

The interesting thing is that the second retransmission was not necessary, because: a) it was emitted after 60 ms, whereas the RTT is 200 ms, which means the RTO was even larger at that moment; b) both retransmissions reached the receiver, as can be seen in the second ("receiver") trace; c) the receiver issued a D-SACK in packets 385 and 386. Still, this retransmission could potentially have influenced the CWND, I think.

But the main point is that BIF completely stops growing at 82,345 bytes, and this behavior is absolutely consistent. CWND doesn't behave this way: it is always dynamic and keeps growing, which is the essence of probing for available bandwidth. Furthermore, the CWND had no reason to stop growing, because there are no further signs of congestion (neither packet loss nor delay increase is observed later in the trace). In addition to this, the exact same ...(more)
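
For anyone who wants to reproduce the bytes-in-flight measurement, here is a rough Python sketch using scapy. The file name and sender address are placeholders for this thread's traces, and sequence-number wraparound is ignored:

    from scapy.all import rdpcap
    from scapy.layers.inet import IP, TCP

    SENDER = "x.x.x.x"                 # placeholder: sender's IP in the trace
    packets = rdpcap("sender.pcapng")  # placeholder file name

    highest_seq = None                 # highest sequence number sent so far
    highest_ack = None                 # highest ACK seen from the receiver
    max_in_flight = 0

    for pkt in packets:
        if TCP not in pkt:
            continue
        tcp = pkt[TCP]
        if pkt[IP].src == SENDER:
            end = tcp.seq + len(tcp.payload)
            if highest_seq is None or end > highest_seq:
                highest_seq = end
        elif tcp.flags.A:
            if highest_ack is None or tcp.ack > highest_ack:
                highest_ack = tcp.ack
        if highest_seq is not None and highest_ack is not None:
            max_in_flight = max(max_in_flight, highest_seq - highest_ack)

    print("max bytes in flight:", max_in_flight)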

Packet_vlad ( 2018-12-01 14:17:16 +0000 )

That may be true. But the bytes-in-flight behavior looks a little strange. Maybe it is SO_SNDBUF or some other buffer that is limiting the sender side.


http://smallvoid.com/?s=so_snd
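
To illustrate that idea: an application that sets its own socket send buffer caps how much unacknowledged data TCP will keep in flight, regardless of the advertised RWIN. A minimal Python sketch (the 82,345-byte figure is just the cap observed in this trace, not a known setting of the actual transfer software):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Cap the send buffer: the stack will not keep more unacked data in
    # flight than (roughly) this, however large the peer's window is.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 82345)

    print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))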

Christian_R ( 2018-12-01 20:18:56 +0000 )

@Malloy, could you please take a trace outside the sender, so we can say more reliably what is going on? As the sender trace has TCP segmentation offloading enabled, a reliable analysis is not really possible.

Christian_R ( 2018-12-02 12:10:51 +0000 )

There are a few interesting items in these captures, but in terms of overall throughput, it does seem that the limiting factor is the sender's ability to deliver data. It is sending alternating bursts of 64 KB and 80 KB (4 × 16 KB and 5 × 16 KB) and then waiting to receive acknowledgements before sending the next burst. It never fills the receiver's RWIN of 132 KB. I'd suggest investigating the send buffer settings on the sending server. Beyond that, investigate how the application works, to determine whether the sending limits are within the application itself. However, the SSL block sizes of 16 KB perhaps point at the OS buffers rather than the application.
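
The arithmetic behind those burst sizes, as a quick Python sketch (the framing-overhead note is an assumption):

    # 16 KB is the maximum SSL/TLS record payload size.
    tls_record = 16 * 1024

    print(4 * tls_record)   # 65536 -> the ~64 KB bursts
    print(5 * tls_record)   # 81920 -> the ~80 KB bursts
    # 81920 is within a few hundred bytes of the 82,345 B in-flight cap
    # seen earlier; the difference is presumably record framing overhead.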

Philst ( 2018-12-03 05:32:48 +0000 )

answered 2018-11-29 21:02:10 +0000

NJL

In the sender.pcapng file, it's clear that 10.92.48.68 is, for some reason, incapable of processing the incoming data. Look at frame 171 and onwards: there you'll see that 10.92.48.68 announces a TCP receive window of zero bytes, i.e. it is telling the sender to stop transmitting data. That, coupled with packet loss, will result in very poor transfer rates. Apart from that, a receive window of only 128 KB is not nearly enough to fill the circuit and reach the maximum transfer rate. I'd look into the various settings on Windows 2008 that could prohibit it from scaling higher than 128 KB. Not sure, but maybe this can lead you in the right direction: https://docs.microsoft.com/en-us/prev...
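
To put numbers on that: with a 200 ms RTT, the window alone caps a single connection far below the circuit rate. A rough Python sketch using the values from the traces:

    rtt_s = 0.200                     # ~200 ms RTT

    for window in (131328, 82345):    # advertised RWIN / observed in-flight cap
        rate_mbps = window / rtt_s * 8 / 1e6
        print(window, "B window ->", round(rate_mbps, 1), "Mbit/s ceiling")
    # ~5.3 and ~3.3 Mbit/s -- an order of magnitude below the 50 Mbit/s circuit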



Stats

Asked: 2018-11-27 22:27:26 +0000

Seen: 2,459 times

Last updated: Nov 30 '18