
SMB2 fails for large file transfer

asked 2020-05-15 22:39:14 +0000 by sharky483

updated 2020-05-18 20:53:46 +0000

SMB2 between client and server fails every single time for large file transfers, while small file transfers work just fine.

https://www.cloudshark.org/captures/6... --small

https://www.cloudshark.org/captures/c... --large

Can't seem to understand why.


Comments

It's not the file size, it's the direction.

SMB transfer from 10.241.5.133 to 192.168.5.123 is bad because the ACKs from 192.168.5.123 to 10.241.5.133 get lost or delayed during the transmission to 10.241.5.133.

SMB transfer from 192.168.5.123 to 10.241.5.133 runs fine.

Maybe a duplex mismatch on a port along the path? Bandwidth saturation of the down- or uplink? An IPS?
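
A rough way to check the direction theory is to count duplicate data segments per direction. A minimal scapy sketch (scapy assumed installed; "smb_large.pcapng" is just a placeholder for your capture file):

    # Count suspected retransmissions per direction: if the same
    # (sequence number, payload length) pair shows up twice in one
    # direction, the segment was most likely retransmitted.
    from collections import defaultdict
    from scapy.all import rdpcap, IP, TCP

    seen = defaultdict(set)      # (src, dst) -> {(seq, payload_len), ...}
    retrans = defaultdict(int)   # (src, dst) -> suspected retransmissions

    for pkt in rdpcap("smb_large.pcapng"):   # placeholder filename
        if IP not in pkt or TCP not in pkt:
            continue
        payload_len = len(pkt[TCP].payload)
        if payload_len == 0:
            continue                         # skip pure ACKs
        direction = (pkt[IP].src, pkt[IP].dst)
        key = (pkt[TCP].seq, payload_len)
        if key in seen[direction]:
            retrans[direction] += 1
        else:
            seen[direction].add(key)

    for (src, dst), count in retrans.items():
        print(f"{src} -> {dst}: {count} suspected retransmissions")

If the counter is high only for 10.241.5.133 -> 192.168.5.123, that supports the lost/delayed ACK theory.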

JasMan ( 2020-05-16 14:51:10 +0000 )

1 Answer


answered 2020-05-17 13:34:17 +0000 by Eddi

Do you have a load balancer, VPN gateway, WAN optimizer, or another active device in your WAN?

Both traces include a broken connection followed by a new three-way handshake.

In both traces we see:

Client 10.241.5.133    SYN with MSS = 1460
Server 192.168.5.123   SYN/ACK with MSS = 1338

Since the smaller value is 1338, no TCP segment should hold more than 1338 bytes. The client keeps to that limit, but the server is sending packets with a payload of 1380 bytes. This suggests that there is at least one device in the return path that modifies the MSS of the SYN/ACK packet. My guess is that packets are modified in transit in both directions, with different values for incoming and outgoing packets.
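
You can verify this yourself with a minimal scapy sketch (scapy assumed installed; "capture.pcapng" is a placeholder for one of the trace files) that reads the advertised MSS values from the handshake and flags every segment exceeding the smaller one:

    # Collect the MSS each side advertises in its SYN/SYN-ACK, then
    # report segments whose payload exceeds the negotiated limit.
    from scapy.all import rdpcap, IP, TCP

    packets = rdpcap("capture.pcapng")   # placeholder filename
    mss = {}                             # sender IP -> advertised MSS

    for pkt in packets:
        if IP in pkt and TCP in pkt and pkt[TCP].flags & 0x02:  # SYN bit set
            for name, value in pkt[TCP].options:
                if name == "MSS":
                    mss[pkt[IP].src] = value

    limit = min(mss.values())            # neither side should exceed this
    print("Advertised MSS:", mss, "-> effective limit:", limit)

    for pkt in packets:
        if IP in pkt and TCP in pkt and len(pkt[TCP].payload) > limit:
            print(f"{pkt[IP].src} sent {len(pkt[TCP].payload)} bytes "
                  f"of payload (limit {limit})")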

Certain VPN gateways or DSL routers modify the segment size in transit (often called MSS clamping). Clearly, client and server are working with different MSS values, which can lead to all kinds of trouble.
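
For illustration only (scapy again, with made-up values rather than anything confirmed from the traces), clamping the MSS option of a forwarded SYN is essentially this:

    # What an MSS-clamping middlebox does to a SYN passing through it.
    from scapy.all import IP, TCP

    syn = (IP(src="10.241.5.133", dst="192.168.5.123") /
           TCP(flags="S", options=[("MSS", 1460)]))

    CLAMP = 1338   # value the device enforces (made up here)
    syn[TCP].options = [("MSS", min(v, CLAMP)) if n == "MSS" else (n, v)
                        for n, v in syn[TCP].options]
    print(syn[TCP].options)   # [('MSS', 1338)]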

I would continue this analysis only after both server and client work with the same MSS. This might also be a good opportunity to review all other settings of the intermediate device.

Remarkably, the server offers an initial receive window of 8192 with a window scale factor of 2^8, which results in a 2 MByte buffer. I find this a surprisingly aggressive value, of the kind that helps on "Long Fat Networks" (LFNs): links of 10 GBit/s or more with very long round-trip times (well above 100 ms).
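
The arithmetic is easy to check (plain Python, no dependencies):

    # Receive window = raw window field shifted left by the scale factor.
    window_field = 8192    # raw value from the TCP header
    scale_shift = 8        # window scale option from the handshake
    print(window_field << scale_shift)   # 2097152 bytes, i.e. 2 MByte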

Good luck

Eddi


Comments

That was also my first thought. But the segments with a payload >1338 bytes are all coming from 192.168.5.123 and going to 10.241.5.133, and 10.241.5.133 has an MSS of 1460 bytes. All segments from the client 10.241.5.133 to the server 192.168.5.123 are 1338 bytes or less. So everything is fine there. If there were a router or middlebox on the way with a smaller MSS, we should see this in the SYN/SYN-ACK packets.

JasMan ( 2020-05-18 06:19:45 +0000 )

Yes, the client and server are connected through an IPsec VPN tunnel: a Palo Alto VM in Azure on one side and a Cisco ASA on the other. Let me check the MSS on the firewalls in between.

sharky483 ( 2020-05-18 13:43:21 +0000 )
