Confused by calculated throughput in Wireshark for CIFS

I am using Wireshark to investigate a slow-performing CIFS conversation:

PC (client side) <----------Download---------- Server (Windows SMBv2)

But the packets captured at the client side by Wireshark confused me: [pcap snapshot]

There are 13 consecutive packets, each 1514 bytes in size, that arrived from the server side in rapid succession.

The first packet's timestamp is 3.838928 and the last one's is 3.838934, so all 13 packets were delivered within only 0.000006 s (3.838934 - 3.838928).
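
For anyone who wants to reproduce the per-frame view, the relative timestamps and frame lengths can be listed from the capture with scapy. This is a minimal sketch under assumptions: the file name capture.pcap is hypothetical, and filtering on TCP source port 445 assumes SMB is running on its standard port.

    from scapy.all import rdpcap, TCP

    pkts = rdpcap("capture.pcap")        # hypothetical file name
    t0 = float(pkts[0].time)             # use the first frame as time zero

    for i, p in enumerate(pkts, start=1):
        # Keep only frames sent by the SMB server (standard port 445 assumed)
        if TCP in p and p[TCP].sport == 445:
            print(i, f"{float(p.time) - t0:.6f} s", f"{len(p)} bytes")

The burst in question should show up as 13 lines of 1514 bytes whose timestamps span only 6 microseconds.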

After my calculation, the throughput is (1514 bytes * 13) / 0.000006 s = 3,280,333,333 B/s ≈ 3.28 GB/s ≈ 26 Gbps.
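
As a sanity check on that arithmetic, here is the same calculation in Python, using only the numbers quoted above:

    # Burst rate implied by 13 back-to-back 1514-byte frames
    frame_len = 1514                        # bytes on the wire per frame
    frames = 13
    t_first, t_last = 3.838928, 3.838934    # relative timestamps in seconds

    window = t_last - t_first               # 0.000006 s
    rate_bps = frame_len * frames * 8 / window

    print(f"{rate_bps / 1e9:.1f} Gbps")     # ~26.2 Gbps

So the instantaneous rate computed from the capture timestamps is well above anything a 1 Gbps link could carry.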

Although the server side has a 10GE interface, the client PC has only a 1 Gbps 1000BASE-T link. How can this happen?