Making TCP Performance Analysis Deterministic: A Model-Driven Way to Read Wireshark Traces

asked 2026-03-31 15:54:28 +0000


Hi all,

Performance analysis often feels like asking “why won’t my car start?” Throughput drops, latency spikes, connections stall. It all feels random, but it’s not.

There’s a hidden logic behind every slowdown.

I’ve been working on a model-driven way to expose that logic directly using Wireshark, by combining two complementary models that turn packet captures into something much more deterministic.


1) Network Path Model (structure / limits)

This model abstracts the entire end-to-end path into a single “pipe” defined by:

• Base RTT

• Bottleneck bandwidth

• Buffer size

• Queuing / drop behavior

From a TCP perspective, the upper bound on Bytes in Flight (BIF) can be expressed as:

BIF ≤ min(send buffer, cwnd, rwnd, BDP + bottleneck buffer)
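As a quick sanity check of that formula, here’s a minimal sketch. The function name and the example numbers are mine, not values taken from a capture; it just evaluates the min() over the four valves:

```python
def bif_ceiling(send_buffer, cwnd, rwnd, bottleneck_bw_bps, base_rtt_ms, bottleneck_buffer):
    """Upper bound on Bytes in Flight from the four structural 'valves'.

    Byte-valued arguments are in bytes; bandwidth is in bits/s, RTT in ms.
    """
    # Bandwidth-delay product in bytes (integer math to keep the example exact)
    bdp = bottleneck_bw_bps // 8 * base_rtt_ms // 1000
    return min(send_buffer, cwnd, rwnd, bdp + bottleneck_buffer)

# 100 Mbit/s bottleneck, 20 ms base RTT -> BDP = 250,000 bytes
ceiling = bif_ceiling(
    send_buffer=4_194_304,       # 4 MiB
    cwnd=10_000_000,
    rwnd=1_048_576,              # 1 MiB
    bottleneck_bw_bps=100_000_000,
    base_rtt_ms=20,
    bottleneck_buffer=262_144,   # 256 KiB
)
# Here the path term wins: BDP + buffer = 250,000 + 262,144 = 512,144 bytes
```

With these numbers no single host-side valve is the limit; the path itself (BDP plus bottleneck buffer) caps what can be in flight.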

These four “valves” define the ceiling.

The 5th factor: Feedback Signals (ACK dynamics)

Beyond the four structural limits, there is a critical dynamic factor:

• Feedback Signals (ACK timing, stability, pacing)

They determine how close BIF actually gets to that ceiling in practice.

In other words:

• The valves define the limit

• The feedback drives the system toward (or away from) that limit

📌 [Network Path Model diagram]


2) Network Congestion Model (state / behavior)

On top of the structural limits, every flow at any moment falls into one of three observable phases:

• Application-limited

• Bandwidth-limited

• Buffer-limited

This model describes the actual operating state of the flow.
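Following the BBR paper’s three operating regions, the classification can be sketched like this. It’s a toy heuristic over a single Bytes-in-Flight sample, not something Wireshark computes for you:

```python
def classify_phase(bytes_in_flight, bdp, bottleneck_buffer):
    """Map a Bytes-in-Flight sample onto the three observable phases.

    Per the BBR model: below BDP the pipe isn't full (application-limited);
    between BDP and BDP + buffer the queue is filling (bandwidth-limited);
    beyond that the buffer overflows and drops appear (buffer-limited).
    All arguments are in bytes.
    """
    if bytes_in_flight < bdp:
        return "application-limited"
    if bytes_in_flight < bdp + bottleneck_buffer:
        return "bandwidth-limited"
    return "buffer-limited"

# With BDP = 250,000 bytes and a 262,144-byte bottleneck buffer:
print(classify_phase(100_000, 250_000, 262_144))  # application-limited
print(classify_phase(400_000, 250_000, 262_144))  # bandwidth-limited
print(classify_phase(600_000, 250_000, 262_144))  # buffer-limited
```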

This model is based on the BBR paper (https://research.google/pubs/bbr-congestion-based-congestion-control-2/), extended with additional parameters and adapted to the graphing capabilities Wireshark already provides.


Why Wireshark makes these two models powerful

Individually, these models are conceptual.

With Wireshark, they become directly observable and testable.

Using TCP Stream Graphs, we can map:

• RTT → RTT Graph (base RTT vs queuing delay)

• Bytes in Flight → Window Scaling Graph

• Send Rate → Throughput Graph

• Delivery Rate → Goodput Graph
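To illustrate the first of those mappings: the RTT Graph becomes much more useful once you split each sample into base RTT and queuing delay. A minimal sketch, assuming RTT samples in milliseconds read off the graph (the function name is mine):

```python
def split_rtt(rtt_samples_ms):
    """Decompose observed RTTs into a base-RTT estimate and queuing delay.

    Base RTT is estimated as the minimum observed sample, as BBR does for
    RTprop; everything above that minimum is attributed to queuing.
    """
    base = min(rtt_samples_ms)
    return base, [r - base for r in rtt_samples_ms]

base, queuing = split_rtt([21, 20, 35, 50, 22])
# base = 20 ms; queuing = [1, 0, 15, 30, 2] -> the 30 ms spike is queue buildup
```

The same decomposition is what lets you tell “the path is slow” apart from “the path is fine but a queue is building.”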

By correlating these signals:

👉 You can pinpoint exactly where the bottleneck is

👉 You can separate network, TCP, and application limitations clearly

👉 You move from intuition to evidence

📌 [Network Congestion Model diagram]


Why this matters

In practice, this approach helps:

• Avoid “network vs system vs application” blame loops

• Quickly narrow down the problem domain

• Turn packet analysis into a repeatable, model-driven methodology


I’ve also submitted a feature request to improve multi-metric correlation in Wireshark:

https://gitlab.com/wireshark/wireshar...

And I’ve put together some packet-level walkthroughs showing how these two models map to real captures, step by step. If that’s useful, feel free to take a look:

https://www.youtube.com/@NetworkTechE...

Thanks, and appreciate any feedback or discussion.

Taifeng Tan


Comments

Something is wrong and the graphs don't appear.

Taifeng Tan ( 2026-03-31 15:57:17 +0000 )

Well, I don't think this belongs here anyway. You've already opened Wireshark Work Item #21126 for this as well as asked for feedback about it on the Wireshark-dev mailing list. This is a Q&A site, and there isn't really a question here.

cmaynard ( 2026-03-31 17:29:47 +0000 )