I have a server that streams video data (RTP over UDP) to a client. I intermittently introduce network delays (using tc-netem on the server side) to affect the video stream. Using Wireshark, I captured some of the traffic to understand the impact of the network conditions (delays, packet loss, etc.) on the video stream performance at the client side (lag, frame loss, etc.).
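For reference, the netem rules I apply look roughly like this (interface name and delay value are placeholders, not my exact settings):

```shell
# Add 100 ms of egress delay on the server's interface (eth0 is a placeholder)
tc qdisc add dev eth0 root netem delay 100ms

# Remove the rule again when the test window ends
tc qdisc del dev eth0 root netem
```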
Looking at the captured TCP stream, I can see multiple PSH,ACK segments (from the client) and ACKs (from the server) going back and forth. My thinking was that, to understand the network performance, I could look at the TCP RTT graphs Wireshark generates, and these match the delays I introduce perfectly: whenever I introduce a delay, the RTT increases.
On the other hand, I also capture the RTP/UDP packets on both the server and the client, and then, using the RTP sequence numbers together with the frame timestamps, I calculate a per-packet delay. However, when I plot this delay, I can't make out the clear pattern that the TCP RTT graphs show.
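To make the second approach concrete, here is a minimal sketch of the matching step I mean, assuming both captures have been exported to `{sequence number: capture timestamp}` form (e.g. with `tshark -r capture.pcap -Y rtp -T fields -e rtp.seq -e frame.time_epoch`). Note the built-in assumption that the server and client clocks are synchronized, since this is a one-way delay, not an RTT:

```python
def one_way_delays(sent, received):
    """Match packets by RTP sequence number across the two captures.

    sent / received map RTP sequence number -> capture timestamp (seconds).
    Returns {seq: delay_seconds}; sequence numbers present in `sent` but
    absent from `received` indicate lost packets.
    """
    delays = {}
    for seq, t_sent in sent.items():
        t_recv = received.get(seq)
        if t_recv is not None:
            delays[seq] = t_recv - t_sent
    return delays

# Toy data: packet 3 is lost, packet 2 experiences extra delay.
sent = {1: 0.000, 2: 0.033, 3: 0.066, 4: 0.100}
received = {1: 0.020, 2: 0.153, 4: 0.121}

delays = one_way_delays(sent, received)
lost = sorted(set(sent) - set(received))
```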
Now, my question is: which approach more accurately represents the network performance? Does the TCP RTT directly correlate with the delays the video stream experiences on the network? Or am I thinking about this wrong?