This is a static archive of our old Q&A Site. Please post any new questions and answers at ask.wireshark.org.

How can I measure packet loss, delay and jitter for a streaming video over a LAN?

Hi! I need to measure packet loss, jitter and delay on a LAN that carries a video streaming service. So far I have tried filtering the UDP packets, but the statistics and the summary do not give me those parameters, so I may be doing something wrong. I should also mention that the same LAN carries VoIP and file transfer services, but I only need to evaluate these parameters for the video streaming service. Two questions:

  • Is it possible to obtain these parameters if the video is streamed over RTP, and if so, how do I avoid confusion with the telephony service?
  • Can I use Telephony > RTP > "Stream Analysis" to analyse streaming video?

asked 26 Nov '15, 14:10

Fer1006


One Answer:

There are two distinct issues to deal with:

  • separation of the video traffic from other RTP traffic. Unless the IP telephony server is the same machine as the video streamer, the source IP address should be a sufficient identifier of a video stream (a sketch follows this list).

  • precision of the measurement, especially of jitter.
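
To illustrate the separation, here is a minimal Python sketch (the dpkt library, the file name capture.pcap and the classic pcap format are assumptions) that groups the UDP packets of a capture into RTP streams by source address, destination port, SSRC and payload type; the video stream then shows up as its own line:

```python
import socket
import struct
from collections import Counter

import dpkt  # third-party: pip install dpkt


def parse_rtp(payload):
    """Return (payload_type, seq, rtp_ts, ssrc) if the bytes look like RTP v2."""
    if len(payload) < 12:
        return None
    b0, b1, seq, rtp_ts, ssrc = struct.unpack('!BBHII', payload[:12])
    if b0 >> 6 != 2:  # crude heuristic: the RTP version field must be 2
        return None
    return b1 & 0x7F, seq, rtp_ts, ssrc


streams = Counter()
with open('capture.pcap', 'rb') as f:  # assumed file name, classic pcap format
    for _, buf in dpkt.pcap.Reader(f):
        eth = dpkt.ethernet.Ethernet(buf)
        if not isinstance(eth.data, dpkt.ip.IP):
            continue
        ip = eth.data
        if not isinstance(ip.data, dpkt.udp.UDP):
            continue
        rtp = parse_rtp(bytes(ip.data.data))
        if rtp:
            pt, _, _, ssrc = rtp
            streams[(socket.inet_ntoa(ip.src), ip.data.dport, ssrc, pt)] += 1

for (src, dport, ssrc, pt), n in sorted(streams.items()):
    print(f'{src} -> :{dport}  SSRC=0x{ssrc:08x}  PT={pt}  packets={n}')
```

Inside Wireshark itself, a display filter such as ip.src == <streamer address> && rtp does the same job once the packets are decoded as RTP.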

To the first point: yes, you may use "RTP stream analysis" to analyse streamed video if it really is properly encapsulated as RTP with all the headers. But be careful:

  • to analyse packet loss, you have to analyse only the capture taken at the receiving machine, not at the sending one, so that you see the real loss (caused e.g. by oversubscription of the available connection bandwidth); a sketch of the loss calculation follows this list.
  • to measure delay, you need to capture at both the source and the destination machines simultaneously, because the packet carries no absolute information about its departure time; see the second sketch after this list.
  • to measure jitter, you should compare both captures, because to determine the jitter caused by the LAN alone you have to subtract the jitter already introduced by the sending machine itself and by the machines running the captures - see below.
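
As a sketch of the loss calculation (the sequence numbers would be extracted per SSRC from the receiver-side capture, e.g. with the stream-separation sketch above):

```python
def count_lost(seqs):
    """Estimate lost packets from receiver-side RTP sequence numbers.

    `seqs` is the list of 16-bit sequence numbers of one SSRC in arrival
    order.  Following the RFC 3550 appendix logic, the 16-bit numbers are
    extended with a wrap counter and the expected count is compared with
    the count actually received.  Duplicated packets can drive the result
    negative, as RFC 3550 itself notes.
    """
    base = highest = seqs[0]
    cycles = 0
    received = 0
    for seq in seqs:
        received += 1
        # a jump "backwards" by more than half the space means a wrap-around
        if seq < (highest & 0xFFFF) and (highest & 0xFFFF) - seq > 0x8000:
            cycles += 0x10000
        ext = cycles + seq
        if ext > highest:
            highest = ext
    expected = highest - base + 1
    return expected - received


print(count_lost([65533, 65534, 65535, 0, 2, 3]))  # -> 1 (packet 1 missing)
```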
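
And a sketch of the delay measurement; the dictionaries sent_index and received_index are hypothetical, mapping (ssrc, seq) to the capture timestamp in each capture, and the result is only meaningful if the clocks of the two capture machines are synchronised (NTP, or better, PTP or GPS):

```python
def one_way_delays(sent, received):
    """Pair up the packets of one RTP stream across two captures.

    `sent` and `received` map (ssrc, seq) -> capture timestamp in seconds,
    built from the sender-side and the receiver-side capture respectively.
    """
    return [received[k] - sent[k] for k in sent if k in received]


delays = one_way_delays(sent_index, received_index)  # hypothetical inputs
if delays:
    avg = sum(delays) / len(delays)
    print(f'delay min/avg/max: {min(delays):.6f}/{avg:.6f}/{max(delays):.6f} s')
```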

The precision of measurement is another issue. According to common sense and also this answer, the jitter is calculated as the variation of the difference between the RTP timestamp of the packet (which is written into the RTP packet by its sender) and the capture timestamp (which is added to the packet by the capturing machine).

Both the precision and the resolution of the capture timestamps depend on the operating system and the load of the machine running the capture, which may easily contribute much more jitter than the LAN itself: unless you use specialized hardware, the capture timestamp is not assigned at the moment when the packet physically arrives over the wire and is stored to RAM, but at the moment when the operating system notices that it has arrived. So the capturing machine should run a "real time" operating system, use "real" Ethernet ports (not USB - Ethernet adaptors) and preferably do nothing else but capture. Therefore, capturing directly on the sending and receiving machines is not a good idea; you should use a tap or a SPAN port of a switch and a separate (physical, not virtual!) machine for capturing.

Only if you see a difference between the mean jitter value of a given stream captured at the source and the mean jitter value of exactly the same stream captured at the destination (and, for several such measurements, the mean jitter value is consistently higher in the captures taken at the receiving side) can you consider the difference to be the jitter contributed by the LAN.
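
To make the calculation concrete, here is the interarrival jitter estimator of RFC 3550 as a Python sketch; packets is a list of (capture_time_seconds, rtp_timestamp) pairs of one SSRC in arrival order, and the 90 kHz clock rate is an assumption that holds for most video payload formats:

```python
def rfc3550_jitter(packets, clock_rate=90000):
    """Interarrival jitter per RFC 3550, in seconds.

    For consecutive packets i-1 and i:
        D = (R_i - R_{i-1}) - (S_i - S_{i-1})
    where R is the capture (arrival) time and S is the RTP timestamp
    converted to seconds, and the running estimate is smoothed as
        J = J + (|D| - J) / 16
    """
    jitter = 0.0
    for (r_prev, s_prev), (r_cur, s_cur) in zip(packets, packets[1:]):
        # RTP timestamps are 32-bit; the masking handles wrap-around
        s_delta = ((s_cur - s_prev) & 0xFFFFFFFF) / clock_rate
        d = (r_cur - r_prev) - s_delta
        jitter += (abs(d) - jitter) / 16.0
    return jitter


# packets built from a capture, e.g. with the stream-separation sketch:
# print(rfc3550_jitter([(0.000, 0), (0.021, 1800), (0.043, 3600)]))
```

Running this over the source-side and the destination-side capture of the same stream gives the two jitter values whose difference, as described above, is the jitter contributed by the LAN.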

answered 27 Nov '15, 02:10

sindy

edited 27 Nov '15, 14:18