# Can tshark skip packets when processing a file?

I have a PCAP file with 2,949,187 packets. I would like to use tshark to dissect some of the packets so that I can do further analysis. I am doing this by having tshark write the packets out (in JSON format) to a file, which I can then process.

If I look at the first 100 packets, like this:

    tshark -r INFILE.pcap -T json -a packets:100 "frame.number>=1 && frame.number<=100" > OUTFILE.json

things are quite fast. The -a option lets me tell tshark to stop once it has output 100 packets, so writing this file takes about 0.8 seconds, which is fine for my application.

But if I want to go deeper into the file, it seems like tshark has to process all of the packets along the way. So

    tshark -r INFILE.pcap -T json -a packets:101 "frame.number>=2900000 && frame.number<=2900100" > OUTFILE.json

takes 168.4 seconds to complete.

I've looked in the user guide and don't see anything, but I'm wondering if more experienced hands have some ideas. Is there a way to tell tshark that it doesn't have to fully dissect the first 2.9M packets (in this case)? I've done a similar thing on my own with a node.js pcap parser, and it's much faster than this. But I want to take advantage of tshark's dissection engine, so it would be convenient if tshark knew how to simply blow by a whole section of the capture.
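To give a sense of why skipping is cheap when no dissection is needed, here's an illustrative Python sketch (not my node.js code, just the idea): in the classic pcap format you can hop over each record by reading only its 16-byte header, without touching the packet bytes.

```python
import struct
from io import BytesIO

# Classic pcap global header: magic, version 2.4, zone, sigfigs, snaplen, linktype
PCAP_GLOBAL_HDR = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)

def seek_to_packet(f, n):
    """Position f at the start of record n (1-based) by hopping over the
    16-byte per-record headers; nothing is dissected along the way."""
    f.seek(24)  # skip the 24-byte global header
    for _ in range(n - 1):
        hdr = f.read(16)  # ts_sec, ts_usec, incl_len, orig_len
        if len(hdr) < 16:
            raise EOFError("fewer than n packets in file")
        incl_len = struct.unpack("<IIII", hdr)[2]
        f.seek(incl_len, 1)  # jump over the captured bytes
    return f.tell()

# Tiny synthetic capture: 5 packets of 10 bytes each
buf = BytesIO(PCAP_GLOBAL_HDR + b"".join(
    struct.pack("<IIII", 0, 0, 10, 10) + b"\x00" * 10 for _ in range(5)))
print(seek_to_packet(buf, 3))  # 24 + 2 * (16 + 10) = 76
```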

Is that possible?


Hi. It's not ideal, but in my case I broke this requirement into smaller tasks: step 1, use tshark to find the first interesting packet; step 2, use a tshark filter to skip all the packets before it and write the remaining packets to a separate file for subsequent processing. Since the end goal was what mattered, I preferred this workaround to processing a 10 GB+ pcap file on every single tshark run.


It's in the works but not done yet: issue 8789, "Qt: add limit + offset options when loading a file".

Until then, look at using editcap to split the file into chunks (see the PCAP Split and Merge page); for example, editcap -c 100000 INFILE.pcap CHUNK.pcap writes consecutive 100,000-packet files.
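Purely to illustrate the mechanics of splitting (editcap is the real tool for this), here's a rough Python sketch; it assumes the classic pcap format and also records each chunk's first frame number in the original capture, which helps with the renumbering problem splitting creates.

```python
import struct
from io import BytesIO

# Classic pcap global header (magic, version 2.4, zone, sigfigs, snaplen, linktype)
GHDR = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)

def split_pcap(data, packets_per_file):
    """Split raw pcap bytes into chunks of packets_per_file packets each,
    returning (chunk_bytes, first_original_frame_number) pairs."""
    f = BytesIO(data)
    f.seek(24)  # past the global header
    chunks, records, first = [], [], 1
    while True:
        hdr = f.read(16)  # ts_sec, ts_usec, incl_len, orig_len
        if len(hdr) < 16:
            break
        incl_len = struct.unpack("<IIII", hdr)[2]
        records.append(hdr + f.read(incl_len))
        if len(records) == packets_per_file:
            chunks.append((GHDR + b"".join(records), first))
            first += len(records)
            records = []
    if records:  # trailing partial chunk
        chunks.append((GHDR + b"".join(records), first))
    return chunks

# 5 synthetic 8-byte packets, split 2 per file
data = GHDR + b"".join(
    struct.pack("<IIII", 0, 0, 8, 8) + b"\x00" * 8 for _ in range(5))
print([first for _, first in split_pcap(data, 2)])  # [1, 3, 5]
```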


Very helpful, thanks. Yes, I had thought about splitting into chunks, which I could do either with editcap (as you say) or with the node.js parser I've used. My only problem with this idea is that the subfiles all restart packet numbering from 1, right? If I'm looking at these chunks to try to get a sense of what's happening in the overall file, I'll have to keep track of the split and adjust the packet numbers in my analysis. That's not impossible, but it's a bit of a pain. I'll be happy when the offset feature is added.

Along the lines of my point, above, is it possible to change the starting packet number in a file so that it is not "1"?

( 2022-11-09 17:51:47 +0000 )

Packet/frame numbers start at 1:

    info.frame_number = 1;


(See the frame dissector source for frame.number, frame.time and frame.time_delta.)

You could have a Lua script that adds another field whose value is the packet's position in the original file: "file ordinal" * "packets per file" + "current packet number". I haven't come up with a way to programmatically determine the ordinal and the packets per file; they could be specified on the command line when calling tshark, or entered in a GUI menu in Wireshark.
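The arithmetic that field would carry is trivial; a sketch in Python (the function name is mine, and the ordinal and chunk size are assumed to be supplied externally, as described above):

```python
def absolute_frame_number(file_ordinal, packets_per_file, frame_number):
    """Frame number in the original capture, given the chunk's 0-based
    ordinal, the fixed chunk size, and the frame number inside the chunk."""
    return file_ordinal * packets_per_file + frame_number

print(absolute_frame_number(29, 100_000, 57))  # frame 57 of the 30th chunk -> 2900057
```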

( 2022-11-09 19:49:32 +0000 )