I was searching for icmp.checksum_bad==1 and found no matches, which seemed like a good thing; however, when I changed the filter to icmp.checksum_bad==0, I also got no matches. My conclusion was that Wireshark was not computing checksum_bad either way and was ignoring the display filter specification.
My question is thus: is there a way to get Wireshark to calculate icmp.checksum_bad properly, so that I can rely on it to locate ICMP checksum errors?
I did confirm that I had plenty of captured icmp packets.
A similar check on IP header checksums, using ip.checksum_bad, produced the expected results: ip.checksum_bad==1 matched no packets, while ip.checksum_bad==0 matched every packet.
Another difference I noted was in the expansion of the IP header in a captured packet vs. the expansion of the ICMP packet: there was no checksum_bad field under the checksum for ICMP, but there was for IP.
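For reference, these are the display filters I compared and what each one matched (a summary; field availability depends on the Wireshark version):

```
icmp.checksum_bad == 1    (no matches)
icmp.checksum_bad == 0    (no matches)
ip.checksum_bad == 1      (no matches)
ip.checksum_bad == 0      (matches every packet)
```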
asked 26 Mar '13, 09:30
The icmp.checksum_bad field was only added to the protocol tree, and thus was only filterable, in the case of a bad checksum. That is, "icmp.checksum_bad==1" would match those ICMP packets with bad ICMP checksums, but "icmp.checksum_bad==0" would not have worked, as you've discovered.
Note that in some cases, such as when a snaplen is applied and not all packet bytes are available, the ICMP checksum can't be verified. In this case, "icmp.checksum_bad" will not be set either way as it's simply unknown.
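For background, the verification being discussed is the standard one's-complement Internet checksum defined in RFC 1071. Here is a minimal Python sketch of that algorithm (the sample ICMP packet bytes are made up for illustration):

```python
import struct

def inet_checksum(data: bytes) -> int:
    """Compute the 16-bit one's-complement Internet checksum (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    # fold any 32-bit carries back into the low 16 bits
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Hypothetical ICMP echo request with its checksum field (bytes 2-3) zeroed:
# type=8, code=0, checksum=0, identifier=0x1234, sequence=1, payload b"ping"
icmp = bytes([8, 0, 0, 0, 0x12, 0x34, 0, 1]) + b"ping"
csum = inet_checksum(icmp)

# Write the computed checksum back into the header; a packet whose bytes,
# including the stored checksum, sum to zero verifies as good.
patched = icmp[:2] + struct.pack("!H", csum) + icmp[4:]
assert inet_checksum(patched) == 0
```

This is also why a truncated capture prevents verification: without all of the packet bytes that the sum covers, the result cannot be computed at all, so the checksum status is simply unknown.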
answered 26 Mar '13, 11:19