In most cases, the specification for the protocol, not the host, determines the byte order for that protocol, so not all layers will necessarily use the same byte order.
For Ethernet, the 802.3-2019 - IEEE Standard for Ethernet, Section 3.2.6 Length/Type field (the field you highlighted in the Ethernet frame), clearly indicates the following regarding the byte ordering:
For numerical evaluation, the first octet is the most significant octet of this field.
In other words, "Big Endian".
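For example, if the two octets of the Length/Type field on the wire are 0x08 0x00, evaluating the first octet as the most significant gives 0x0800, the EtherType for IPv4. A minimal Python sketch of that reading (the octet values here are just an example):

```python
import struct

# Two octets of an Ethernet Length/Type field, first octet most significant.
length_type_octets = bytes([0x08, 0x00])

# ">H" = big-endian unsigned 16-bit integer.
(ethertype,) = struct.unpack(">H", length_type_octets)
print(hex(ethertype))  # 0x800 -> the EtherType for IPv4
```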
Regarding TCP, the protocol is defined in RFC 793, but strangely, it's not clearly stated that "Big Endian" is the byte order for this protocol, although RFC 1700 does clarify this in the INTRODUCTION under Data Notations:
Data Notations

The convention in the documentation of Internet Protocols is to express numbers in decimal and to picture data in "big-endian" order [COHEN]. That is, fields are described left to right, with the most significant octet on the left and the least significant octet on the right.
[COHEN], by the way, references the October 1981 IEEE Computer Magazine article written by Danny Cohen titled "On Holy Wars and a Plea for Peace", which is an interesting read in itself.
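In other words, multi-octet fields in TCP/IP headers are transmitted in "network byte order", which is big-endian. As a small illustration, here is a Python sketch that reads the source and destination ports from the first four bytes of a TCP header; the byte values are made up for this example:

```python
import struct

# First four bytes of a hypothetical TCP header:
# source port 443 (0x01BB), destination port 51234 (0xC822).
tcp_header_start = bytes([0x01, 0xBB, 0xC8, 0x22])

# "!" in struct format strings means network byte order, i.e. big-endian.
src_port, dst_port = struct.unpack("!HH", tcp_header_start)
print(src_port, dst_port)  # 443 51234
```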
So again, in almost all cases, the specification will define the byte order. One notable exception to this rule is ICMP, which is defined in RFC 792. It clearly states in the Introduction of the RFC that:
ICMP, uses the basic support of IP as if it were a higher level protocol, however, ICMP is actually an integral part of IP, and must be implemented by every IP module.
Being that ICMP "is actually an integral part of IP", one would presume that it would use "Big Endian" as well, since the Internet Protocol, defined in RFC 791, very clearly specifies the data transmission order of bytes that way in APPENDIX B: Data Transmission Order. Unfortunately, this isn't the case in practice. Some operating systems, notably Windows, use "Little Endian" for ICMP. If you ever look at ICMP packets in Wireshark, that's the reason you'll see some fields (e.g., identifier and sequence number) displayed in both byte orders, because Wireshark has no way of knowing with absolute certainty which byte order was used. At least one other OS's pinger didn't add all ICMP fields as "Big Endian" either. I mentioned this in Comment 21 of the old Bug 5770, which I'll leave you to read if you wish.
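To see why Wireshark shows both readings, note that it is simply interpreting the same two octets twice, once in each byte order. A small Python sketch with hypothetical identifier octets:

```python
import struct

# Two octets of a hypothetical ICMP Echo identifier field.
identifier_octets = bytes([0x00, 0x01])

(id_be,) = struct.unpack(">H", identifier_octets)  # big-endian reading
(id_le,) = struct.unpack("<H", identifier_octets)  # little-endian reading
print(id_be, id_le)  # 1 256 -- Wireshark displays both, since it can't know which was intended
```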
In any case, I believe byte order variation, such as with ICMP, is very rare, at least for published protocols. I have encountered protocols used internally where host byte order was employed, but this is just poor design, and the only ways to figure out which byte order was used when dissecting packets of this type are to guess using heuristics or to allow the user to explicitly choose the byte order.
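As a rough sketch of the heuristic approach (purely hypothetical, not how any particular dissector actually works), one could read a 16-bit length field in both byte orders and keep whichever interpretation matches the number of bytes actually remaining in the packet:

```python
import struct

def guess_length_field_order(field: bytes, bytes_remaining: int) -> str:
    """Guess the byte order of a 16-bit length field by checking which
    interpretation matches the data actually left in the packet.
    Purely illustrative; a real dissector may need stronger heuristics
    or a user preference to choose the byte order explicitly."""
    (be_value,) = struct.unpack(">H", field)
    (le_value,) = struct.unpack("<H", field)
    if be_value == bytes_remaining:
        return "big-endian"
    if le_value == bytes_remaining:
        return "little-endian"
    return "unknown"

# Example: a length field of 0x01 0x00 with 256 bytes of payload left.
print(guess_length_field_order(bytes([0x01, 0x00]), 256))  # big-endian
```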
So, to answer your final question, "is it possible that, while a protocol-level property of TCP is big-endian, there is no guarantee that a packet that I inspect on Wireshark will have TCP headers that are written in big-endian?", the answer is yes, because you could be looking at a faulty implementation of TCP. Of course, if that were the case, then the two peers communicating would both have to be faulty; otherwise communication would simply fail. But to be sure, no correct implementation of TCP will use anything but "Big Endian".
And lastly, I don't know anything about the "Cme Futures iLink3 Sb3" protocol, nor would I know where to find the protocol specification for it, but somewhere in that document it must state the byte transmission order of the protocol, which is presumably "Little Endian", assuming whoever wrote the protocol dissector for it did so correctly.