floating point issue in wsgd dissector

I have implemented the following function in a wsgd dissector:

function string hex_and_float (in uint32 value)
{
  # Swap the two bytes within each 16-bit half of the value (ABCD -> BADC).
  hide var uint32 b0 = (value & 0xFF000000) >> 8;
  hide var uint32 b1 = (value & 0x00FF0000) << 8;
  hide var uint32 b2 = (value & 0x0000FF00) >> 8;
  hide var uint32 b3 = (value & 0x000000FF) << 8;
  hide var uint32 i = b0 + b1 + b2 + b3;
  # Print the swapped value both as hex and as a floating-point number.
  hide var string str = print("0x%x (%.2e)", i, i);
  return str;
}

The returned string is correct for the hexadecimal part but not for the floating-point part. It seems that the variable i is interpreted as a 64-bit float, but it should be a 32-bit float. Example output of hex_and_float():

0x428c4dad (5.52e-315)

How can I force the print function to treat i as a 32-bit float?
