Strange bug in transmission of float values over TCP/IP
axilmar

Hello all.

I have an extremely strange bug.

I have two applications that communicate over TCP/IP.

Application A is the server, and application B is the client.

Application A sends a bunch of float values to application B every 100 milliseconds.

The bug is the following: sometimes some of the float values received by application B are not the same as the values transmitted by application A.

Initially, I thought there was a problem with the Ethernet or TCP/IP drivers (some sort of data corruption). I then tested the code on other Windows machines, but the problem persisted.

I then tested the code on Linux (Ubuntu 10.04.1 LTS) and the problem is still there!!!

The values are logged just before they are sent and just after they are received.

The code is pretty straightforward: the message protocol has a 4 byte header like this:

//message header
struct MESSAGE_HEADER {
    unsigned short type;
    unsigned short length;
};

//orientation message
struct ORIENTATION_MESSAGE : MESSAGE_HEADER
{
    float azimuth;
    float elevation;
    float speed_az;
    float speed_elev;
};

//any message
struct MESSAGE : MESSAGE_HEADER {
    char buffer[512];
};

//receive specific size of bytes from the socket
static int receive(SOCKET socket, void *buffer, size_t size) {
    int r;
    do {
        r = recv(socket, (char *)buffer, size, 0);
        if (r == 0 || r == SOCKET_ERROR) break;
        buffer = (char *)buffer + r;
        size -= r;
    } while (size);
    return r;
}

//send specific size of bytes to a socket
static int send(SOCKET socket, const void *buffer, size_t size) {
    int r;
    do {
        r = send(socket, (const char *)buffer, size, 0);
        if (r == 0 || r == SOCKET_ERROR) break;
        buffer = (char *)buffer + r;
        size -= r;
    } while (size);
    return r;
}

//get message from socket
static bool receive(SOCKET socket, MESSAGE &msg) {
    int r = receive(socket, &msg, sizeof(MESSAGE_HEADER));
    if (r == SOCKET_ERROR || r == 0) return false;
    if (ntohs(msg.length) == 0) return true;
    r = receive(socket, msg.buffer, ntohs(msg.length));
    if (r == SOCKET_ERROR || r == 0) return false;
    return true;
}

//send message
static bool send(SOCKET socket, const MESSAGE &msg) {
    int r = send(socket, &msg, ntohs(msg.length) + sizeof(MESSAGE_HEADER));
    if (r == SOCKET_ERROR || r == 0) return false;
    return true;
}

When I receive the message 'orientation', sometimes the 'azimuth' value is different from the one sent by the server!

Shouldn't the data be the same all the time? Doesn't TCP/IP guarantee uncorrupted delivery of the data? Could it be that an exception in the math co-processor affects the TCP/IP stack? Is it a problem that I receive a small number of bytes first (the 4-byte header) and then the message body?

Thomas Fjellstrom

First thing I'd suggest is to not send the struct raw like that. Serialize it properly and send it out. By default structs can have a fair amount of padding between elements depending on types and order, and different compilers, and different versions of the same compiler, may align elements differently. So it's best not to send structs directly.

Oscar Giner

Are the server and client compiled with the same compiler and version, and with the same compile flags (ones that affect how certain floating point operations are executed)? And even then, running each one on a different CPU may lead to slightly different results (IEEE specifies floating point representation, but not operations on them, so a simple 32 bit float -> 80 bit float (as the x86 FPU operates with 80 bit floats) conversion may yield different results between CPUs).

So don't use floats with network applications (or any application where different computers must return exactly the same value). Floating point is not designed for 100% accurate operations.

kazzmir

In my game I shift all floating point values to the left of the decimal by 2 and send an integer, then shift it back on the receiving side.

int to_send = (int)(some_float * 100);
...
float received = recv() / 100.0;

SiegeLord

Hmm... I'm a tad confused. When you send a float like that (via fixed point) you're relying on the integer to be precisely transported over the network. What is the difference between that and storing the float's bit pattern inside the int with full precision? After all, float bit patterns are well standardized.

All the other CPU differences are endemic to using floats period, and have nothing to do with transporting them over a network.

bamccaig

I don't think that you can rely on all machines representing the floating point number in exactly the same bit pattern... It's probably best to send character data and parse it. :)

GullRaDriel

Use double. Float aren't normalized as much as double.

I used to need the same thing as you and so I went in the same problem. The tests showed it to work with double because it's IEEEEEEEEEEEEEE I don't know what.

SiegeLord
bamccaig said:

I don't think that you can rely on all machines representing the floating point number in exactly the same bit pattern... It's probably best to send character data and parse it. :)

I'd need evidence to prove that to me. IEEE 754 standard strictly defines the bit patterns of valid floats (I think it gives leeway for NaN's). I can't think of any system in wide use that does not implement IEEE 754.

kazzmir

You may be right that float/double's can be sent over the network to arbitrary CPU's but I do not know if this is strictly true so I played it safe and just used integers.

SiegeLord
int a = 0;
a = 0; // Set it again, just to be sure

;)

Billybob
SiegeLord said:

int a = 0;
a = 0; // Set it again, just to be sure

QFT.

My bet is on struct padding.

Evert
bamccaig said:

I don't think that you can rely on all machines representing the floating point number in exactly the same bit pattern...

Sure you can. If you know they're using IEEE floats and the processors in question use the same endianness.

Float aren't normalized as much as double.

Bollocks. Single precision floats are just as standard as double precision floats. Most likely, they're both encoded using IEEE 754, and if one of them isn't, neither is the other one.

Transmitting floats in binary over a network is no different and no less portable than dumping floats to a file in binary (which is something you do, in principle, want to be careful about, because there are computers out there that store floats differently than a consumer PC does).
What is not very portable is relying on the layout of a struct to be the same from one compilation to another.

axilmar

First thing I'd suggest is to not send the struct raw like that. Serialize it properly and send it out. By default structs can have a fair amount of padding between elements depending on types and order, and different compilers, and different versions of the same compiler, may align elements differently. So it's best not to send structs directly.

True, but the code uses packing of 1, so that is not the problem.

Furthermore, if it was, the problem would be immediately visible.

Are the server and client compiled with the same compiler and version, and with the same compile flags (ones that affect how certain floating point operations are executed)?

Yes.

Quote:

And even then, running each one on a different CPU may lead to slightly different results (IEEE specifies floating point representation, but not operations on them, so a simple 32 bit float -> 80 bit float (as the x86 FPU operates with 80 bit floats) conversion may yield different results between CPUs).

True, but can this account for the big differences in value? For example, a value of 0.780193 in the server becomes 0.790193 in the client. Can the value difference really be 0.01?

Quote:

So don't use floats with network applications (or any application where different computers must return exactly the same value). Floating point is not designed for 100% accurate operations.

Unfortunately, I have to use floats because it's in the specification protocol given by the client.

Evert said:

Most likely, they're both encoded using IEEE 754

True. The protocol specifies IEEE 754 floats.

Arthur Kalliokoski
axilmar said:

a value of 0.780193 in the server becomes 0.790193 in the client.

Pics or it didn't happen :X

Evert
axilmar said:

for example, a value of 0.780193 in the server becomes 0.790193 in the client. Can the value difference be 0.010?

No.[1]
Remember, you're not doing calculations here, just sending numbers across.

You can do the following experiment: read the float value in as 32-bit integer and examine the bit pattern. These should be identical. If they are, and yet the float values are different... well, I'm not sure what to suggest, except to interpret the float explicitly and reconstruct its value "by hand". If they're not the same, there's a bug somewhere.

References

  1. Yes, in a calculation, you can if you're not careful - especially with single precision.
Arthur Kalliokoski
axilmar said:

0.780193 in the server becomes 0.790193 in the client.

The hex representations are

0.780193 = 0x3F47BABA
0.790193 = 0x3F4A4A17

It would be quite a remarkable coincidence for random corruption to turn the first pattern into the second.

axilmar

Pics or it didn't happen

I've attached a pic of the problem. The server transmits the value -0.830673, and the client receives the value -0.831650. The pic is from Excel; the columns are set to type 'number' with 6 decimal digits.

It's not a rounding issue with Excel, because the Excel data come from .csv files produced by logging the data directly in the client and server, and the same values exist in the .csv files.

Evert said:

You can do the following experiment: read the float value in as 32-bit integer and examine the bit pattern. These should be identical. If they are, and yet the float values are different... well, I'm not sure what to suggest, except to interpret the float explicitly and reconstruct its value "by hand". If they're not the same, there's a bug somewhere.

Good idea. I am also going to use Wireshark to see what are the actual data transmitted over the network.

EDIT:

I added another picture that shows the transmitted/received bytes at server/client. There is a difference between the bytes transmitted and the bytes received.

Tobias Dammers

If the bytes sent differ from the bytes received, then the only thing I can think of is a firewall or router between client and server that somehow misinterprets the bytes; maybe something along the way is trying to convert between character encodings.

GullRaDriel

maybe something along the way is trying to convert between character encodings.

I wouldn't expect a router to do that much. TCP is guaranteed to hand you the exact bytes you put in; a router doing conversions would break TCP itself.

axilmar said:

There is a difference between the bytes transmitted and the bytes received.

There lies your problem. The data is filled with garbage at the end point.

Edit: Are you checking the return values of your functions? Are you sure it's not the MESSAGE_HEADER handling that's broken and causing you to receive the wrong amount of data?

Edit2:

Thomas said:

First thing I'd suggest is to not send the struct raw like that. Serialize it properly and send it out. By default structs can have a fair amount of padding between elements depending on types and order, and different compilers, and different versions of the same compiler, may align elements differently. So it's best not to send structs directly.

Quoted for truth!!! I didn't notice it before, but you must not send structs directly over the network. The byte order of each computer can be different. Serialize.

My own send and recv work like this:

-htonl of type
-send type
-htonl of size
-send size
-send buffer whose length is size

-recv type
-type = ntohl type
-recv size
-size = ntohl size
-recv buffer whose length is size

Billybob

You never show how a MESSAGE is constructed before sending, or re-constructed after receiving.

Evert

I'd suggest checking for parity bits, but it's a bit odd if that only affects the one number.
Anyway, if the bit patterns are different, then your problem has nothing to do with floats per se and the same problem would/should show up with integer data. Or any data really.

You do have access to both the server code and the client code? Does the problem persist if you send the data over a local socket?

axilmar

I think I found the problem. The endianness swapping routine does not work for floats.

If this code is run:

#include <iostream>
using namespace std;

float ntohf(float f)
{
    float r;
    unsigned char *s = (unsigned char *)&f;
    unsigned char *d = (unsigned char *)&r;
    d[0] = s[3];
    d[1] = s[2];
    d[2] = s[1];
    d[3] = s[0];
    return r;
}

int main() {
    unsigned long l = 3206974079;
    float f1 = (float &)l;
    float f2 = ntohf(ntohf(f1));
    unsigned char *c1 = (unsigned char *)&f1;
    unsigned char *c2 = (unsigned char *)&f2;
    printf("%02X %02X %02X %02X\n", c1[0], c1[1], c1[2], c1[3]);
    printf("%02X %02X %02X %02X\n", c2[0], c2[1], c2[2], c2[3]);
    getchar();
    return 0;
}

It outputs the following:

7F 8A 26 BF
7F CA 26 BF

The two lines should be identical, but they are not.

Does anyone have an idea why this is happening?

Thomas Fjellstrom

TCP already has parity bits. If something went wrong the packet never would have made it. Unless the receiver corrupted it after the TCP stack was done with it.

Evert
axilmar said:

Does anyone have an idea why this is happening?

Unless there is something peculiar about C++ I don't know about, the following C program should be identical:

#include <stdio.h>

float ntohf(float f)
{
    float r;
    unsigned char *s = (unsigned char *)&f;
    unsigned char *d = (unsigned char *)&r;
    d[0] = s[3];
    d[1] = s[2];
    d[2] = s[1];
    d[3] = s[0];
    return r;
}

int main() {
    unsigned long l = 3206974079;
    float f1 = *((float *)&l);
    float f2 = ntohf(ntohf(f1));
    unsigned char *c1 = (unsigned char *)&f1;
    unsigned char *c2 = (unsigned char *)&f2;
    printf("%02X %02X %02X %02X\n", c1[0], c1[1], c1[2], c1[3]);
    printf("%02X %02X %02X %02X\n", c2[0], c2[1], c2[2], c2[3]);
    return 0;
}

(Yes, I basically copied your code; I'd use a union instead of those fugly casts). This gives

7F 8A 26 BF
7F 8A 26 BF

as expected. So I would say that there is either a problem with your compiler, or your hardware...

TCP already has parity bits. If something went wrong the packet never would have made it.

I was thinking something could have stuck an extra layer of parity bits in there. Well, actually, I didn't actually think that, but it was one of the only things I could think of that would give you different numbers on the sender and the receiver.

axilmar

I actually went through the program in assembly... the function ntohf, instead of returning the float through a 32-bit register, pushed the value onto the floating point stack of the co-processor.

The floating point stack accepts 80-bit values, and therefore the float value expanded from 32 to 80 bits.

When the caller read the value, the value was extracted from the floating point stack, and converted from 80 bits to 32 bits. This caused a rounding problem.

EDIT:

The rounding problem was magnified because it happened on the swapped float, not on the original one.

bamccaig

I get the same results as axilmar with both programs in both 32-bit Linux and 32-bit Windows (compiled with GCC and MinGW).

Arthur Kalliokoski

t.cpp:17: warning: this decimal constant is unsigned only in ISO C90
t.cpp: In function 'int main()':
t.cpp:18: warning: dereferencing type-punned pointer will break strict-aliasing rules

I didn't investigate further

axilmar

The problem is that the float value that has its bytes swapped is pushed to the floating point stack. When popped from the stack, it is rounded by the hardware, but the value has swapped bytes, and therefore the rounding is wrong.

Arthur Kalliokoski

The rounding, if any, would only occur on the least significant bits.

Thomas Fjellstrom

The rounding, if any, would only occur on the least significant bits.

The problem is his endian swapping function is passing the "swapped" value as a float, and it shouldn't. At that point it still has some of the data swapped. Which is bad. Instead it should be passed as an unsigned int, or something.

Arthur Kalliokoski

it should be passed as an unsigned int

Ah, ok. As long as it's not the other way around (nan's)

Thomas Fjellstrom

Or heck, an array of unsigned char might be best. That way nothing should muck with the data before the swapper.

bamccaig

t.cpp:17: warning: this decimal constant is unsigned only in ISO C90
t.cpp: In function 'int main()':
t.cpp:18: warning: dereferencing type-punned pointer will break strict-aliasing rules

I didn't investigate further

I saw this as well with both programs (-Wall). Interestingly, on my server running Gentoo in a XEN VM, I don't get this warning and I get the intended results from both programs.

It's referring to this line, IIRC: unsigned long l = 3206974079;

Thomas Fjellstrom

It probably depends on the optimization level, and the version of GCC.

Evert
bamccaig said:

I get the same results as axilmar with both programs in both 32-bit Linux and 32-bit Windows (compiled with GCC and MinGW).

Interesting:

eglebbk@morgaine:~/tmp>gcc -Wall -m64 -O0 test.c 
eglebbk@morgaine:~/tmp>./a.out 
7F 8A 26 BF
7F 8A 26 BF
eglebbk@morgaine:~/tmp>gcc -Wall -m64 -O2 test.c 
eglebbk@morgaine:~/tmp>./a.out 
7F 8A 26 BF
7F 8A 26 BF
eglebbk@morgaine:~/tmp>gcc -Wall -m32 -O0 test.c 
test.c: In function ‘main’:
test.c:16: warning: this decimal constant is unsigned only in ISO C90
eglebbk@morgaine:~/tmp>./a.out 
7F 8A 26 BF
7F CA 26 BF
eglebbk@morgaine:~/tmp>gcc -Wall -m32 -O2 test.c 
test.c: In function ‘main’:
test.c:16: warning: this decimal constant is unsigned only in ISO C90
eglebbk@morgaine:~/tmp>./a.out 
7F 8A 26 BF
7F CA 26 BF
eglebbk@morgaine:~/tmp>gcc -Wall -m32 -O3 test.c 
test.c: In function ‘main’:
test.c:16: warning: this decimal constant is unsigned only in ISO C90
eglebbk@morgaine:~/tmp>./a.out 
7F 8A 26 BF
7F 8A 26 BF
eglebbk@morgaine:~/tmp>

So 32 bit vs 64 bit makes a difference, and compiler flags make a difference. No real surprise there, I guess.
EDIT: I guess that in the last instance the problem doesn't show up because the compiler optimises away the conversion.

The problem is his endian swapping function is passing the "swapped" value as a float, and it shouldn't. At that point it still has some of the data swapped.

Indeed.
I guess the lesson is to use integer datatypes whenever you're dealing with bit patterns directly in any way.

Or heck, an array of unsigned char might be best. That way nothing should muck with the data before the swapper.

I'd still use a union. :P
Possibly about one of the few things a union is really useful for.

GullRaDriel

As stated before, I had the problem with floats, but not with doubles. Results may vary from target to target.

I assume that the best way to send floating values and keep them as they are is to convert them to a string before sending, and convert them back to a float on receive.

Links that could help:
http://codeidol.com/csharp/csharp-network/Using-The-Csharp-Sockets-Helper-Classes/Moving-Data-across-the-Network/

Last search:

http://www.experts-exchange.com/Programming/Languages/C/Q_20266384.html said:

Some remarks from an old sod who's been there and done that. Never convert a float (or double) to its decimal representation because it's so soft for your hands and it transmits so portably over a socket or whatever.

Most floating point numbers cannot be represented exactly in decimal AND binary radix notation. Rounding errors will kill you somewhere in the near future.

I consider the lack of htonf, htond and their counterparts a severe omission from the htonX suite of macros/functions.

And there is more misery showing up here; passing a struct to another hardware platform doesn't make any sense either; how about internal and trailing padding? How about alignment of the individual members? Consider this:

struct _my_thingy {
    int i;
    float f;
};

Maybe on my architecture sizeof(struct _my_thingy) == 8, which is what all the Intelians expect. What about 64 bit integers then? What about 8 byte alignment on some machines?

As a general rule of thumb: don't pass structs around from earth to mars. Instead, unravel them into their individual members and pass those around, which brings us back again to the question -- how do I send a float over a cable somewhere to something else?

Here's a possible portable way of doing this. The function:

double frexp(double x, int* exp);

returns a normalized (double) floating point number, and stores the (binary) exponent in *exp, given any number x.

We're halfway there: given the htonl() macro/function, we can send the exponent to the other world safely. What about that normalized mantissa? The mantissa happens to be a number in the range [1/2 ... 1) if x was non-zero; otherwise this normalized number equals zero also.

A cheap trick (assuming a float number can be stored in at most four bytes, including the exponent) is this:

long mant = f*0x40000000;

Variable mant contains the mantissa, multiplied by a huge number, just enough to keep all binary digits.

This long int number can be transmitted to the other world using htonl() again.

The other world receives the exponent and uses ntohl() to transform it back to its internal format. Next the mantissa is received, ntohl() is applied again, the result is divided by 0x40000000, and finally the function ldexp:

double ldexp(double mant, int exp);

is applied in order to get the original number back again in the alien format.

I know it's quite a job to get things 'portable', but all MS dependent assumptions are simply show stoppers here ...

kind regards,

Jos

EDIT:
Last minute gem: you can also use the XDR routines, which exist to do exactly what you want.

You may also want to have a look at JSON and BSON.

Also, Beej's Guide has been updated and now has a little "how to send data" section ^^

I'm done for it !

Arthur Kalliokoski

If doubles aren't causing a problem, I'd say it's just because there are fewer rounding differences, you just haven't found the values that will mess up yet.

axilmar

I assume that the best way to send floating values and keep them as they are is to send them converted into a string, and recv converting them back to a float.

No, there is no need for that. Just reverse the float's bytes in place and send it over the network. The real problem is when the reversed float is pushed onto the FP stack by the callee and then popped from the FP stack by the caller. That's when it gets messed up.

BAF
Quote:

float f2 = ntohf(ntohf(f1));

Why the hell are you calling ntohf twice? Shouldn't that be ntohf(htonf(f1))? ???

GullRaDriel

I'm OK with what Arthur said about the double, but I'm not totally convinced with your quote, axilmar.

Nothing's more portable than the converted to ascii trick.

Bwah ! I don't care.

;D

EDIT: You should really go and read the beej's link I posted before. It's a lot more complete than the previous versions.

Sevalecan
BAF said:

Why the hell are you calling ntohf twice? Shouldn't that be ntohf(htonf(f1))?

Yeah, that'll fuck it up.

Arthur Kalliokoski

How would it fuck it up? Both ntohl() and htonl() simply swap the bytes on little endian machines but leave them intact on big-endian machines, so if you do it twice you have the original again.

#include <stdio.h>
#include <netinet/in.h>

int a = 0x12345678;
int b;

int main(void)
{
    printf("Original number is 0x%X\n",a);
    printf("Converting with ntohl() gives ");
    b = ntohl(a);
    printf("0x%X\n",b);
    printf("Convert back with ntohl() gives ");
    a = ntohl(b);
    printf("0x%X\n",a);

    printf("\nNow doing it the \"right\" way with ntohl() and htonl()\n");
    printf("Original number is 0x%X\n",a);
    printf("Converting with ntohl() gives ");
    b = ntohl(a);
    printf("0x%X\n",b);
    printf("Convert back with htonl() gives ");
    a = htonl(b);
    printf("0x%X\n",a);

    return 0;
}

GullRaDriel

And so, in what way IS calling it twice useful?

axilmar
BAF said:

Why the hell are you calling ntohf twice? Shouldn't that be ntohf(htonf(f1))?

Both ntohf and htonf do the exact same job: they reverse the bytes of the variable.

but I'm not totally convinced with your quote, axilmar.

Nothing's more portable than the converted to ascii trick.

IEEE 754 floats have a specific representation, so there is no need to convert it to ASCII.

And so, in which way calling it twice IS useful ?

It's not useful in an application, I just put it up to demonstrate the problem.

When transmitting float values over the network, the transmitter application does 'htonf(f)' and the receiver does 'ntohf(f)'.

Evert

So, did you solve the problem by not interpreting the value as a float until the bits had been properly unscrambled?

axilmar
Evert said:

So, did you solve the problem by not interpreting the value as a float until the bits had been properly unscrambled?

I solved the problem by doing in-place reversal of bytes in the packet to be transmitted.

bamccaig

Interesting that the ntohf functions written in this thread don't actually account for the host system's endianness. :P

axilmar
bamccaig said:

Interesting that the ntohf functions written in this thread don't actually account for the host system's endianness.

Please elaborate? The endianness swapping function I posted converts a float value from little endian to big endian, and it's useful on 80x86 systems; it's not cross-platform.

GullRaDriel
axilmar said:

IEEE 754 floats have a specific representation, so there is no need to convert it to ASCII.

The mainframes in use at our office do not support IEEE 754. Plus they are EBCDIC ^^

Oscar Giner

If you only support x86 systems, what's the point in converting the values to big endian? Just send everything in little endian.

bamccaig

^ This.

Billybob

What about Mac's!? ... oh wait.

The machines we use in day to day life are no longer x86 only. Look at your phone. ARM is becoming an increasingly popular platform, and it can run in either endianness. Depending on the application, it may be wise to be endian-neutral in your network code just so you have the ability to interface with mobile devices.

That said, if your networking API is designed well, you can skip endian-neutrality for now and add it later with minimal effort.

ImLeftFooted

Wait ARM can be big endian? My code has been assuming little endian without issues.

What causes ARM to switch -- are there any iPhone that would have it switched?

BAF
axilmar said:

Both ntohf and htonf do the exact same job: they reverse the bytes of the variable.

Must be something about C coders. You have absolutely no good reason to not use both functions. Doing it the way you've done it only adds confusion and makes the code harder to read.

I like my code to be self documenting, not as archaic and obscure as possible.

Billybob

Wait ARM can be big endian?

According to Wikipedia's article on bi-endian hardware, yes. It can either be changed by software or, on some platforms, locked by the hardware. I would bet Apple's A4, and any other uses of ARM, are hardware locked.

axilmar
BAF said:

Must be something about C coders. You have absolutely no good reason to not use both functions. Doing it the way you've done it only adds confusion and makes the code harder to read.

I like my code to be self documenting, not as archaic and obscure as possible.

I agree, I only did this in the context of the example. In the real code at work, I have two functions (ntohf, htonf), which is helpful: if you want to check incoming values, you can put a breakpoint to 'ntohf', for example.

Thread #605106. Printed from Allegro.cc