On Fri, 19 Jun 2009 16:12 -0400, "Michael Bacon" <[email protected]> wrote:
> (Dropping info-cyrus on the followup)
>
> --On June 19, 2009 3:43:43 PM -0400 Michael Bacon <[email protected]>
> wrote:
>
> > --On June 19, 2009 9:57:03 AM +1000 Bron Gondwana <[email protected]>
> > wrote:
> ># if defined(_BIG_ENDIAN) && !defined(ntohl) && !defined(__lint)
> > /* big-endian */
> ># define ntohl(x)   (x)
> ># define ntohs(x)   (x)
> ># define htonl(x)   (x)
> ># define htons(x)   (x)
> >
> ># elif !defined(ntohl) /* little-endian */
> >
> > I think I may give our friends out in CA a call here...
>
> I've put in a ticket with Sun on this, but in thinking about this, I'm
> pretty sure this kind of definition is widespread (on our Linux 2.6.9 login
> cluster it's the same story in netinet/in.h), so while I can point it out
> to Sun, expecting strong typing to come out of the byteorder functions is
> probably a general mistake.  Since the functions explicitly want a uint32_t
> or a uint16_t as the argument, the 100% proper thing to do would seem to me
> to do an explicit typecast in the argument to these functions.  If it's
> just a null macro, that solves the problem, and if it's a real function,
> it's good form anyway.
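(For concreteness, the cast Michael is suggesting would make the call sites
look roughly like this - the names here are made up for illustration, it's
not a quote from the Cyrus tree:

    #include <stdint.h>
    #include <time.h>
    #include <netinet/in.h>

    /* With the explicit cast, a 64-bit time_t is narrowed to the uint32_t
     * the byteorder interfaces expect, whether htonl() is a real function
     * or just a null macro on a big-endian box. */
    uint32_t timestamp_to_wire(time_t t)
    {
        return htonl((uint32_t)t);
    }

Without the cast, a null-macro htonl() simply hands the full 64-bit value
straight through.)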
I think it's entirely our fault for storing the result in a time_t, which
was 64 bits, and of course it got mapped to the last 4 bytes as follows:

  0 0 0 0 t t t t

And then we treated it like a string and wrote just the first 4 bytes.
It's not Sun's bug, it was Cyrus'.

The correct thing to do (and the change that I made in the patch I sent)
was to store it in a 32 bit value:

  t t t t

I'm working on a patch to replace the whole lot with uint32_t anyway -
standard types for the win :)

Bron.
-- 
  Bron Gondwana
  [email protected]
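(In code terms, the before and after look roughly like this - a simplified
sketch with made-up names, not the exact lines from the index-writing code:

    #include <stdint.h>
    #include <string.h>
    #include <time.h>
    #include <netinet/in.h>

    /* Old behaviour, more or less: on a 64-bit big-endian box the value
     * sits in the *last* 4 bytes of the time_t, and the first 4 bytes are
     * the zero padding, so this writes 0 0 0 0 to the buffer. */
    static void write_timestamp_old(unsigned char *buf, time_t last_updated)
    {
        time_t wide = htonl(last_updated);   /* null macro on big-endian */
        memcpy(buf, &wide, 4);               /* copies the zero bytes */
    }

    /* Shape of the fix: force the value into a 32-bit integer first, so
     * the 4 bytes copied out are the 4 bytes of the timestamp (t t t t). */
    static void write_timestamp_new(unsigned char *buf, time_t last_updated)
    {
        uint32_t narrow = htonl((uint32_t)last_updated);
        memcpy(buf, &narrow, 4);
    }

Which is also why it only bit on big-endian Solaris: on little-endian the
low-order bytes happen to come first in memory anyway.)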
