https://issues.dlang.org/show_bug.cgi?id=23573
--- Comment #4 from Iain Buclaw <[email protected]> ---
(In reply to johanengelen from comment #3)
> (In reply to elpenguino+D from comment #2)
> >
> > D doesn't need two undefined flavours of bitfields. Some people find it
> > useful to be able to read bitfields from files or the network regardless
> > of cpu architecture.
>
> OK, fair point.
> But then the bug report should be rewritten without comparing it to C
> bitfields, and should instead specify exactly the desired bit layout in
> bytes. It's not clear to me what is meant, because (I think) the OP says
> that the order of bits _within_ a byte is reversed depending on endianness,
> which does not give the architecture-independent behaviour that you want.

Well, the bug is that it *nearly* ended up in Phobos without anyone checking
whether it actually worked:

https://github.com/dlang/phobos/pull/8478/files#diff-0f4bb61407d729d1f6d76e25e15eb84b89ceae014ccc887149c53fb2290a29ccR377-R392

std.numeric also uses bitfields to extract the sign, exponent, and
significand out of a float or double:

    mixin(bitfields!(
        T_sig, "significand", precision,
        T_exp, "exponent"   , exponentWidth,
        bool , "sign"       , flags & flags.signed
    ));

It is not obvious to the observer that this mixin generates code matching the
field layout on both big-endian and little-endian targets (depending on which
is in effect).
--
