On Thursday, 15 February 2018 at 18:30:57 UTC, Jonathan M Davis
wrote:
On Thursday, February 15, 2018 17:53:54 Kyle via
Digitalmars-d-learn wrote:
I want to be able to pass an int to a function, then in the
function ensure that the int is little-endian (whether it
starts out that way or needs to be converted) before
additional stuff is done to the passed int. The end goal is
compliance with a remote console protocol that expects a
little-endian 32-bit signed integer as part of a packet.
Well, in the general case, you can't actually test whether an
integer is little endian or not, though if you know that it's
only allowed to be within a specific range of values, I suppose
that you could infer which it is. And normally, whether a value
is little endian or big endian is supposed to be well-defined
by where it's used, but if you do have some rare case where
that's not true, then it could get interesting. That's why UTF-16
files are supposed to have BOMs.
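(A minimal illustration of that point, assuming Phobos' std.bitmanip: the
same four bytes are a perfectly valid int under either interpretation, so
the value alone can't tell you which byte order was intended.)

import std.bitmanip : bigEndianToNative, littleEndianToNative;
import std.stdio : writeln;

void main()
{
    // The same four raw bytes...
    ubyte[4] raw = [0x01, 0x00, 0x00, 0x00];

    // ...are the int 1 if read as little endian...
    writeln(littleEndianToNative!int(raw));   // 1

    // ...and the int 16777216 if read as big endian. Both are valid,
    // so nothing in the value itself reveals the intended byte order.
    writeln(bigEndianToNative!int(raw));      // 16777216
}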
Either way, there's nothing in std.bitmanip geared towards
guessing the endianness of an integral value. It's all based on
the idea that an integral value is in the native endianness of
the system and that the application knows whether a ubyte[n]
contains bytes arranged as little endian or big endian.
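A minimal sketch of that usage with std.bitmanip's nativeToLittleEndian /
littleEndianToNative: the int stays in native endianness inside the
program, and the byte order is only fixed at the point where it is
serialized into (or read out of) the packet buffer.

import std.bitmanip : littleEndianToNative, nativeToLittleEndian;
import std.stdio : writeln;

void main()
{
    int value = 0x12345678;

    // Same result on little- and big-endian hosts: always the
    // little-endian byte sequence the protocol expects.
    ubyte[4] wire = nativeToLittleEndian(value);
    writeln(wire);   // [120, 86, 52, 18]

    // And back to a native int when reading a reply packet.
    assert(littleEndianToNative!int(wire) == value);
}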
- Jonathan M Davis
I was thinking that the client could determine its own endianness
and either byte-swap the passed int if the host is big-endian, or
leave it alone if it's little-endian, then send it to the server as
little-endian at that point; a rough sketch of that idea follows.
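Something like this, assuming std.system and std.bitmanip from Phobos
(the toWireOrder helper name is made up for illustration; as noted
above, nativeToLittleEndian folds both steps into one call):

import std.bitmanip : swapEndian;
import std.system : Endian, endian;

// Hypothetical helper: returns the value with its bytes arranged so
// that writing it out verbatim yields little-endian order on the wire.
int toWireOrder(int value)
{
    // Little-endian host: the bytes are already in wire order.
    if (endian == Endian.littleEndian)
        return value;

    // Big-endian host: swap the four bytes.
    return swapEndian(value);
}

Regardless, I just came across a vibe-packaged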
RCON library by Benjamin Schaaf that may work for me, so that's
the new plan for now. All you guys helping people on the forums
daily are awesome; it's still amazing to me that I can ask
questions here and routinely get answers directly from core
language contributors and D book authors. Thanks for what you do.