On Sun, Mar 24, 2013 at 8:59 PM, Nicholas Thompson <
nickthomp...@earthlink.net> wrote:

> Joshua,
>
> You are absolutely correct.  “higher-order bit” it is.  Even better.  Can
> you imagine what a former English major’s imagination did with that?
>
>

Part of the history of this term is the "big vs little endian" (spelling
correct) hardware issue.  The hardware can be laid out such that the bytes
of an integer run from low-to-high (little endian) or the reverse (big
endian):
    http://en.wikipedia.org/wiki/Endianness#Endianness_and_hardware

The reason for little endian is that concatenation of bytes is
more natural .. going from an 8-bit integer to a 16-bit integer to a ... is
simply laying the bytes out in order.  It is also the most reasonable for
"streaming" a data array such as an image.  Most desktop browsers, for
example, run on little-endian hardware.  This was a major bug for me in the
AgentScript library.

Big endian is more natural when considering the integers themselves: the
left-most bit is the MSB .. most significant bit.  IIRC, phones tend to use
this in their browsers.

JavaScript attempts to mask all this via its typed arrays .. but it still
becomes problematic for image/pixel manipulation.

Most libraries that depend on pixel layout now simply create a small 4-byte
array, fill the bytes with 01, 02, 03, 04, and then test whether the 32-bit
value has 01 or 04 at the "high end".

In either format, the high-order bit is the bit signifying the highest
power of 2 in the bit array.

   -- Owen
============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
