On 27 August 2011 21:57, Brandon Allbery <allber...@gmail.com> wrote:
> On Sat, Aug 27, 2011 at 06:57, Andrew Coppin <andrewcop...@btinternet.com> wrote:
>
>> On 26/08/2011 10:51 PM, Steve Schafer wrote:
>>
>>> On Fri, 26 Aug 2011 20:30:02 +0100, you wrote:
>>>
>>>> You wouldn't want to know how many bits you need to store on disk
>>>> to reliably recreate the value?
>>>
>>> I can't say that I have cared about that sort of thing in a very long
>>> time. Bits are rather cheap these days. I store data on disk, and the
>>> space it occupies is whatever it is; I don't worry about it.
>>
>> I meant if you're trying to *implement* serialisation. The Bits class
>> allows you to access bits one by one, but surely you'd want some way
>> to know how many bits you need to keep?
>
> I think that falls into the realm of protocol design; if you're doing it
> in your program at runtime, you're probably doing it wrong. (The
> fixed-size version makes sense for marshalling; it's *dynamic* sizes
> that need to be thought out beforehand.)

All search engines deal with compressed integers, as do all compressors and most people doing bit manipulation. Golomb, gamma, Elias, and Rice coding all need this. Heck, even the Intel engineers chose to optimize this function by including the BSR instruction in the 386 architecture. This is a basic building block.

Don't underestimate the bit; it is coming back with a vengeance. Bit-coding is everywhere now, because of the memory hierarchy. No Haskeller should be left behind.

Alexander
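To make the point concrete, here is a small sketch (my own, not code from the thread) of the function under discussion: the number of bits needed to represent a value, i.e. the position of the highest set bit, which is exactly what the x86 BSR instruction computes in hardware. The Elias gamma encoder on top of it illustrates why variable-length integer codes need this primitive; the names `bitLength` and `eliasGamma` are mine, not standard library functions.

```haskell
import Data.Bits (shiftR, testBit)
import Data.Word (Word64)

-- Number of bits needed to represent n in binary (bitLength 0 = 0).
-- A naive O(bits) loop; BSR does the same job in one instruction.
bitLength :: Word64 -> Int
bitLength 0 = 0
bitLength n = 1 + bitLength (n `shiftR` 1)

-- Elias gamma code for n >= 1: (bitLength n - 1) zero bits,
-- followed by n written out in binary, most significant bit first.
-- The decoder counts the leading zeros to learn the payload length.
eliasGamma :: Word64 -> [Bool]
eliasGamma n = replicate (len - 1) False
            ++ [testBit n i | i <- [len - 1, len - 2 .. 0]]
  where len = bitLength n

main :: IO ()
main = do
  print (map bitLength [0, 1, 255, 256])  -- [0,1,8,9]
  print (eliasGamma 5)                    -- 5 = 101b, gamma code 00101
```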
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe