On Monday, January 19, 2015 12:04:38 via Digitalmars-d-learn wrote:
> As a follow-up to
>
> http://forum.dlang.org/thread/fdfwrdtjcawprvvko...@forum.dlang.org#post-qxudiyoygnvvbovhjfgt:40forum.dlang.org
>
> I'm looking for a function that figures out the number of bits
> that are needed to represent a zero-based integer:
>
> value-set         => bits
> =========================
> 0,1               => 1 (*)
> 0,1,2             => 2
> 0,1,2,3           => 2 (*)
> 0,1,2,3,4         => 3
> 0,1,2,3,4,5       => 3
> 0,1,2,3,4,5,6     => 3
> 0,1,2,3,4,5,6,7   => 3 (*)
> 0,1,2,3,4,5,6,7,8 => 4
> ...
>
> (*) means full bit usage
I had this lying around in one of my projects:

/++ Log₂ over integral values rather than reals, i.e. floor(log₂(n)). +/
ulong lg(ulong n)
{
    return n == 1 ? 0 : 1 + lg(n / 2);
}

/++ Returns the minimum number of bits required to hold the given value. +/
size_t bitsRequired(ulong value)
{
    // 0 still takes one bit to store; otherwise it's floor(log₂(value)) + 1.
    return value == 0 ? 1 : cast(size_t)lg(value) + 1;
}

unittest
{
    import std.string, std.typetuple;

    ulong one = 1;
    assert(bitsRequired(0) == 1);

    // Each power of two 2^bits needs exactly bits + 1 bits.
    foreach(bits; 0 .. 31)
    {
        assert(bitsRequired(one << bits) == bits + 1,
               format("bits [%s], result [%s]",
                      bits, bitsRequired(one << bits)));
    }

    // Every unsigned type's min needs one bit; its max needs all of them.
    foreach(T; TypeTuple!(ubyte, ushort, uint, ulong))
    {
        assert(bitsRequired(T.min) == 1);
        assert(bitsRequired(T.max) == T.sizeof * 8,
               format("Type [%s], result [%s]",
                      T.stringof, bitsRequired(T.max)));
    }
}

- Jonathan M Davis
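P.S. If the recursion in lg is a concern, core.bitop.bsr (which returns the
zero-based index of the most significant set bit; its result is undefined
for 0, hence the guard) gives the same answer in constant time. A minimal
sketch, assuming a target where bsr has a ulong overload; bitsRequiredViaBsr
is just an illustrative name:

size_t bitsRequiredViaBsr(ulong value)
{
    import core.bitop : bsr;
    // The index of the highest set bit plus one is the bit count.
    // bsr(0) is undefined, so 0 is special-cased to one bit.
    return value == 0 ? 1 : cast(size_t)(bsr(value) + 1);
}

This agrees with bitsRequired on the unittest above, and bsr typically
lowers to a single bit-scan instruction rather than an O(log n) recursion.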