> Some musings, feel free to ignore this post.
>
> Sometimes I have to convert enums to integers or integers to enums. I'd like
> to do it efficiently (that means with minimal or no run-time overhead) and
> safely (that means I'd like the type system to prove I am not introducing
> bugs, like assigning enum values that don't exist).
>
> This function classifies every natural number into one of three classes
> (deficient, perfect, and abundant numbers, according to the sum of its
> factors), so I use a 3-member enum:
>
> enum NumberClass : int { deficient=-1, perfect=0, abundant=1 }
>
> NumberClass classifyNumber(int n) {
>     auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
>     int difference = reduce!q{a + b}(0, factors) - n;
>     return cast(NumberClass)sgn(difference);
> }
>
> std.math.sgn() returns a value in {-1, 0, 1}, so this first version of the
> function uses just a cast, after carefully giving the NumberClass members the
> same values. But a cast stops the type system, so it can't guarantee the code
> is working correctly or safely: if I change the values of the enum members,
> the type system doesn't catch the bug.
>
> This second version is safer and works with any values assigned to the enum
> members, but it performs up to two tests at run time:
>
> NumberClass classifyNumber(int n) {
>     auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
>     int diff = sgn(reduce!q{a + b}(0, factors) - n);
>     if (diff == -1)
>         return NumberClass.deficient;
>     else if (diff == 0)
>         return NumberClass.perfect;
>     else
>         return NumberClass.abundant;
> }
>
> This version is about as safe, and uses one access into an immutable array
> (I have not used an enum array to avoid wasting even more run time):
>
> NumberClass classifyNumber(int n) {
>     static immutable res = [NumberClass.deficient, NumberClass.perfect,
>                             NumberClass.abundant];
>     auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
>     int sign = sgn(reduce!q{a + b}(0, factors) - n);
>     return res[sign + 1];
> }
>
> Using a switch is another safe option (I can't use a final switch here).
> This too has some run-time overhead:
>
> NumberClass classifyNumber(int n) {
>     auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
>     int sign = sgn(reduce!q{a + b}(0, factors) - n);
>     switch (sign) {
>         case -1: return NumberClass.deficient;
>         case 0:  return NumberClass.perfect;
>         default: return NumberClass.abundant;
>     }
> }
>
> In theory a somewhat stronger type system (with ranged integers as
> first-class types) would know that sgn() returns exactly the values of
> NumberClass, and would allow the first version without the cast, with a
> compile-time proof of correctness:
>
> NumberClass classifyNumber(int n) {
>     auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
>     int difference = reduce!q{a + b}(0, factors) - n;
>     return sgn(difference);
> }
>
> I don't know what to think.
>
> Bye,
> bearophile
The overhead you are referring to becomes negligible even for moderately large n. An entirely safe version of the code, without any run-time overhead, would be:

enum NumberClass : int { deficient=-1, perfect=0, abundant=1 }

NumberClass classifyNumber(int n) {
    auto factors = filter!((i){ return n % i == 0; })(iota(1, n));
    int difference = reduce!q{a + b}(0, factors) - n;
    // Guard the cast: fail at compile time if the enum values ever change.
    static assert(NumberClass.min == -1 && NumberClass.max == 1);
    static assert(cast(int)NumberClass.deficient == -1 &&
                  cast(int)NumberClass.perfect   ==  0 &&
                  cast(int)NumberClass.abundant  ==  1);
    return cast(NumberClass)sgn(difference);
}

This is not the path of least resistance, though.
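As a small sanity check of whichever version gets used, a unittest with a few numbers whose classification is known works well. This is just a sketch assuming the classifyNumber() above (and the usual imports of std.algorithm, std.range and std.math):

// Minimal sanity check, assuming the classifyNumber() defined above.
unittest {
    assert(classifyNumber(8)  == NumberClass.deficient); // 1 + 2 + 4 = 7 < 8
    assert(classifyNumber(6)  == NumberClass.perfect);   // 1 + 2 + 3 = 6
    assert(classifyNumber(12) == NumberClass.abundant);  // 1 + 2 + 3 + 4 + 6 = 16 > 12
}

And if some run-time checking is acceptable, I believe std.conv.to can also do a checked integer-to-enum conversion, throwing for values that are not members of the enum, but that reintroduces the overhead you want to avoid.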