Well, I'm not sure it's only for "interpreter loops"; it seems applicable any time you switch on a finite range (e.g. an enum, or perhaps an 8-bit integer when all 256 values are covered).
So the question remains: is there any disadvantage to always turning this optimization on? (Which, as I mentioned, would only apply on backends that support it.)
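For concreteness, here's a minimal sketch of the shape I mean: a dispatch switch over a finite range (an 8-bit opcode), which happens to sit in an interpreter-style loop but needn't. All names here are made up for illustration, not taken from any real VM.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical bytecode: a switch over a finite range (0..255 opcodes).
 * This is the pattern the optimization would recognize. */
enum op { OP_PUSH = 0, OP_ADD = 1, OP_HALT = 2 };

static int run(const uint8_t *code, size_t len) {
    int stack[16];
    int sp = 0;
    for (size_t pc = 0; pc < len;) {
        switch (code[pc]) {          /* finite range: uint8_t */
        case OP_PUSH:                /* push the next byte as a value */
            stack[sp++] = code[pc + 1];
            pc += 2;
            break;
        case OP_ADD:                 /* pop two, push their sum */
            sp--;
            stack[sp - 1] += stack[sp];
            pc += 1;
            break;
        case OP_HALT:                /* stop and return top of stack */
        default:
            return stack[sp - 1];
        }
    }
    return stack[sp - 1];
}
```

Running `run` on the program `{OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT}` returns 5. The same switch-on-enum shape also shows up outside interpreters (state machines, protocol decoders), which is why limiting the optimization to "interpreter loops" seems too narrow.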
