On Thu, 28 Feb 2013 10:31:42 -0500, monarch_dodra <[email protected]> wrote:

On Thursday, 28 February 2013 at 14:31:08 UTC, Steven Schveighoffer wrote:
On Thu, 28 Feb 2013 06:23:45 -0500, deadalnix

I have to say I agree with deadalnix.

You essentially have this in lexer.c:

outer:
while(1)
{
   switch(r.front)
   {
      case 0:
        break outer;
      ...
   }
}

whereas with a range where empty is defined as r.front == 0:

while(!r.empty)  // inlined to r.front != 0
{
   switch(r.front) // why would another load occur here?
   {
      // no need to check for 0, already done
      ...
   }
}

If this doesn't translate to the same code, I don't know why not. [SNIP]


The difference is that by doing "!r.empty" first, you are actually executing "r.front == 0" each and every time, before doing the rest of the checks proper.

Doing it the other way around, however, you only actually check for the sentinel value once you've reached it, or once you've run out of values "of interest": that is, not on every element.

You are basically re-ordering the tests to speed up the usual case.
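A small C sketch of that reordering idea (hypothetical names, not taken from lexer.c): the usual-case test runs on every element, and the sentinel comparison is only reached in the cold path, after the hot test has already failed.

```c
#include <stddef.h>

/* Hypothetical predicate for the "usual case" characters. */
static int is_word(char c)
{
    return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z');
}

/* Scan a NUL-terminated buffer. The common-case test runs per
   element; the sentinel (0) is compared only once the hot test
   has failed. */
static size_t word_len(const char *p, int *hit_end)
{
    size_t n = 0;
    for (;;) {
        char c = p[n];
        if (is_word(c)) {        /* usual case: one test, no 0-compare */
            ++n;
            continue;
        }
        *hit_end = (c == 0);     /* sentinel checked here, not per element */
        return n;
    }
}
```

Same work in the common case, but the 0-compare is hoisted out of the per-element path.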

If you look at lexer.c, case 0 is the first test. How does the compiler know that's the least likely case to check? Even if it does know, sure, it could reorder the cases. But then it could also fold the while-loop test into the switch statement, as others have suggested, and we get the same result.

Besides, I don't think a jnz instruction is that expensive.

My impression from Walter's other posts is that the goal is to avoid the double load/double check, not to change the ordering.
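A sketch of that distinction in plain C over a NUL-terminated buffer (made-up function names): the while-test form reads and compares the element once for the emptiness check and again for the dispatch, while folding the terminator into the switch gives one load and one dispatch per element, with the 0 check as just another case. Both shapes compute the same thing.

```c
#include <stddef.h>

/* Double load/double check: the element is read for the emptiness
   test and again for the switch dispatch. */
static size_t count_digits_two_checks(const char *p)
{
    size_t n = 0;
    while (*p != 0) {            /* load + compare against 0 */
        switch (*p) {            /* load again for the dispatch */
        case '0': case '1': case '2': case '3': case '4':
        case '5': case '6': case '7': case '8': case '9':
            ++n;
            break;
        default:
            break;
        }
        ++p;
    }
    return n;
}

/* Terminator folded into the switch: one load per element, and the
   "empty" test is just another case. */
static size_t count_digits_one_check(const char *p)
{
    size_t n = 0;
    for (;;) {
        switch (*p++) {          /* single load per element */
        case 0:
            return n;            /* the loop test lives here */
        case '0': case '1': case '2': case '3': case '4':
        case '5': case '6': case '7': case '8': case '9':
            ++n;
            break;
        default:
            break;
        }
    }
}
```

Whether the first form actually emits two loads depends on the optimizer; the second form removes the question entirely.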

-Steve
