From: "Bill Fairchild" <[email protected]>
Sent: Thursday, 26 August 2010 1:07 AM

I don't remember if the OP specified in what respect the
suggested implementation should be efficient.
 It could be efficient in terms of elapsed run time,
That's the usual meaning.

CPU time used, average virtual storage working set size,
average real storage working set size,
or programmer development time (presumably the OP).

That's irrelevant, as the OP was looking for a more-efficient method.
If he wanted to minimise his own effort, he wouldn't
rewrite anything at all.

 The suggestion I made, for the 512MB bit string, maximizes the
efficiency of the programmer developing the scheme.

That's not relevant.  In any case, what minimises the programmer's
time typically produces the most inefficient algorithm.

 Other suggestions seen so far emphasize some of the
other four goals listed above.  I personally am far more concerned
these days with my own development time than I am with execution time 
efficiency.
 45 years ago, the reverse was true.  The only exception now
is when my piece of code must execute a million or more times per second
with interrupts disabled, holding a lock, or in some other exotic state.

The bit string scheme will require 512 MB of virtual storage, but far less
than that in real storage backing the virtual storage working set, as the OP
has himself indicated that the total number of input items, each of which
could cause a separate page fault and short term page fixing, is far smaller
than 128K, which is the number of 4K pages in 512 MB.
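
For what it's worth, a minimal sketch of that bit string idea in C (not the
OP's actual code).  It assumes the items are 32-bit unsigned keys read from
stdin, which the original post doesn't say; the 2^32-bit (512 MB) map comes
from calloc as zero-filled, demand-paged storage, so only the pages the keys
actually touch ever need real storage behind them:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MAP_BYTES ((size_t)1 << 29)   /* 2^32 bits = 2^29 bytes = 512 MB */

    int main(void)
    {
        /* Zero-filled, demand-paged: pages are only backed when touched. */
        unsigned char *map = calloc(MAP_BYTES, 1);
        if (map == NULL) {
            perror("calloc");
            return 1;
        }

        uint32_t key;
        while (scanf("%" SCNu32, &key) == 1) {   /* hypothetical input format */
            size_t   byte = key >> 3;            /* which byte holds the bit  */
            unsigned bit  = 1u << (key & 7);     /* which bit within the byte */

            if (map[byte] & bit)
                printf("%" PRIu32 " seen before\n", key);
            else
                map[byte] |= bit;                /* mark key as seen */
        }

        free(map);
        return 0;
    }

Each distinct key touches exactly one byte of the map, so the real storage
working set is bounded by the number of distinct 4K pages the keys land on,
not by the 512 MB of virtual storage.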

Maximising page faults is scarcely going to let the program
run faster than the original, which probably didn't produce any at all.
