On Thu, 07 Oct 2010 19:07:55 -0400, Rainer Deyke <rain...@eldwood.com> wrote:

On 10/7/2010 14:33, Steven Schveighoffer wrote:
On Thu, 07 Oct 2010 16:23:47 -0400, Rainer Deyke <rain...@eldwood.com>
wrote:
I can't say I've ever cared about the big-O complexity of an operation.

Then you don't understand how important it is.

Let me rephrase that.  I care about performance.  Big-O complexity can
obviously have a significant effect on performance, so I do care
about it, but only to the extent that it affects performance.  Low big-O
complexity is a means to an end, not a goal in and of itself.  If 'n' is
low enough, then a O(2**n) algorithm may well be faster than an O(1)
algorithm.

You'd be surprised at how important it is. I remember competing on TopCoder when they had just added C++ as a possible language, and wondering "how are all the Java guys even going to compete with C++?" But big-O complexity always mattered far more than the performance difference between native and JVM'd code.

The thing about big-O complexity is that it gives you a good idea of how well your library will perform under defined circumstances. And libraries must be completely aware and rigid about it, or else you have situations where things are nice and easy syntax-wise but perform horribly when actually used. What you end up with when libraries don't care about it are "mysterious" slowdowns, or cases where people just start blaming the language for being so slow.

Imagine if a database backend developer said "you know, who cares about big-O complexity, I think linear performance is good enough, most people have small databases anyway." Who would ever use that backend? Phobos is in the same position: it needs to strive for the best performance it can guarantee.


I also believe that, in the absence of a sophisticated system that
actually verifies performance guarantees, the language and standard
library should trust the programmer to know what he is doing.  The
standard library should only provide transitive performance guarantees,
e.g. this algorithm calls function 'f' 'n' times, so the algorithm's
performance is O(n * complexity(f)).  If 'f' runs in constant time, the
algorithm runs in linear time.  If 'f' runs in exponential time, the
algorithm still runs.
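
To make that transitive guarantee concrete, here is a small D illustration (the function name countIf is mine, not anything from Phobos):

size_t countIf(alias pred, R)(R range)
{
    // Calls 'pred' exactly once per element, so the complexity is
    // O(n * complexity(pred)): O(n) for a constant-time predicate,
    // O(n^2) if the predicate is itself linear, and so on.
    size_t hits = 0;
    foreach (e; range)
        if (pred(e))
            ++hits;
    return hits;
}

unittest
{
    assert(countIf!(x => x % 2 == 0)([1, 2, 3, 4, 5]) == 2);
}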

big O complexity is very important when you are writing libraries.  Not
so much when you are writing applications -- if you can live with it in
your application, then fine.  But Phobos should not have these problems
for people who *do* care.

What I'd suggest is to write your own function that uses 'in' when
possible and 'find' when not possible.  Then use that in your code.

The issue is that algorithms may use 'in' internally, so I may have to
rewrite large parts of Phobos.  (And the issue isn't about 'in'
specifically, but complexity guarantees in general.)

You have two options: define opIn how you want if you don't care about complexity guarantees (not recommended), or define a wrapper function that uses the best available option, which your code can use.
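
A minimal sketch of that wrapper option (the name 'contains' and the fallback to std.algorithm's canFind are my own choices, not something Phobos prescribes):

import std.algorithm : canFind;

bool contains(C, K)(C container, K key)
{
    // If the container defines 'in' (expected to be O(lg n) or better), use it.
    static if (is(typeof(key in container)))
        return (key in container) ? true : false;
    // Otherwise fall back to a linear O(n) search.
    else
        return container.canFind(key);
}

unittest
{
    int[string] aa = ["apple" : 1];
    assert(contains(aa, "apple"));    // associative array: uses 'in'
    assert(contains([1, 2, 3], 2));   // dynamic array: linear fallback
    assert(!contains([1, 2, 3], 9));
}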

Despite Phobos defining opIn to require O(lg n) or better complexity, it does not restrict you from defining opIn however you want (even on arrays if you wish). I personally find the tools Phobos provides completely adequate for the tasks you would need to do.
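
For example, nothing stops a wrapper type around an array from supplying its own 'in' with no complexity promise at all. A rough sketch (the type and its linear search are purely my illustration; in current D the operator is spelled opBinaryRight!"in", the older spelling was opIn_r):

struct LinearSet(T)
{
    T[] items;

    // Implements 'key in set'; here it is a deliberate O(n) linear scan.
    bool opBinaryRight(string op : "in")(T key) const
    {
        foreach (item; items)
            if (item == key)
                return true;
        return false;
    }
}

unittest
{
    auto s = LinearSet!int([1, 2, 3]);
    assert(2 in s);
    assert(5 !in s);
}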

-Steve
