In article <[EMAIL PROTECTED]>,
        Eugene van der Pijll <[EMAIL PROTECTED]> writes:
> En op 03 juli 2002 sprak Steffen Mueller:
> <snip>
>> This really isn't a new rule, I think. The rules never stated that your
>> entry isn't supposed to work on a very large number of nodes. In fact, we
>> deliberately did not limit the number of nodes. Hence, we cannot allow
>> hardcoded limits.
> 
> What do you mean by hardcoded limits? Code like 'if($x++>100){exit}'?
> Would 'if($x++>$^T){exit}' be OK? Would using the stash size as a limit
> be OK, as someone did in the Cantor/Kolakoski match?
> 

The way I normally interpret it is roughly like this (this is only my
opinion; I'm not a referee here, and for all I know they completely
disagree):

Suppose you use some algorithm that duplicates the input in 10 arrays.
One array element is something like 40 bytes, so that's roughly 400
bytes per input element. When you run it on a machine with 2**32 =
4*10**9 bytes, that means you can process roughly 10**7 elements. The
rules say you have "ample" memory to play with, so it's OK to hardcode
a limit of 10**7. And what's more, if this 10-array method is
"reasonable", other people may also hardcode 10**7, even if their
program could in principle handle more, when using 10**7 allows them
to shave one stroke.
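A minimal sketch of what such a hardcoded limit looks like in practice
(the 1e7 and the loop body are my own illustration, not taken from any
actual entry):

```perl
# Read input, assuming at most ~1e7 elements, as justified by the
# memory estimate above; past that we simply stop reading.
my @elements;
while ( my $line = <STDIN> ) {
    last if @elements >= 1e7;    # the hardcoded "ample memory" limit
    chomp $line;
    push @elements, $line;
}
```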

Sure, these are not exact rules, but they give the kind of thinking I
consider reasonable, and that will in most cases not lead to disputes.

It becomes more interesting when you can solve the problem in O(n) memory
but also in O(n**2) memory. Should the O(n**2) method be considered OK?
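As a toy contrast (my own construction, nothing to do with the actual
Cantor/Kolakoski entries): finding duplicate lines with one hash is O(n)
memory, while comparing every pair is O(n**2) time, and O(n**2) memory
too if you actually materialize the pair list:

```perl
my @input = qw(a b a c b a);

# O(n) memory: one pass, one hash; grep keeps 2nd and later occurrences
my %seen;
my @dups_linear = grep { $seen{$_}++ } @input;

# O(n**2): build every index pair (quadratic memory when materialized
# like this), then keep one entry per matching pair
my @pairs = map { my $i = $_; map { [ $i, $_ ] } $i + 1 .. $#input }
            0 .. $#input;
my @dups_quadratic = map  { $input[ $_->[0] ] }
                     grep { $input[ $_->[0] ] eq $input[ $_->[1] ] }
                     @pairs;
```

Both are "basically correct"; the question is only whether the referee
lets the quadratic one live.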

Both cases came up in the Christmas IRCNet golf. There I allowed $^T as
a limit, since it tops the number of expected input elements, and
rejected solutions that were O(n**2) in memory. Sure, both choices are
somewhat arbitrary, but you have to do *something*. This is the kind of
thing referees are for.
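The $^T trick works because $^T is the time the script started, in
epoch seconds: a number around 10**9 these days, so it tops any
reasonable input size without being a literal hardcoded constant. A
sketch (the surrounding loop is my illustration):

```perl
# $^T: epoch seconds at script start, roughly 1e9 -- far larger
# than any sane input size, yet only three strokes long.
my $count = 0;
while ( my $line = <STDIN> ) {
    last if ++$count > $^T;    # effectively "no limit" in practice
    # ... process $line ...
}
```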

In the la.pm 1 golf another interesting case came up: a program that is
basically correct but will not finish before the solar system is dead.
That's clearly a bit long, so in the current rules for the IRCNet golfs
(http://www.xs4all.nl/~thospel/golf/rules.html) I have:

"The program is expected to finish in a reasonable time."

("expected" is in there for programs based on rand that are not
guaranteed to finish, but whose expected finishing time is reasonable.)
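A toy example of the kind of rand-based program the word "expected" is
meant to cover (my own construction, not from any actual golf): the
loop has no guaranteed bound, but its expected running time is small:

```perl
# Roll a six-sided die until a six comes up.  Nothing guarantees
# this ever terminates, but the expected number of rolls is 6, so
# the expected finishing time is perfectly reasonable.
sub rolls_until_six {
    my $rolls = 1;
    $rolls++ while int( rand 6 ) != 5;    # faces 0..5; 5 plays "six"
    return $rolls;
}
```

The same argument applies to any Las Vegas style algorithm: the worst
case is unbounded, but the expectation is what the rule measures.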

The formulation is intentionally vague, to give the referees leeway and
to avoid depending on computer speeds.
Finishing in a week on average hardware would probably be rejected;
finishing in a day *may* be OK.

In this case the referees decided to go for "rather fast". It's up to
them to decide exactly what they use as limits. But as usual I think the
spirit of the rule is more important than the EXACT value. If the
program deteriorates very badly with input size (that is: a *lot* worse
than the time complexity of other, not insanely complicated,
algorithms), I'd probably reject it if I were a referee.
(And this does not mean rejecting everything that doesn't run in
O(inverse(Ackermann)) time. That would probably reject almost everybody.)
