> My point is not about giving or not giving multiple attempts for the large
> set. It is about "if we give multiple chances to correct logical flaws,
> we should also give multiple attempts to correct efficiency flaws".
I don't agree. Finding all logical flaws without testing is very, very hard, so they don't expect us to do that (though the large set may still contain boundary cases that the small set didn't). But properly estimating the efficiency of your algorithm is something you should definitely be able to do before you run a single test. Is your algorithm O(log n) or O(n^2)? You should know that before downloading the large dataset. In fact, you should know it before you write a single line of code. That's what they're judging you on by only allowing one attempt at the large set.
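As a rough illustration of that kind of back-of-the-envelope check (my own sketch, not from the thread -- the operations-per-second budget, time limit, and dataset size are all assumed numbers, not actual contest limits):

```python
import math

# Assumed budget: ~10^8 simple operations per second and a one-minute
# time limit. Both figures are guesses for illustration only.
OPS_PER_SECOND = 10**8
TIME_LIMIT_S = 60

def feasible(ops):
    """Return True if `ops` operations fit in the assumed time budget."""
    return ops <= OPS_PER_SECOND * TIME_LIMIT_S

n = 10**6  # hypothetical size of the large dataset

for name, cost in [("O(log n)", math.log2(n)),
                   ("O(n)", n),
                   ("O(n^2)", n * n)]:
    verdict = "fine" if feasible(cost) else "too slow"
    print(f"{name}: ~{cost:.0f} ops -> {verdict}")
```

The point is that this estimate needs no test data at all: before writing any code, you can already tell that a quadratic algorithm on a million-element input is hopeless, while a linear or logarithmic one is comfortably within budget.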
