A: Yeah, this is my first attempt at "speed coding", and I'm quickly 
learning that there are a lot of useful techniques here that one wouldn't 
necessarily go straight to in more mundane circumstances.

B: By backwards chaining, I just meant starting the search at the end 
square (of which there is exactly one) rather than at the "start" squares 
(the set of squares reachable before the tide starts falling).  I was 
thinking that might somehow make the search simpler or faster, though I'm 
not sure it really makes much sense now.


On Saturday, May 5, 2012 7:07:28 PM UTC-5, [email protected] wrote:
>
> The "leveling" idea for A is a good approach.  As you noted, if there 
> are high scores that "stick out" it doesn't work since we can't 
> subtract points from them, but in this case you may assume they get 0 
> extra points, and adjust the "water level" accordingly for the others. 
>
> Though this surely would take some time to come up with compared to the 
> binary search, which is obvious once you've seen enough of this kind of 
> problem (I certainly couldn't do it in 4 mins though). 
>
> For B, I'm not sure what you mean by "chain backwards".  You are right 
> that a standard search is all that you needed; I pretty much did 
> Dijkstra on the implicit graph.  I think the bounds were small enough 
> that you could afford to explicitly simulate the water draining (steps 
> of 0.1s), but you don't need to (my solution would work for quite 
> large ceiling, floor, and initial water height values). 
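
"Dijkstra on the implicit graph" is easy to sketch generically; here's a
minimal Python version where the nodes are generated on the fly by a
neighbors() callback.  (The toy 4x4 grid and unit costs below are
stand-ins of my own invention, not the contest's actual tide mechanics.)

```python
import heapq

def dijkstra(start, goal, neighbors):
    """Dijkstra over an implicit graph: nodes are discovered lazily via
    neighbors(node) -> iterable of (next_node, edge_cost) pairs."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a cheaper path was found already
        for nxt, cost in neighbors(node):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return None  # goal unreachable

# Toy usage: 4-connected unit-cost 4x4 grid with two blocked squares.
walls = {(1, 1), (1, 2)}
def neighbors(p):
    x, y = p
    for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= nx < 4 and 0 <= ny < 4 and (nx, ny) not in walls:
            yield (nx, ny), 1

print(dijkstra((0, 0), (3, 3), neighbors))  # -> 6
```

The nice part is that the grid never has to be materialized as an
explicit adjacency list; for B the edge costs would just be the
(possibly fractional) traversal times implied by the water level.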
>
> For C, I led myself down the garden path with the idea that we can get 
> rid of a bunch of numbers while still maintaining the property that 
> there is a solution, but I only got it down to 46 numbers and couldn't 
> see how to solve it efficiently.  I took a look at some others' 
> solutions and as you say, they seem to rely on the idea that a "small" 
> solution (where each subset doesn't include many numbers) exists, but 
> I'm not sure how to prove this is necessarily the case.  Perhaps for 
> each subset size k we could imagine trying to build a set of 500 
> numbers such that subsets of size k or less have distinct sums, then 
> show that the max number exceeds 10^12 for some small value of k... 
>
> Too bad I didn't just try it :) 
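
The counting side of that argument is at least easy to check: with 500
numbers each at most 10^12, there are more 6-element subsets than
possible 6-element sums, so by pigeonhole two size-6 subsets must share
a sum (while k = 5 is not enough by this counting alone):

```python
from math import comb

N, MAX = 500, 10**12

# Number of k-element subsets vs. the range of possible k-element sums.
for k in (5, 6):
    subsets = comb(N, k)     # k-element subsets of the 500 numbers
    sum_range = k * MAX      # every such sum lies in [0, k * MAX]
    print(k, subsets > sum_range)
# k=5: ~2.6e11 subsets vs sums up to 5e12 -> no collision forced
# k=6: ~2.1e13 subsets vs sums up to 6e12 -> collision guaranteed
```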
>
> On May 5, 5:08 pm, tutufan <[email protected]> wrote: 
> > The official analysis hasn't been posted (yet).  Sometimes they don't 
> > post one, so I'm not sure whether we get one this time. 
> > 
> > In any case, I'm impatient and interested in any insights, so here's 
> > some half-baked speculation.  (Disclosure: I didn't successfully 
> > answer any of these, so all of this might be crap.) 
> > 
> > (spoilers, of course) 
> > 
> > For A, I initially conjectured that it was sufficient when looking at 
> > a contestant to consider only the "other" contestant having the 
> > lowest score, and to take enough votes from them to guarantee that 
> > they cannot beat us.  This is false, though, as input like "0 0 100" 
> > shows. 
> > 
> > The next idea was that maybe it's sufficient to claim 1/N of 2X total 
> > score, imagining a sort of global leveling (imagine a landscape filling 
> > with water).  That doesn't work either, as some high scorers are like 
> > "peaks" that will stick up, ruining the simplicity of this calculation. 
> > 
> > Later, I had the idea of imagining that I had X buckets of water (or 
> > whatever) to distribute, with the goal of ensuring that my candidate 
> > doesn't end up with less than anyone else.  So, for example, if we 
> > have "0 2 5 98", we will have 105 buckets to distribute (based on 
> > audience vote).  We first distribute two buckets to "0", to bring it 
> > up to the next level, 2.  Then we distribute three buckets to each of 
> > the first two, to bring them all up to five.  This leaves us with 97 
> > buckets, which we split across the first three, bringing them up to 
> > (5 + 97/3) each.  This indicates the proportion that my candidate 
> > must get not to lose.  (Corner case: with one candidate, the answer 
> > is 0.)  I think this might be correct, but didn't have time to code 
> > it. 
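
That bucket walkthrough is exactly a water-filling loop, and it fits in
a few lines.  Here's how I'd sketch it (the function name is mine, and
this only computes the final water level, not the full answer to A):

```python
def water_level(scores, budget):
    """Distribute `budget` extra points over the lowest scores so the
    filled group ends up at one common level (the rising-water picture).
    Returns that final level."""
    s = sorted(scores)
    prefix = 0
    for i, v in enumerate(s):
        prefix += v
        # If we level the first i+1 scores using the whole budget,
        # each ends at `level`; that's consistent only if the water
        # doesn't rise past the next "peak" s[i+1].
        level = (budget + prefix) / (i + 1)
        if i + 1 == len(s) or level <= s[i + 1]:
            return level

print(water_level([0, 2, 5, 98], 105))  # -> 5 + 97/3, about 37.33
```

On the "0 2 5 98" example this reproduces the hand computation: the
first three scores absorb all 105 buckets and level off at 5 + 97/3,
while 98 sticks out above the water.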
> > 
> > The #1-ranked solution to this problem was submitted in 4:03/4:34, 
> > and to me it uses a quite impressive insight.  The idea is that since 
> > we're trying to estimate a proportion, we have a reasonably easy way 
> > to check our estimates, and the result needs to be really close but 
> > not exact, we can simply run a binary search to converge on the 
> > answer.  (This also requires some sort of convexity property or 
> > something, which happens to hold here.)  Amazing.  Also amazing, or 
> > more like incomprehensible, is being able to read the problem, 
> > conceive the solution, type it, compile it, and go through the 
> > small/large submit cycles in under five minutes.  Is this the work of 
> > an AI?  :-) 
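
Binary-search-on-the-answer only needs a feasibility check, which is
what makes it so quick to code.  A generic sketch (the predicate here is
a toy stand-in; for A it would be "can my candidate avoid losing with
proportion x?", which must be monotone in x for this to work):

```python
def binary_search_answer(lo, hi, feasible, eps=1e-9):
    """Smallest x in [lo, hi] with feasible(x) True, assuming feasible
    is monotone: False ... False True ... True."""
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid  # mid works; try smaller
        else:
            lo = mid  # mid fails; need larger
    return hi

# Toy usage: smallest x >= 0 with x*x >= 2, i.e. sqrt(2).
root = binary_search_answer(0.0, 2.0, lambda x: x * x >= 2)
print(round(root, 6))  # -> 1.414214
```

Since the output only needs to be accurate to some tolerance, ~40-60
halvings suffice, and the per-step check is just a linear scan over the
scores.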
> > 
> > For B, this looks like a reasonably straightforward search.  We know 
> > there's no point in visiting any square more than once.  Search depth 
> > can be bounded by the best result found at any given point.  Perhaps 
> > A* or IDA* could be used to some benefit, but probably it's fast 
> > enough without.  It would be nice to figure out the set of initially 
> > reachable squares and then chain backwards from the end square, but I 
> > don't think this is possible, because of the need to count the clock 
> > forward. 
> > 
> > For C, I assumed that the small input case could be easily 
> > brute-forced, and that the large case could not, and required some 
> > sort of number theory insight.  (Why else would this problem be worth 
> > so many points?)  It would seem that the large case would take a 
> > spectacular amount of time to brute force (O(2^500)), but apparently 
> > this was not the case.  I believe the reason is that for the inputs 
> > given, a solution was always possible, and could be found within the 
> > allotted time (because the solution space was dense with solutions). 
> > Indeed, at least one solution doesn't even have any provision for 
> > printing "Impossible".  Needless to say, I'm slightly annoyed I 
> > didn't catch this.  So, I'm wondering: is this actually a provable 
> > property of sets of distinct whole numbers <= 10**12 having size 500, 
> > or was it just luck that these all had a quick solution?  (I also saw 
> > that a number of solvers made use of a random number generator--did 
> > this actually matter?) 
> > 
> > Mike
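
On the random-number-generator question: one plausible role is a
birthday-paradox collision search.  Since small-subset sums are so
densely packed, randomly sampling fixed-size subsets and hashing their
sums finds two equal-sum subsets very quickly.  A sketch, under my
reading that C asks for two different subsets with equal sums (the
subset size 6 and the toy data are my choices):

```python
import random

def find_equal_sum_subsets(nums, k=6, tries=10**6):
    """Sample random k-element subsets until two share a sum.  By the
    birthday paradox, collisions show up after roughly sqrt(sum range)
    samples, far sooner than any exhaustive search."""
    seen = {}  # sum -> one subset achieving it
    for _ in range(tries):
        sub = frozenset(random.sample(nums, k))
        s = sum(sub)
        if s in seen and seen[s] != sub:
            return seen[s], sub
        seen[s] = sub
    return None  # no collision found within the budget

# Toy usage on small random data:
random.seed(1)
data = random.sample(range(1, 10**6), 500)
pair = find_equal_sum_subsets(data)
if pair:
    a, b = pair
    print(sum(a) == sum(b), a != b)
```

The two subsets may overlap; for the actual problem you'd strip the
common elements from both, which preserves the equal sums and yields
disjoint sets.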

-- 
You received this message because you are subscribed to the Google Groups 
"Google Code Jam" group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/google-code/-/JYwE5fSsuoIJ.