No, I implemented it the "correct" way (pre-allocating the entire vector with the correct size).
For the large test case, there can be up to 10^7 stacks per node. If you allocate two vector<long long>s, that is 16 bytes per stack, which means you'll definitely run out of memory, even with the smartest allocation strategy in the world.

-- David

On Tuesday, May 31, 2016 at 10:19:26 PM UTC-5, Stanislav Zholnin wrote:
> Caching was not a problem; caching was a good idea - you have to go through
> that array twice, so it's better to save it for future use. The problem was
> with the implementation. I did the caching myself, but I allocated it with
>
>     auto M = vector<long long>(N);
>
> where N is the number of stacks per node. Probably even faster is to skip the
> zero-initialization and only reserve the capacity:
>
>     auto M = vector<long long>();
>     M.reserve(N);
>
> The worst thing to do is repeated push_back()s. First, it is much slower -
> still O(n) overall, but with a larger constant. More importantly in this case,
> you lose control over the exact amount of memory the vector consumes. In your
> case the likely problem was that after pushing back the 8,388,609th item, the
> capacity of the vector doubled (as dynamic arrays do) and immediately jumped
> from 64 MB to 128 MB. That exceeds the memory limit, hence the RTE.
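As a rough sketch of the difference (the printed numbers assume an implementation that doubles capacity on reallocation, which is what the 64 MB -> 128 MB jump above corresponds to; the exact growth factor is implementation-defined):

    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main() {
        const std::size_t N = 10000000;  // ~10^7 stacks per node in the large input

        // Pre-sizing: a single allocation of exactly N * sizeof(long long) bytes.
        std::vector<long long> pre(N);
        std::printf("pre-sized: %zu elements (~%zu MiB)\n",
                    pre.capacity(), pre.capacity() * sizeof(long long) >> 20);

        // Growing with push_back: capacity jumps geometrically, so just past a
        // power of two the vector can hold roughly twice the memory it needs.
        std::vector<long long> grown;
        for (std::size_t i = 0; i < N; ++i) grown.push_back(i);
        std::printf("push_back: %zu elements (~%zu MiB)\n",
                    grown.capacity(), grown.capacity() * sizeof(long long) >> 20);
    }

On a doubling implementation, pushing the 8,388,609th long long reallocates to a 16,777,216-element buffer (128 MiB), and the old 64 MiB buffer is still alive while the elements are copied over, so the peak usage is briefly even higher than 128 MiB.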
