Pedro's question was not about whether the alphabet size should be considered 
constant or not. That's an entirely separate discussion. The question was how 
we arrive at O(N×√M) if we consider the alphabet size constant.

The idea is that for a specific word length, we can run over the input string 
once, and at each step compute a key for the current substring. The key 
consists of the first letter, the last letter, and the frequency table of the 
letters in the substring, and it can be updated in constant time as the window 
slides by one position. Since the total length of the words is M, there are at 
most O(√M) distinct word lengths (1 + 2 + ... + k ≤ M already forces 
k = O(√M)), so doing one O(N) scan per distinct length is where the O(N×√M) 
complexity comes from.
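
For concreteness, here is one possible sketch of such a key and its 
constant-time update (the names WindowKey, Encode, and Slide are invented 
for illustration; this is not the code from the gist linked below):

    #include <array>
    #include <string>

    // Key for the current window: first letter, last letter, and the
    // letter frequencies. Serializing it into a std::string makes it
    // directly usable as a hash-map key; the O(26) encoding cost is
    // constant for a fixed alphabet.
    struct WindowKey {
      char first, last;
      std::array<int, 26> freq{};

      std::string Encode() const {
        std::string key{first, last};
        for (int f : freq) { key += ','; key += std::to_string(f); }
        return key;
      }
    };

    // Slide a window of length L from position i to i+1, assuming
    // i + L < s.size(). Both frequency updates take O(1).
    void Slide(WindowKey &w, const std::string &s, int i, int L) {
      --w.freq[s[i] - 'a'];      // drop the letter leaving the window
      ++w.freq[s[i + L] - 'a'];  // add the letter entering it
      w.first = s[i + 1];
      w.last = s[i + L];
    }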

Now the question is what we do with the key at each step. I agree with Pedro 
that the analysis glosses over this important detail. We could have a map<key, 
list of words> and use our key to find the list of matched words at each step. 
If the map is implemented as a hash-map, then searching for a key takes O(1) 
time. However, the list of words could be O(√M) long; if we just iterated 
over it at every step, we'd end up with an O(N×M) solution. The trick is to 
remove the 
words the first time we encounter them. That way, the complexity is really 
bounded by O(M + N×√M).
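
To illustrate that trick, here is a sketch of the counting loop for one 
fixed word length L (again my own illustration, not the code from the gist 
below), where pending maps the serialized key of the not-yet-matched words 
of length L to how many such words there are:

    #include <string>
    #include <unordered_map>
    #include <vector>

    // Erasing a key on its first match guarantees every word is counted
    // at most once over the whole scan, so the work per word length is
    // bounded by O(N) plus the size of pending.
    long long CountMatches(
        const std::string &s, int L,
        std::unordered_map<std::string, long long> &pending) {
      int n = (int)s.size();
      if (n < L) return 0;
      std::vector<int> freq(26, 0);
      for (int i = 0; i < L; ++i) ++freq[s[i] - 'a'];
      long long matched = 0;
      for (int i = 0; i + L <= n; ++i) {
        // Serialize (first letter, last letter, frequencies); this is
        // O(1) for a fixed alphabet.
        std::string key{s[i], s[i + L - 1]};
        for (int f : freq) { key += ','; key += std::to_string(f); }
        auto it = pending.find(key);
        if (it != pending.end()) {
          matched += it->second;  // all words with this key match here
          pending.erase(it);      // and are never counted again
        }
        if (i + L < n) {          // slide the window one step right
          --freq[s[i] - 'a'];
          ++freq[s[i + L] - 'a'];
        }
      }
      return matched;
    }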

An example implementation is here: 
https://gist.github.com/maksverver/92b43b579edc0ee80bac7032e47bac23 (C++). In 
this solution I store a map<key, number of words>, since we only need to count 
the number of matches, not enumerate them. That's just a small simplification 
that doesn't change the time complexity at all.
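
Putting it together, a hypothetical driver (using CountMatches from the 
sketch above; again not the gist's actual code) could group the words by 
length, build one key-to-count map per distinct length, and scan the string 
once per length:

    // Assumes all words are non-empty and lowercase. With at most
    // O(√M) distinct word lengths, the total running time is
    // O(M + N×√M).
    long long CountAllMatches(const std::string &s,
                              const std::vector<std::string> &words) {
      std::unordered_map<int,
          std::unordered_map<std::string, long long>> by_length;
      for (const std::string &w : words) {
        std::vector<int> freq(26, 0);
        for (char c : w) ++freq[c - 'a'];
        std::string key{w.front(), w.back()};
        for (int f : freq) { key += ','; key += std::to_string(f); }
        ++by_length[(int)w.size()][key];
      }
      long long total = 0;
      for (auto &[length, pending] : by_length)
        total += CountMatches(s, length, pending);
      return total;
    }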
