On Thu, Jun 9, 2011 at 6:34 PM, Chandler Carruth <[email protected]> wrote:

> On Thu, Jun 9, 2011 at 6:18 PM, Kaelyn Uhrain <[email protected]> wrote:
>
>> Quick question about doing this: would it be more cost-effective to use a
>> std::multimap, so the loop can walk the namespaces in ascending order of
>> NNS length and exit once NNS length + edit distance is worse than the
>> best, or to use an llvm::SmallVector instead and always loop over all of
>> the namespaces, skipping the lookup when (NNS length + ED) is too long?
>
>
> Iterating over a std::map or std::multimap is rather slow. Why can't we
> just keep the SmallVector sorted by the NNS length?
>
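
(For concreteness, a rough sketch of the sorted-SmallVector + early-exit
loop being suggested; the struct and names here are only illustrative, not
the actual Sema code:)

#include "llvm/ADT/SmallVector.h"
#include <algorithm>

struct NamespaceEntry {
  unsigned NNSLength; // length of the qualifier needed to reach the namespace
  void *NNS;          // stand-in for NestedNameSpecifier*
  void *Ctx;          // stand-in for the DeclContext it designates
};

typedef llvm::SmallVector<NamespaceEntry, 16> NamespaceList;

// Comparator used to keep the vector sorted by qualifier length,
// e.g. via std::sort(Namespaces.begin(), Namespaces.end(), byNNSLength).
static bool byNNSLength(const NamespaceEntry &A, const NamespaceEntry &B) {
  return A.NNSLength < B.NNSLength;
}

// Because the list is sorted by NNS length, the loop can stop as soon as no
// remaining entry could beat the best (NNS length + edit distance) so far.
unsigned bestQuality(const NamespaceList &Namespaces, unsigned Best,
                     unsigned (*LookupED)(const NamespaceEntry &)) {
  for (NamespaceList::const_iterator I = Namespaces.begin(),
                                     E = Namespaces.end(); I != E; ++I) {
    if (I->NNSLength >= Best)
      break;                      // even a perfect match can't win now
    unsigned ED = LookupED(*I);   // the (expensive) lookup in that namespace
    if (I->NNSLength + ED < Best)
      Best = I->NNSLength + ED;
  }
  return Best;
}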

I had a feeling that was the case. The issue I'm having is with sorting the
NNSes by length: they vary with the context of the typo correction, and the
NNS, its length, and the DeclContext it refers to all have to stay
associated. Right now the list of known namespaces is a simple SmallPtrSet,
and the NNS and its length are computed on demand--between the point where a
symbol is successfully looked up in a given DeclContext and the point where
it is added with its qualifier to the TypoCorrectionConsumer--and cached in
a std::map for the duration of the call to CorrectTypo, so other identifiers
found in the same DeclContext can reuse them.

I'm not sure how the cost of computing the NestedNameSpecifiers for every
namespace is affected by PCH files, but I'm guessing it is smaller than what
is saved by skipping lookups in namespaces stored in PCH files.
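
To make that caching concrete, here is roughly its shape; the types below
are placeholders for illustration (the real code deals in DeclContext* and
NestedNameSpecifier*, and builds the qualifier from the correction's
context), not the actual implementation:

#include <map>
#include <string>
#include <utility>

typedef const void *DeclContextKey;               // stand-in for DeclContext*
typedef std::pair<std::string, unsigned> NNSInfo; // qualifier text + length

// Lives only for the duration of a single CorrectTypo call, so the cost of
// building a qualifier for a given DeclContext is paid at most once per call.
class SpecifierCache {
  std::map<DeclContextKey, NNSInfo> Cache;

public:
  const NNSInfo &get(DeclContextKey Ctx,
                     NNSInfo (*BuildNNS)(DeclContextKey)) {
    std::map<DeclContextKey, NNSInfo>::iterator I = Cache.find(Ctx);
    if (I == Cache.end())                       // compute on first use only
      I = Cache.insert(std::make_pair(Ctx, BuildNNS(Ctx))).first;
    return I->second;
  }
};
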
_______________________________________________
cfe-commits mailing list
[email protected]
http://lists.cs.uiuc.edu/mailman/listinfo/cfe-commits
