On Tuesday, 30 June 2015 at 21:17:13 UTC, H. S. Teoh wrote:
While investigating:

         https://issues.dlang.org/show_bug.cgi?id=4244

I found that the druntime function for computing the hash of static arrays (this also applies to dynamic arrays, btw) is horrendously slow: about 8-9 times slower than the equivalent operation on a POD struct of the same size.
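
For reference, the comparison can be reproduced with something along these lines (not the original benchmark; it assumes a reasonably recent Phobos, and the exact ratio will vary with machine and compiler flags):

import std.datetime.stopwatch : benchmark;
import std.stdio : writefln;

struct Pod { ubyte[16] data; }

ubyte[16] arr;
Pod pod;

void main()
{
    // Hash a static array and a POD struct of the same size a million
    // times each, going through the TypeInfo.getHash entry points.
    auto results = benchmark!(
        () => typeid(arr).getHash(&arr),
        () => typeid(pod).getHash(&pod)
    )(1_000_000);

    writefln("static array: %s, struct: %s", results[0], results[1]);
}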

The problem is caused by the call to hasCustomToHash() inside getArrayHash() in object.d. That call in turn invokes getElement(), which walks the TypeInfo chain until it reaches the first TypeInfo that is not an array or typedef, in order to determine whether the array elements have a custom toHash method. This walk is done *every single time* the array is hashed, even though the result never changes for a given array type.
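
The walk is roughly along these lines (a simplified sketch of the idea, not the exact druntime source; the real code handles a few more cases, such as dynamic arrays, vectors, class and AA elements):

bool elementHasCustomToHash(const TypeInfo value)
{
    // cast() strips const so the local reference can be rebound as we walk.
    TypeInfo element = cast() value;
    for (;;)
    {
        if (auto qualified = cast(TypeInfo_Const) element)
            element = qualified.base;
        else if (auto redefined = cast(TypeInfo_Enum) element)
            element = redefined.base;
        else if (auto staticArray = cast(TypeInfo_StaticArray) element)
            element = staticArray.value;
        else
            break;
    }
    // A struct element has a custom toHash iff its xtoHash pointer is set.
    if (auto s = cast(TypeInfo_Struct) element)
        return s.xtoHash !is null;
    return false;
}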

So I tried to modify getArrayHash() to cache this information in the TypeInfo, but ran into some roadblocks. Since TypeInfos are supposed to be const, storing the cached value there is illegal unless I cast away const, which is a rather dirty hack. The other problem is that the compiler hardcodes the size of each TypeInfo instance, so it will refuse to compile object.d anyway if TypeInfo is expanded with an extra field for caching the result of hasCustomToHash(). But since the compiler would have to be modified in that case anyway, my reaction was: why not have the compiler compute this value itself? It already has all the information needed, so we don't have to wait until runtime. The only drawback is adding more complexity to the compiler, which makes it harder for other efforts like SDC to implement D.
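
To make the roadblock concrete, here is a purely hypothetical sketch using a stand-in class (the real TypeInfo cannot grow such a field without the compiler change above); the cast in the middle is exactly the dirty hack in question:

class FakeTypeInfo // stand-in for the real TypeInfo
{
    int hasCustomToHashCache = -1; // -1 = not computed yet, 0 = no, 1 = yes
}

// Stand-in for the expensive walk that hasCustomToHash() performs today.
bool computeHasCustomToHash(const FakeTypeInfo ti)
{
    return false;
}

bool hasCustomToHashCached(const FakeTypeInfo ti)
{
    if (ti.hasCustomToHashCache == -1)
    {
        // The dirty hack: cast away const to store the computed answer.
        (cast(FakeTypeInfo) ti).hasCustomToHashCache =
            computeHasCustomToHash(ti) ? 1 : 0;
    }
    return ti.hasCustomToHashCache == 1;
}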

What do you guys think? Should the result of hasCustomToHash() be cached somehow in object.d? Or is caching a poor solution, and should we do something else?


T

This should not use typeinfo at all, IMO.

We have templates to do exactly that.
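
Something along these lines, as a minimal sketch (hashArray and the hash-combining scheme are illustrative, not actual druntime symbols; only hashOf is real):

size_t hashArray(T)(T[] arr)
{
    // Whether the element type has a custom toHash is settled at compile
    // time here, instead of walking TypeInfo on every call.
    static if (is(T == struct) && __traits(hasMember, T, "toHash"))
    {
        size_t h;
        foreach (ref e; arr)
            h = h * 33 + e.toHash();
        return h;
    }
    else
    {
        // Plain data: hash the elements directly.
        return hashOf(arr);
    }
}

struct S { int x; size_t toHash() const nothrow { return x * 7; } }

unittest
{
    assert(hashArray([S(1), S(2)]) == hashArray([S(1), S(2)])); // uses S.toHash
    assert(hashArray([1, 2, 3]) == hashArray([1, 2, 3]));       // falls back to hashOf
}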
