Tarman:

We're doing some "super computing" "big data" style stuff with D. We have a system where we're comparing associative arrays with billions of entries.

Built-in associative arrays were never tested with so many pairs, so run exhaustive performance benchmarks first, and report any performance or memory problems you hit in Bugzilla.
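
A minimal sketch of such a benchmark, assuming a synthetic integer-keyed AA as a stand-in for your real data (the count n is a placeholder to scale toward your actual entry counts):

    import std.stdio;
    import std.datetime.stopwatch : StopWatch, AutoStart;

    void main()
    {
        // Placeholder scale; push this toward your real counts
        // as memory allows.
        enum size_t n = 10_000_000;

        auto sw = StopWatch(AutoStart.yes);

        // Insertion benchmark.
        int[size_t] aa;
        foreach (i; 0 .. n)
            aa[i] = 0;
        writefln("insert %s pairs: %s ms", n, sw.peek.total!"msecs");

        // Lookup benchmark.
        sw.reset();
        size_t hits;
        foreach (i; 0 .. n)
            if (i in aa)
                hits++;
        writefln("lookup %s pairs: %s ms (%s hits)",
                 n, sw.peek.total!"msecs", hits);
    }

Peak memory is worth recording too (e.g. with /usr/bin/time -v or similar), since with billions of entries the GC and the AA's load factor matter as much as raw speed.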


We've tried copying the keys into a non-associative array, and sure, this works, but it is far, far less efficient than an equivalent C++ solution we wrote, where we use a std::unordered_set and can simply store the iterator.

Have you tried the simplest thing: letting std.parallelism chunk an AA.byKey.zip(AA.byValue)?
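
A minimal sketch of that idea, assuming the per-entry work is read-only (the AA literal and the work unit size are placeholders):

    import std.stdio;
    import std.range : zip;
    import std.parallelism : parallel;

    void main()
    {
        // Toy stand-in for an AA with billions of entries.
        double[string] aa = ["foo": 1.0, "bar": 2.0, "baz": 3.0];

        // byKey and byValue traverse the (unmodified) table in the
        // same order, so zip pairs each key with its own value.
        // The zipped range is not random-access, so pass an explicit
        // work unit size; 1000 is a guess to tune.
        foreach (pair; parallel(zip(aa.byKey, aa.byValue), 1000))
        {
            // Read-only per-entry work; mutating the AA while
            // iterating it in parallel is not safe.
            auto key = pair[0];
            auto value = pair[1];
            // ... compare/process key and value here ...
        }
    }

On newer compilers aa.byKeyValue gives you the key/value pairs in a single range, without the zip.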

Bye,
bearophile
