Hi,
On 11/16/2012 10:58 PM, François Dumont wrote:
We can see that inserting the same elements again, that is to say
detecting collisions, is slower in the new implementation. This is
the problem I had already flagged in the bugzilla entry. In the new
implementation when we need to look for
Attached patch applied.
2012-11-16 François Dumont
* include/bits/hashtable_policy.h (_Prime_rehash_policy): Remove
automatic shrink.
(_Prime_rehash_policy::_M_bkt_for_elements): Do not call
_M_next_bkt anymore.
(_Prime_rehash_policy::_M_next_bkt): Move usage of
_S_gro
Hi,
On 11/14/2012 10:27 PM, François Dumont wrote:
We do not cache if the following conditions are all met:
- the key type is an integral type
- the hash functor is empty and not final
- the hash functor doesn't throw
Can somebody remind me why *exactly* we have a condition having to do
with the emptiness of t
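The three conditions above could be expressed as a compile-time trait. This is a hypothetical sketch (the name `__cache_hash_code` and the exact detection are assumptions, not the actual libstdc++ machinery): cache the hash code unless the key is integral, the hash functor is an empty non-final class, and the hash invocation cannot throw.

```cpp
#include <functional>
#include <string>
#include <type_traits>
#include <utility>

// Hypothetical trait, not the real libstdc++ internals: true means
// "cache the hash code in the node". We skip the cache only when all
// three conditions discussed in the thread hold.
template<typename _Key, typename _Hash>
struct __cache_hash_code
  : std::integral_constant<bool,
      !(std::is_integral<_Key>::value      // key is integral
        && std::is_empty<_Hash>::value     // functor is empty...
        && !std::is_final<_Hash>::value    // ...and not final (EBO-able)
        && noexcept(std::declval<const _Hash&>()(
             std::declval<const _Key&>())))> // hashing cannot throw
{ };
```

With libstdc++'s `std::hash<int>` (empty, non-final, noexcept) the trait yields `false` (no cache), while a `std::string` key yields `true` (cache), matching the rules quoted above.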
On 11/13/2012 11:53 PM, Paolo Carlini wrote:
Regarding performance, I have slightly evolved the 54075.cc test
proposed last time. It now checks performance with and without
caching of the hash code. The result is:
54075.cc std::unordered_set 30 Foo insertions
witho
Hi,
On 11/13/2012 10:40 PM, François Dumont wrote:
2012-11-13 François Dumont
* include/bits/hashtable_policy.h (_Prime_rehash_policy): Remove
automatic shrink.
(_Prime_rehash_policy::_M_bkt_for_elements): Do not call
_M_next_bkt anymore.
(_Prime_rehash_policy::_M_next_bkt
On 11/13/2012 11:53 PM, Paolo Carlini wrote:
To summarize, my intuitions are (again, leaving out the final
technicalities):
a- std::hash specializations for scalar types -> no cache
b- std::hash specialization for std::string (or maybe
everything else, for simplicity) -> cache
c
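Intuitions (a) and (b) could be condensed into a default-policy trait. The name `__cache_default_sketch` is hypothetical (libstdc++ has its own internal machinery); this just encodes "scalar keys: no cache, everything else: cache".

```cpp
#include <string>
#include <type_traits>

// Hypothetical encoding of intuitions (a) and (b) above: keys hashed by
// the scalar std::hash specializations default to no cache; any other
// key type (std::string included) defaults to caching the hash code.
template<typename _Key>
struct __cache_default_sketch
  : std::integral_constant<bool, !std::is_scalar<_Key>::value>
{ };
```

This leaves open point (c) and the "final technicalities" (a throwing or stateful user-provided hasher still has to force caching, as in the conditions listed earlier in the thread).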
Hi,
On 11/13/2012 10:40 PM, François Dumont wrote:
Here is the proposal to remove the shrinking feature from the hash policy. I
have also considered your remark regarding usage of lower_bound so
_M_bkt_for_elements doesn't call _M_next_bkt (calling lower_bound)
anymore. For 2 of the 3 calls it was onl
Here is the proposal to remove the shrinking feature from the hash policy.
I have also considered your remark regarding usage of lower_bound so
_M_bkt_for_elements doesn't call _M_next_bkt (calling lower_bound)
anymore. For 2 of the 3 calls it was only a source of redundant
lower_bound invocations,
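The division of labor described above can be sketched as follows. The struct name and body are illustrative (written from the patch description, not copied from hashtable_policy.h): `_M_bkt_for_elements` only converts an element count into a bucket-count estimate via the max load factor, and the round-up-to-prime step (the `lower_bound` over the prime table) stays in `_M_next_bkt` alone, so callers no longer pay for redundant `lower_bound` invocations.

```cpp
#include <cmath>
#include <cstddef>

// Illustrative sketch of the patched policy, not the real libstdc++ code.
struct _Prime_rehash_policy_sketch
{
  float _M_max_load_factor = 1.0f;

  // Minimum bucket count needed for __n elements; no prime lookup here.
  std::size_t
  _M_bkt_for_elements(std::size_t __n) const
  {
    return static_cast<std::size_t>(
      std::ceil(__n / static_cast<double>(_M_max_load_factor)));
  }
};
```

A caller that needs an actual prime bucket count then passes this estimate through `_M_next_bkt` exactly once, instead of paying for the prime-table search inside `_M_bkt_for_elements` as well.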
Attached patch applied to trunk and 4.7 branch.
2012-11-08 François Dumont
PR libstdc++/54075
* include/bits/hashtable.h (_Hashtable<>::rehash): Reset hash
policy state if no rehash.
* testsuite/23_containers/unordered_set/modifiers/reserve.cc
(test02): New.
François
On
On 7 November 2012 22:02, François Dumont wrote:
>
> Ok to commit? If so, where?
That patch is OK for trunk and 4.7, thanks.
Here is the patch to fix the redundant rehash/reserve issue.
2012-11-07 François Dumont
PR libstdc++/54075
* include/bits/hashtable.h (_Hashtable<>::rehash): Reset hash
policy state if no rehash.
* testsuite/23_containers/unordered_set/modifiers/reserve.cc
(test02): New.