Re: [PATCH] libstdc++: hashtable: No need to update before begin node in _M_remove_bucket_begin

2024-01-21 Thread François Dumont
Thanks, nice results. I'll try to run the performance benchmarks that come
with libstdc++ to see if they spot anything.


Those are the tests in the testsuite/performance folder, in case you want to
give them a try yourself.


François


On 18/01/2024 10:26, Huanghui Nie wrote:


Yes, I have. I did a benchmark today.

The conclusion: time consumption is reduced by 0.4% ~ 1.2% for unordered_set
erase(begin()), and by 1.2% ~ 2.4% for erase(begin(), end()).



My test environment:

CPU: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz, 2393.365 MHz, 56 CPUs

MEM: 256G

OS: CentOS-8.2

g++: gcc version 8.3.1 20191121 (Red Hat 8.3.1-5) (GCC)

Compile flags: -O3 -std=c++17


Benchmark results (time per 100 million elements erased):

erase(begin()):

|size of unordered_set   |100      |1,000    |10,000   |100,000  |1,000,000|10,000,000|
|base time (ms)          |3827.736 |3807.725 |3830.168 |3807.373 |3798.713 |3854.168  |
|test time (ms)          |3783.406 |3789.460 |3791.146 |3778.033 |3783.494 |3808.137  |
|time reduction          |1.16%    |0.48%    |1.02%    |0.77%    |0.40%    |1.19%     |

erase(begin(),end()):

|size of unordered_set   |100      |1,000    |10,000   |100,000  |1,000,000|10,000,000|
|base time (ms)          |2779.229 |2768.550 |2795.778 |2767.385 |2761.521 |2804.099  |
|test time (ms)          |2712.759 |2726.578 |2752.224 |2732.140 |2718.953 |2739.727  |
|time reduction          |2.39%    |1.52%    |1.56%    |1.27%    |1.54%    |2.30%     |


Please see the attachment for test code and detailed test result.


On Thu, Jan 18, 2024 at 4:04, François Dumont wrote:

Hi

Looks like a great finding to me, this is indeed a useless check,
thanks!

Do you have any figures on the performance enhancement? It might
help to get proper approval, as GCC is currently in development
stage 4, which normally means only bug fixes.

François

On 17/01/2024 09:11, Huanghui Nie wrote:


Hi.

While implementing a hash table modeled on the C++ STL, I found
that when the STL hash table erases elements, if the first element
erased is the begin element, the before-begin node is assigned
redundantly. This creates unnecessary performance overhead.


First, let’s see the code implementation:

In _M_remove_bucket_begin, _M_before_begin._M_nxt is assigned
when &_M_before_begin == _M_buckets[__bkt]. That also means
_M_buckets[__bkt]->_M_nxt is assigned under some conditions.

_M_remove_bucket_begin is called by _M_erase and _M_extract_node:

 1. Case _M_erase a range: _M_remove_bucket_begin is called in a
for loop when __is_bucket_begin is true. And if
__is_bucket_begin is true and &_M_before_begin ==
_M_buckets[__bkt], __prev_n must be &_M_before_begin.
__prev_n->_M_nxt is always assigned in _M_erase. That means
_M_before_begin._M_nxt is always assigned, if
_M_remove_bucket_begin is called and &_M_before_begin ==
_M_buckets[__bkt]. So there’s no need to assign
_M_before_begin._M_nxt in _M_remove_bucket_begin.
 2. Other cases: _M_remove_bucket_begin is called when __prev_n
== _M_buckets[__bkt], and __prev_n->_M_nxt is always assigned
in _M_erase and _M_extract_node. That means
_M_buckets[__bkt]->_M_nxt is always assigned, so there's no
need to assign _M_buckets[__bkt]->_M_nxt in
_M_remove_bucket_begin.

In summary, there’s no need to check &_M_before_begin ==
_M_buckets[__bkt] and assign _M_before_begin._M_nxt in
_M_remove_bucket_begin.


Then let’s see the responsibility of each method:

The hash table in the C++ STL is composed of hash buckets and a
node list. The _M_erase and _M_extract_node methods are
responsible for updating the node list; the _M_remove_bucket_begin
method only needs to update the hash buckets. Updating
_M_before_begin is part of updating the node list, so
_M_remove_bucket_begin doesn't need to update _M_before_begin.


Existing tests listed below cover this change:

23_containers/unordered_set/allocator/copy.cc

23_containers/unordered_set/allocator/copy_assign.cc

23_containers/unordered_set/allocator/move.cc

23_containers/unordered_set/allocator/move_assign.cc

23_containers/unordered_set/allocator/swap.cc

23_containers/unordered_set/erase/1.cc

23_containers/unordered_set/erase/24061-set.cc

23_containers/unordered_set/modifiers/extract.cc

23_containers/unordered_set/operations/count.cc

23_containers/unordered_set/requirements/exception/basic.cc

23_containers/unordered_map/allocator/copy.cc

23_containers/unordered_map/allocator/copy_assign.cc

23_containers/unordered_map/allocator/move.cc

23_containers/unordered_map/allocator/move_assign.cc

23_containers/unordered_map/allocator/swap.cc

23_containers/unordered_map/erase/1.cc

23_containers/unordered_map/erase/24061-map.cc

23_containers/unordered_map/modifiers/extract.cc


Re: [PATCH] libstdc++: hashtable: No need to update before begin node in _M_remove_bucket_begin

2024-01-18 Thread Huanghui Nie
Yes, I have. I did a benchmark today.

The conclusion: time consumption is reduced by 0.4% ~ 1.2% for unordered_set
erase(begin()), and by 1.2% ~ 2.4% for erase(begin(), end()).


My test environment:

CPU: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz, 2393.365 MHz, 56 CPUs

MEM: 256G

OS: CentOS-8.2

g++: gcc version 8.3.1 20191121 (Red Hat 8.3.1-5) (GCC)

Compile flags: -O3 -std=c++17


Benchmark results (time per 100 million elements erased):

erase(begin()):

|size of unordered_set   |100      |1,000    |10,000   |100,000  |1,000,000|10,000,000|
|base time (ms)          |3827.736 |3807.725 |3830.168 |3807.373 |3798.713 |3854.168  |
|test time (ms)          |3783.406 |3789.460 |3791.146 |3778.033 |3783.494 |3808.137  |
|time reduction          |1.16%    |0.48%    |1.02%    |0.77%    |0.40%    |1.19%     |

erase(begin(),end()):

|size of unordered_set   |100      |1,000    |10,000   |100,000  |1,000,000|10,000,000|
|base time (ms)          |2779.229 |2768.550 |2795.778 |2767.385 |2761.521 |2804.099  |
|test time (ms)          |2712.759 |2726.578 |2752.224 |2732.140 |2718.953 |2739.727  |
|time reduction          |2.39%    |1.52%    |1.56%    |1.27%    |1.54%    |2.30%     |


Please see the attachment for test code and detailed test result.
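
For reference, a minimal sketch of the kind of timing loop used for the
erase(begin()) case is shown below (illustrative only: the table size, total
erase count, and output format are placeholders, not taken from the attached
test code):

// Illustrative timing loop for the erase(begin()) case; sizes and output
// below are placeholders, not the attached test code.
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <unordered_set>

int main()
{
  const std::size_t set_size = 100000;        // one of the tested table sizes
  const std::size_t total_erases = 10000000;  // total elements erased per run

  std::unordered_set<int> s;
  auto elapsed = std::chrono::steady_clock::duration::zero();

  for (std::size_t erased = 0; erased < total_erases; erased += set_size)
    {
      // Refill the set, then time erasing it one begin() element at a time.
      for (std::size_t i = 0; i < set_size; ++i)
        s.insert(static_cast<int>(i));

      auto t0 = std::chrono::steady_clock::now();
      while (!s.empty())
        s.erase(s.begin());                   // the operation being measured
      elapsed += std::chrono::steady_clock::now() - t0;
    }

  std::printf("erase(begin()): %.3f ms\n",
              std::chrono::duration<double, std::milli>(elapsed).count());
}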

On Thu, Jan 18, 2024 at 4:04, François Dumont wrote:

> Hi
>
> Looks like a great finding to me, this is indeed a useless check, thanks!
>
> Do you have any figures on the performance enhancement? It might help to get
> proper approval, as GCC is currently in development stage 4, which normally
> means only bug fixes.
>
> François
> On 17/01/2024 09:11, Huanghui Nie wrote:
>
> Hi.
>
> While implementing a hash table modeled on the C++ STL, I found that when
> the STL hash table erases elements, if the first element erased is the begin
> element, the before-begin node is assigned redundantly. This creates
> unnecessary performance overhead.
>
>
> First, let’s see the code implementation:
>
> In _M_remove_bucket_begin, _M_before_begin._M_nxt is assigned when
> &_M_before_begin == _M_buckets[__bkt]. That also means
> _M_buckets[__bkt]->_M_nxt is assigned under some conditions.
>
> _M_remove_bucket_begin is called by _M_erase and _M_extract_node:
>
>1. Case _M_erase a range: _M_remove_bucket_begin is called in a for
>loop when __is_bucket_begin is true. And if __is_bucket_begin is true and
>&_M_before_begin == _M_buckets[__bkt], __prev_n must be &_M_before_begin.
>__prev_n->_M_nxt is always assigned in _M_erase. That means
>_M_before_begin._M_nxt is always assigned, if _M_remove_bucket_begin is
>called and &_M_before_begin == _M_buckets[__bkt]. So there’s no need to
>assign _M_before_begin._M_nxt in _M_remove_bucket_begin.
>2. Other cases: _M_remove_bucket_begin is called when __prev_n ==
>_M_buckets[__bkt], and __prev_n->_M_nxt is always assigned in _M_erase and
>_M_extract_node. That means _M_buckets[__bkt]->_M_nxt is always assigned,
>so there's no need to assign _M_buckets[__bkt]->_M_nxt in
>_M_remove_bucket_begin.
>
> In summary, there’s no need to check &_M_before_begin == _M_buckets[__bkt]
> and assign _M_before_begin._M_nxt in _M_remove_bucket_begin.
>
>
> Then let’s see the responsibility of each method:
>
> The hash table in the C++ STL is composed of hash buckets and a node list.
> The _M_erase and _M_extract_node methods are responsible for updating the
> node list; the _M_remove_bucket_begin method only needs to update the hash
> buckets. Updating _M_before_begin is part of updating the node list, so
> _M_remove_bucket_begin doesn't need to update _M_before_begin.
>
>
> Existing tests listed below cover this change:
>
> 23_containers/unordered_set/allocator/copy.cc
>
> 23_containers/unordered_set/allocator/copy_assign.cc
>
> 23_containers/unordered_set/allocator/move.cc
>
> 23_containers/unordered_set/allocator/move_assign.cc
>
> 23_containers/unordered_set/allocator/swap.cc
>
> 23_containers/unordered_set/erase/1.cc
>
> 23_containers/unordered_set/erase/24061-set.cc
>
> 23_containers/unordered_set/modifiers/extract.cc
>
> 23_containers/unordered_set/operations/count.cc
>
> 23_containers/unordered_set/requirements/exception/basic.cc
>
> 23_containers/unordered_map/allocator/copy.cc
>
> 23_containers/unordered_map/allocator/copy_assign.cc
>
> 23_containers/unordered_map/allocator/move.cc
>
> 23_containers/unordered_map/allocator/move_assign.cc
>
> 23_containers/unordered_map/allocator/swap.cc
>
> 23_containers/unordered_map/erase/1.cc
>
> 23_containers/unordered_map/erase/24061-map.cc
>
> 23_containers/unordered_map/modifiers/extract.cc
>
> 23_containers/unordered_map/modifiers/move_assign.cc
>
> 23_containers/unordered_map/operations/count.cc
>
> 23_containers/unordered_map/requirements/exception/basic.cc
>
>
> Regression tested on x86_64-pc-linux-gnu. Is it OK to commit?
>
>
> ---
>
> ChangeLog:
>
>
> libstdc++: hashtable: No need to update before begin node in
> _M_remove_bucket_begin
>
>
> 2024-01-16  Huanghui 

Re: [PATCH] libstdc++: hashtable: No need to update before begin node in _M_remove_bucket_begin

2024-01-17 Thread François Dumont

Hi

Looks like a great finding to me, this is indeed a useless check, thanks!

Do you have any figures on the performance enhancement? It might help to
get proper approval, as GCC is currently in development stage 4, which
normally means only bug fixes.


François

On 17/01/2024 09:11, Huanghui Nie wrote:


Hi.

While implementing a hash table modeled on the C++ STL, I found that when
the STL hash table erases elements, if the first element erased is the
begin element, the before-begin node is assigned redundantly. This creates
unnecessary performance overhead.



First, let’s see the code implementation:

In _M_remove_bucket_begin, _M_before_begin._M_nxt is assigned when 
&_M_before_begin == _M_buckets[__bkt]. That also means 
_M_buckets[__bkt]->_M_nxt is assigned under some conditions.


_M_remove_bucket_begin is called by _M_erase and _M_extract_node:

 1. Case _M_erase a range: _M_remove_bucket_begin is called in a for
loop when __is_bucket_begin is true. And if __is_bucket_begin is
true and &_M_before_begin == _M_buckets[__bkt], __prev_n must be
&_M_before_begin. __prev_n->_M_nxt is always assigned in _M_erase.
That means _M_before_begin._M_nxt is always assigned, if
_M_remove_bucket_begin is called and &_M_before_begin ==
_M_buckets[__bkt]. So there’s no need to assign
_M_before_begin._M_nxt in _M_remove_bucket_begin.
 2. Other cases: _M_remove_bucket_begin is called when __prev_n ==
_M_buckets[__bkt], and __prev_n->_M_nxt is always assigned in
_M_erase and _M_extract_node. That means _M_buckets[__bkt]->_M_nxt
is always assigned, so there's no need to assign
_M_buckets[__bkt]->_M_nxt in _M_remove_bucket_begin.

In summary, there’s no need to check &_M_before_begin == 
_M_buckets[__bkt] and assign _M_before_begin._M_nxt in 
_M_remove_bucket_begin.



Then let’s see the responsibility of each method:

The hash table in the C++ STL is composed of hash buckets and a node
list. The _M_erase and _M_extract_node methods are responsible for
updating the node list; the _M_remove_bucket_begin method only needs to
update the hash buckets. Updating _M_before_begin is part of updating
the node list, so _M_remove_bucket_begin doesn't need to update
_M_before_begin.



Existing tests listed below cover this change:

23_containers/unordered_set/allocator/copy.cc

23_containers/unordered_set/allocator/copy_assign.cc

23_containers/unordered_set/allocator/move.cc

23_containers/unordered_set/allocator/move_assign.cc

23_containers/unordered_set/allocator/swap.cc

23_containers/unordered_set/erase/1.cc

23_containers/unordered_set/erase/24061-set.cc

23_containers/unordered_set/modifiers/extract.cc

23_containers/unordered_set/operations/count.cc

23_containers/unordered_set/requirements/exception/basic.cc

23_containers/unordered_map/allocator/copy.cc

23_containers/unordered_map/allocator/copy_assign.cc

23_containers/unordered_map/allocator/move.cc

23_containers/unordered_map/allocator/move_assign.cc

23_containers/unordered_map/allocator/swap.cc

23_containers/unordered_map/erase/1.cc

23_containers/unordered_map/erase/24061-map.cc

23_containers/unordered_map/modifiers/extract.cc

23_containers/unordered_map/modifiers/move_assign.cc

23_containers/unordered_map/operations/count.cc

23_containers/unordered_map/requirements/exception/basic.cc


Regression tested on x86_64-pc-linux-gnu. Is it OK to commit?


---

ChangeLog:


libstdc++: hashtable: No need to update before begin node in 
_M_remove_bucket_begin



2024-01-16  Huanghui Nie


gcc/

* libstdc++-v3/include/bits/hashtable.h


---


diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index b48610036fa..6056639e663 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -872,13 +872,10 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       if (!__next_n || __next_bkt != __bkt)
         {
           // Bucket is now empty
-          // First update next bucket if any
+          // Update next bucket if any
           if (__next_n)
             _M_buckets[__next_bkt] = _M_buckets[__bkt];
 
-          // Second update before begin node if necessary
-          if (&_M_before_begin == _M_buckets[__bkt])
-            _M_before_begin._M_nxt = __next_n;
           _M_buckets[__bkt] = nullptr;
         }
     }



Re: [PATCH] libstdc++: hashtable: No need to update before begin node in _M_remove_bucket_begin

2024-01-17 Thread Huanghui Nie
I'm sorry for CCing the gcc@ list. I'll wait for your review there.
Thanks.

On Wed, Jan 17, 2024 at 16:18, Jonathan Wakely wrote:

>
>
> On Wed, 17 Jan 2024, 08:14 Huanghui Nie via Gcc,  wrote:
>
>> Thanks. Done.
>>
>
> And don't CC the main gcc@ list, that's not for patch discussion. And if
> you CC the right list, you don't need to CC the individual maintainers.
>
> Anyway, it's on the right list now so we'll review it there, thanks.
>
>
>
>> On Wed, Jan 17, 2024 at 12:39, Sam James wrote:
>>
>> >
>> > Huanghui Nie  writes:
>> >
>> > > Hi.
>> >
>> > Please CC the libstdc++ ML for libstdc++ patches, per
>> >
>> >
>> https://gcc.gnu.org/onlinedocs/libstdc++/manual/appendix_contributing.html#list.patches
>> > .
>> >
>> > > [...]
>> >
>> >
>>
>


Re: [PATCH] libstdc++: hashtable: No need to update before begin node in _M_remove_bucket_begin

2024-01-17 Thread Jonathan Wakely
On Wed, 17 Jan 2024, 08:14 Huanghui Nie via Gcc,  wrote:

> Thanks. Done.
>

And don't CC the main gcc@ list, that's not for patch discussion. And if
you CC the right list, you don't need to CC the individual maintainers.

Anyway, it's on the right list now so we'll review it there, thanks.



> On Wed, Jan 17, 2024 at 12:39, Sam James wrote:
>
> >
> > Huanghui Nie  writes:
> >
> > > Hi.
> >
> > Please CC the libstdc++ ML for libstdc++ patches, per
> >
> >
> https://gcc.gnu.org/onlinedocs/libstdc++/manual/appendix_contributing.html#list.patches
> > .
> >
> > > [...]
> >
> >
>


Re: [PATCH] libstdc++: hashtable: No need to update before begin node in _M_remove_bucket_begin

2024-01-17 Thread Huanghui Nie
Thanks. Done.

On Wed, Jan 17, 2024 at 12:39, Sam James wrote:

>
> Huanghui Nie  writes:
>
> > Hi.
>
> Please CC the libstdc++ ML for libstdc++ patches, per
>
> https://gcc.gnu.org/onlinedocs/libstdc++/manual/appendix_contributing.html#list.patches
> .
>
> > [...]
>
>


[PATCH] libstdc++: hashtable: No need to update before begin node in _M_remove_bucket_begin

2024-01-17 Thread Huanghui Nie
Hi.

While implementing a hash table modeled on the C++ STL, I found that when
the STL hash table erases elements, if the first element erased is the begin
element, the before-begin node is assigned redundantly. This creates
unnecessary performance overhead.


First, let’s see the code implementation:

In _M_remove_bucket_begin, _M_before_begin._M_nxt is assigned when
&_M_before_begin == _M_buckets[__bkt]. That also means
_M_buckets[__bkt]->_M_nxt is assigned under some conditions.

_M_remove_bucket_begin is called by _M_erase and _M_extract_node:

   1. Case _M_erase a range: _M_remove_bucket_begin is called in a for loop
   when __is_bucket_begin is true. And if __is_bucket_begin is true and
   &_M_before_begin == _M_buckets[__bkt], __prev_n must be &_M_before_begin.
   __prev_n->_M_nxt is always assigned in _M_erase. That means
   _M_before_begin._M_nxt is always assigned, if _M_remove_bucket_begin is
   called and &_M_before_begin == _M_buckets[__bkt]. So there’s no need to
   assign _M_before_begin._M_nxt in _M_remove_bucket_begin.
   2. Other cases: _M_remove_bucket_begin is called when __prev_n ==
   _M_buckets[__bkt], and __prev_n->_M_nxt is always assigned in _M_erase and
   _M_extract_node. That means _M_buckets[__bkt]->_M_nxt is always assigned,
   so there's no need to assign _M_buckets[__bkt]->_M_nxt in
   _M_remove_bucket_begin.

In summary, there’s no need to check &_M_before_begin == _M_buckets[__bkt]
and assign _M_before_begin._M_nxt in _M_remove_bucket_begin.


Then let’s see the responsibility of each method:

The hash table in the C++ STL is composed of hash buckets and a node list.
The _M_erase and _M_extract_node methods are responsible for updating the
node list; the _M_remove_bucket_begin method only needs to update the hash
buckets. Updating _M_before_begin is part of updating the node list, so
_M_remove_bucket_begin doesn't need to update _M_before_begin.


Existing tests listed below cover this change:

23_containers/unordered_set/allocator/copy.cc

23_containers/unordered_set/allocator/copy_assign.cc

23_containers/unordered_set/allocator/move.cc

23_containers/unordered_set/allocator/move_assign.cc

23_containers/unordered_set/allocator/swap.cc

23_containers/unordered_set/erase/1.cc

23_containers/unordered_set/erase/24061-set.cc

23_containers/unordered_set/modifiers/extract.cc

23_containers/unordered_set/operations/count.cc

23_containers/unordered_set/requirements/exception/basic.cc

23_containers/unordered_map/allocator/copy.cc

23_containers/unordered_map/allocator/copy_assign.cc

23_containers/unordered_map/allocator/move.cc

23_containers/unordered_map/allocator/move_assign.cc

23_containers/unordered_map/allocator/swap.cc

23_containers/unordered_map/erase/1.cc

23_containers/unordered_map/erase/24061-map.cc

23_containers/unordered_map/modifiers/extract.cc

23_containers/unordered_map/modifiers/move_assign.cc

23_containers/unordered_map/operations/count.cc

23_containers/unordered_map/requirements/exception/basic.cc


Regression tested on x86_64-pc-linux-gnu. Is it OK to commit?


---

ChangeLog:


libstdc++: hashtable: No need to update before begin node in
_M_remove_bucket_begin


2024-01-16  Huanghui Nie  


gcc/

* libstdc++-v3/include/bits/hashtable.h


---


diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index b48610036fa..6056639e663 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -872,13 +872,10 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       if (!__next_n || __next_bkt != __bkt)
         {
           // Bucket is now empty
-          // First update next bucket if any
+          // Update next bucket if any
           if (__next_n)
             _M_buckets[__next_bkt] = _M_buckets[__bkt];
 
-          // Second update before begin node if necessary
-          if (&_M_before_begin == _M_buckets[__bkt])
-            _M_before_begin._M_nxt = __next_n;
           _M_buckets[__bkt] = nullptr;
         }
     }


Re: [PATCH] libstdc++: hashtable: No need to update before begin node in _M_remove_bucket_begin

2024-01-16 Thread Sam James


Huanghui Nie  writes:

> Hi.

Please CC the libstdc++ ML for libstdc++ patches, per
https://gcc.gnu.org/onlinedocs/libstdc++/manual/appendix_contributing.html#list.patches.

> [...]



[PATCH] libstdc++: hashtable: No need to update before begin node in _M_remove_bucket_begin

2024-01-16 Thread Huanghui Nie
Hi.

While implementing a hash table modeled on the C++ STL, I found that when
the STL hash table erases elements, if the first element erased is the begin
element, the before-begin node is assigned redundantly. This creates
unnecessary performance overhead.


First, let’s see the code implementation:

In _M_remove_bucket_begin, _M_before_begin._M_nxt is assigned when
&_M_before_begin == _M_buckets[__bkt]. That also means
_M_buckets[__bkt]->_M_nxt is assigned under some conditions.

_M_remove_bucket_begin is called by _M_erase and _M_extract_node:

   1. Case _M_erase a range: _M_remove_bucket_begin is called in a for loop
   when __is_bucket_begin is true. And if __is_bucket_begin is true and
   &_M_before_begin == _M_buckets[__bkt], __prev_n must be &_M_before_begin.
   __prev_n->_M_nxt is always assigned in _M_erase. That means
   _M_before_begin._M_nxt is always assigned, if _M_remove_bucket_begin is
   called and &_M_before_begin == _M_buckets[__bkt]. So there’s no need to
   assign _M_before_begin._M_nxt in _M_remove_bucket_begin.
   2. Other cases: _M_remove_bucket_begin is called when __prev_n ==
   _M_buckets[__bkt], and __prev_n->_M_nxt is always assigned in _M_erase and
   _M_extract_node. That means _M_buckets[__bkt]->_M_nxt is always assigned,
   so there's no need to assign _M_buckets[__bkt]->_M_nxt in
   _M_remove_bucket_begin.

In summary, there’s no need to check &_M_before_begin == _M_buckets[__bkt]
and assign _M_before_begin._M_nxt in _M_remove_bucket_begin.


Then let’s see the responsibility of each method:

The hash table in the C++ STL is composed of hash buckets and a node list.
The _M_erase and _M_extract_node methods are responsible for updating the
node list; the _M_remove_bucket_begin method only needs to update the hash
buckets. Updating _M_before_begin is part of updating the node list, so
_M_remove_bucket_begin doesn't need to update _M_before_begin.
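
To make this concrete, below is a deliberately simplified model of that layout
(illustrative only, not the actual libstdc++ code; NodeBase, Node, before_begin
and buckets are stand-ins for the real node types, _M_before_begin and
_M_buckets). It shows that when the erased node is the overall first one, the
node-list update that _M_erase already performs is exactly the store that the
removed branch in _M_remove_bucket_begin would repeat:

// Simplified model (illustrative only): a singly linked node list headed by
// a "before begin" sentinel, plus a bucket array whose entries point to the
// node *preceding* each bucket's first node.
#include <cassert>
#include <cstddef>
#include <vector>

struct NodeBase { NodeBase* nxt = nullptr; };
struct Node : NodeBase
{
  int value;
  std::size_t bkt;
  Node(int v, std::size_t b) : value(v), bkt(b) { }
};

int main()
{
  NodeBase before_begin;                      // stand-in for _M_before_begin
  Node a(1, 0), b(2, 0), c(3, 3);             // a, b in bucket 0; c in bucket 3
  before_begin.nxt = &a; a.nxt = &b; b.nxt = &c;

  std::vector<NodeBase*> buckets(4, nullptr); // stand-in for _M_buckets
  buckets[0] = &before_begin;                 // bucket of the overall first node
  buckets[3] = &b;                            // node preceding bucket 3's first node

  // Erase the overall first node 'a' (the erase(begin()) situation).
  NodeBase* prev = buckets[a.bkt];            // here this is &before_begin
  prev->nxt = a.nxt;                          // the list update _M_erase already does

  // The removed branch in _M_remove_bucket_begin,
  //   if (&before_begin == buckets[bkt]) before_begin.nxt = next;
  // would only repeat the assignment above.
  assert(before_begin.nxt == &b);
  return 0;
}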


Existing tests listed below cover this change:

23_containers/unordered_set/allocator/copy.cc

23_containers/unordered_set/allocator/copy_assign.cc

23_containers/unordered_set/allocator/move.cc

23_containers/unordered_set/allocator/move_assign.cc

23_containers/unordered_set/allocator/swap.cc

23_containers/unordered_set/erase/1.cc

23_containers/unordered_set/erase/24061-set.cc

23_containers/unordered_set/modifiers/extract.cc

23_containers/unordered_set/operations/count.cc

23_containers/unordered_set/requirements/exception/basic.cc

23_containers/unordered_map/allocator/copy.cc

23_containers/unordered_map/allocator/copy_assign.cc

23_containers/unordered_map/allocator/move.cc

23_containers/unordered_map/allocator/move_assign.cc

23_containers/unordered_map/allocator/swap.cc

23_containers/unordered_map/erase/1.cc

23_containers/unordered_map/erase/24061-map.cc

23_containers/unordered_map/modifiers/extract.cc

23_containers/unordered_map/modifiers/move_assign.cc

23_containers/unordered_map/operations/count.cc

23_containers/unordered_map/requirements/exception/basic.cc
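
(For illustration only, not one of the testsuite files above: a quick
hand-rolled smoke test of the two affected erase paths could look like this.)

// Illustrative smoke test of the two affected erase paths.
#include <cassert>
#include <unordered_set>

int main()
{
  std::unordered_set<int> s;
  for (int i = 0; i < 1000; ++i)
    s.insert(i);

  // erase(begin()) repeatedly: the begin element changes every time and the
  // remaining elements must stay reachable.
  for (int i = 0; i < 500; ++i)
    s.erase(s.begin());
  assert(s.size() == 500);

  // erase(begin(), end()): a range erase that starts at the begin element.
  s.erase(s.begin(), s.end());
  assert(s.empty() && s.begin() == s.end());
  return 0;
}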


Regression tested on x86_64-pc-linux-gnu. Is it OK to commit?


---


diff --git a/libstdc++-v3/include/bits/hashtable.h b/libstdc++-v3/include/bits/hashtable.h
index b48610036fa..6056639e663 100644
--- a/libstdc++-v3/include/bits/hashtable.h
+++ b/libstdc++-v3/include/bits/hashtable.h
@@ -872,13 +872,10 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       if (!__next_n || __next_bkt != __bkt)
         {
           // Bucket is now empty
-          // First update next bucket if any
+          // Update next bucket if any
           if (__next_n)
             _M_buckets[__next_bkt] = _M_buckets[__bkt];
 
-          // Second update before begin node if necessary
-          if (&_M_before_begin == _M_buckets[__bkt])
-            _M_before_begin._M_nxt = __next_n;
           _M_buckets[__bkt] = nullptr;
         }
     }


no-need-to-update-before-begin-node.diff
Description: Binary data