testsuite allocators patch

2014-06-20 Thread François Dumont

Hi

I would like to finally propose this patch before the one on 
_Rb_tree, as a separate one.


I have adopted the same evolution on the tracker_allocator, with even 
a perfect-forwarding constructor to allow its usage on top of the 
uneq_allocator, which takes a personality parameter. Doing so I realized 
that the move_assign_neg.cc tests were not accurate enough, as they needed 
a non-move-propagating allocator and the uneq_allocator was not 
explicitly non-propagating.
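
For reference, here is a minimal sketch (hypothetical code, not the 
testsuite one) of what an explicitly non-propagating allocator with a 
personality parameter looks like; the real tests use the adapted 
uneq_allocator/propagating_allocator instead:

#include <cstddef>
#include <type_traits>

template<typename T>
struct uneq_no_prop_allocator
{
  typedef T value_type;

  explicit uneq_no_prop_allocator(int __personality = 0)
  : personality(__personality) { }

  template<typename U>
    uneq_no_prop_allocator(const uneq_no_prop_allocator<U>& __o)
    : personality(__o.personality) { }

  T* allocate(std::size_t __n)
  { return static_cast<T*>(::operator new(__n * sizeof(T))); }

  void deallocate(T* __p, std::size_t) { ::operator delete(__p); }

  // The key point: explicitly non-propagating on move assignment, so a
  // container move assignment with unequal allocators must move elements
  // instead of stealing the storage.
  typedef std::false_type propagate_on_container_move_assignment;

  int personality;
};

template<typename T, typename U>
  bool operator==(const uneq_no_prop_allocator<T>& __l,
		  const uneq_no_prop_allocator<U>& __r)
  { return __l.personality == __r.personality; }

template<typename T, typename U>
  bool operator!=(const uneq_no_prop_allocator<T>& __l,
		  const uneq_no_prop_allocator<U>& __r)
  { return !(__l == __r); }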


2014-06-21  François Dumont  fdum...@gcc.gnu.org

* testsuite/util/testsuite_allocator.h
(tracker_allocator_counter::allocate): Remove new invocation, only
collect information.
(tracker_allocator_counter::deallocate): Remove delete invocation, only
collect information.
(check_inconsistent_alloc_value_type): New.
(tracker_allocator): Transform into a facade for any allocator type.
(uneq_allocator): Likewise.
(propagating_allocator): Likewise.
* testsuite/23_containers/forward_list/debug/move_assign_neg.cc: Use an
explicitly non propagating allocator.
* testsuite/23_containers/map/debug/move_assign_neg.cc: Likewise.
* testsuite/23_containers/multimap/debug/move_assign_neg.cc: Likewise.
* testsuite/23_containers/multiset/debug/move_assign_neg.cc: Likewise.
* testsuite/23_containers/set/debug/move_assign_neg.cc: Likewise.
* testsuite/23_containers/unordered_map/debug/move_assign_neg.cc:
Likewise.
* testsuite/23_containers/unordered_multimap/debug/move_assign_neg.cc:
Likewise.
* testsuite/23_containers/unordered_multiset/debug/move_assign_neg.cc:
Likewise.
* testsuite/23_containers/unordered_set/debug/move_assign_neg.cc:
Likewise.
* testsuite/23_containers/vector/debug/move_assign_neg.cc: Likewise.

Tested under Linux x86_64.

Ok to commit?

François



Re: testsuite allocators patch

2014-06-26 Thread François Dumont

On 26/06/2014 12:33, Jonathan Wakely wrote:


The _GLIBCXX_USE_NOEXCEPT macro expands to nothing in C++03 mode, so
you might as well omit it in the #else branch.

OK for trunk if you make the tracker_allocator comment correct.

Thanks!



Committed with:

  // An allocator facade that intercepts allocate/deallocate/construct/destroy
  // calls and tracks them through the tracker_allocator_counter class. This
  // class is templated on the target object type, but tracker isn't.
  template<typename T, typename Alloc = std::allocator<T> >
    class tracker_allocator : public Alloc
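
For illustration, the facade can now be stacked on an allocator taking 
constructor arguments (hypothetical usage, assuming the libstdc++ 
testsuite include paths):

#include <vector>
#include <testsuite_allocator.h>

int main()
{
  typedef __gnu_test::uneq_allocator<int> base_alloc;
  typedef __gnu_test::tracker_allocator<int, base_alloc> alloc_type;

  alloc_type a(42);                  // 42 forwarded to the uneq_allocator base
  std::vector<int, alloc_type> v(a);
  v.push_back(1);                    // recorded by tracker_allocator_counter
  return 0;
}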

Thanks for feedback.

François



Re: testsuite allocators patch

2014-06-27 Thread François Dumont

On 27/06/2014 21:48, Paolo Carlini wrote:

Hi,

On 06/27/2014 07:33 PM, Jonathan Wakely wrote:

I didn't see an obvious fix (I'm not sure if the templated constructor
can deduce its argument since the change) but have been out all day
and not had a chance to look into it.

Ok, thanks. I'm reverting the last two libstdc++-v3 commits.

Paolo.

It ran fine on my side, but maybe because of other modifications I have. 
I will revert those and reapply the patch to see what is wrong.


Sorry

François



Re: [patch] Simplify allocator use

2014-06-27 Thread François Dumont

On 26/06/2014 13:31, Jonathan Wakely wrote:

On 25/06/14 21:56 +0100, Jonathan Wakely wrote:

The other adds an RAII type to help manage pointers obtained from
allocators. The new type means I can remove several ugly try-catch
blocks that are all very similar in structure and have been bothering
me for some time. The new type also makes it trivial to support
allocators with fancy pointers, fixing long-standing (but not very
important) bugs in std::promise and std::shared_ptr.


This patch applies the __allocated_ptr type to hashtable_policy.h to
remove most explicit deallocation (yay!)  The buckets are still
allocated and deallocated manually, because __allocated_ptr only works
for allocations of single objects, not arrays.

As well as __allocated_ptr this change relies on two things:

1) the node type has a trivial destructor, so we don't actually need
  to call it, we can just reuse or release its storage.
  (See 3.8 [basic.life] p1)

2) allocator_traits::construct and allocator_traits::destroy can be
  used with an allocator that has a different value_type, so we don't
  need to create a rebound copy to destroy every element, we can just
  use the node-allocator.
  (See http://cplusplus.github.io/LWG/lwg-active.html#2218 which is
  Open, but I've discussed the issue with Howard, Pablo and others,
  and I think libc++ already relies on this assumption).
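
To make that concrete, here is a rough sketch of the RAII idea
(simplified and with hypothetical names, not the actual __allocated_ptr
class):

#include <memory>
#include <utility>

template<typename Alloc>
struct allocated_guard
{
  typedef std::allocator_traits<Alloc> traits;

  Alloc& alloc;
  typename traits::pointer ptr;

  allocated_guard(Alloc& __a, typename traits::pointer __p)
  : alloc(__a), ptr(__p) { }

  // Deallocates unless ownership was released, replacing a try/catch block.
  ~allocated_guard()
  { if (ptr) traits::deallocate(alloc, ptr, 1); }

  void release() { ptr = nullptr; }
};

template<typename Alloc, typename... Args>
  typename std::allocator_traits<Alloc>::pointer
  make_node(Alloc& __a, Args&&... __args)
  {
    typedef std::allocator_traits<Alloc> traits;
    typename traits::pointer __p = traits::allocate(__a, 1);
    allocated_guard<Alloc> __guard(__a, __p); // frees __p if construct throws
    traits::construct(__a, std::addressof(*__p),
		      std::forward<Args>(__args)...);
    __guard.release();                        // success: keep the storage
    return __p;
  }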

François, could you check it, and let me know if you see anything
wrong or have any comments?


That looks fine to me, nice simplification.

François



Re: testsuite allocators patch

2014-07-23 Thread François Dumont

On 27/06/2014 21:48, Paolo Carlini wrote:

Hi,

On 06/27/2014 07:33 PM, Jonathan Wakely wrote:

I didn't see an obvious fix (I'm not sure if the templated constructor
can deduce its argument since the change) but have been out all day
and not had a chance to look into it.

Ok, thanks. I'm reverting the last two libstdc++-v3 commits.

Paolo.



Hi

Back from vacation, ready to have this patch committed eventually.

Here is the new version with the missing default and copy constructor.

I have a small question regarding some code next to the one I am 
modifying in this patch. I can see lines like:


  propagating_allocator() noexcept = default;

When using a default implementation, shouldn't we let the compiler 
decide whether it should be noexcept or not, depending on the member 
fields or base-class default constructors?


Tested under Linux x86_64.

Ok to commit?

François

Index: testsuite/util/testsuite_allocator.h
===================================================================
--- testsuite/util/testsuite_allocator.h	(revision 212959)
+++ testsuite/util/testsuite_allocator.h	(working copy)
@@ -29,6 +29,7 @@
 #include <tr1/unordered_map>
 #include <bits/move.h>
 #include <ext/pointer.h>
+#include <ext/alloc_traits.h>
 #include <testsuite_hooks.h>
 
 namespace __gnu_test
@@ -38,26 +39,19 @@
   public:
     typedef std::size_t size_type;
 
-    static void*
+    static void
     allocate(size_type blocksize)
-    {
-      void* p = ::operator new(blocksize);
-      allocationCount_ += blocksize;
-      return p;
-    }
+    { allocationCount_ += blocksize; }
 
     static void
-    construct() { constructCount_++; }
+    construct() { ++constructCount_; }
 
     static void
-    destroy() { destructCount_++; }
+    destroy() { ++destructCount_; }
 
     static void
-    deallocate(void* p, size_type blocksize)
-    {
-      ::operator delete(p);
-      deallocationCount_ += blocksize;
-    }
+    deallocate(size_type blocksize)
+    { deallocationCount_ += blocksize; }
 
     static size_type
     get_allocation_count() { return allocationCount_; }
@@ -87,103 +81,142 @@
     static int destructCount_;
   };
 
-  // A simple basic allocator that just forwards to the
-  // tracker_allocator_counter to fulfill memory requests.  This class
-  // is templated on the target object type, but tracker isn't.
-  template<class T>
-  class tracker_allocator
-  {
-  private:
-    typedef tracker_allocator_counter counter_type;
+  // Helper to detect inconsistency between type used to instantiate an
+  // allocator and the underlying allocator value_type.
+  template<typename T, typename Alloc,
+	   typename = typename Alloc::value_type>
+    struct check_consistent_alloc_value_type;
 
-  public:
-    typedef T value_type;
-    typedef T* pointer;
-    typedef const T* const_pointer;
-    typedef T& reference;
-    typedef const T& const_reference;
-    typedef std::size_t size_type;
-    typedef std::ptrdiff_t difference_type;
-
-    template<class U> struct rebind { typedef tracker_allocator<U> other; };
-
-    pointer
-    address(reference value) const _GLIBCXX_NOEXCEPT
-    { return std::__addressof(value); }
+  template<typename T, typename Alloc>
+    struct check_consistent_alloc_value_type<T, Alloc, T>
+    { typedef T value_type; };
 
-    const_pointer
-    address(const_reference value) const _GLIBCXX_NOEXCEPT
-    { return std::__addressof(value); }
+  // An allocator facade that intercepts allocate/deallocate/construct/destroy
+  // calls and tracks them through the tracker_allocator_counter class. This
+  // class is templated on the target object type, but tracker isn't.
+  template<typename T, typename Alloc = std::allocator<T> >
+    class tracker_allocator : public Alloc
+    {
+    private:
+      typedef tracker_allocator_counter counter_type;
 
-    tracker_allocator() _GLIBCXX_USE_NOEXCEPT
-    { }
+      typedef __gnu_cxx::__alloc_traits<Alloc> AllocTraits;
 
-    tracker_allocator(const tracker_allocator&) _GLIBCXX_USE_NOEXCEPT
-    { }
+    public:
+      typedef typename
      check_consistent_alloc_value_type<T, Alloc>::value_type value_type;
+      typedef typename AllocTraits::pointer pointer;
+      typedef typename AllocTraits::size_type size_type;
 
-    template<class U>
-      tracker_allocator(const tracker_allocator<U>&) _GLIBCXX_USE_NOEXCEPT
+      template<class U>
+	struct rebind
+	{
+	  typedef tracker_allocator<U,
		typename AllocTraits::template rebind<U>::other> other;
+	};
+
+#if __cplusplus >= 201103L
+      tracker_allocator() = default;
+      tracker_allocator(const tracker_allocator&) = default;
+      tracker_allocator(tracker_allocator&&) = default;
+
+      // Perfect forwarding constructor.
+      template<typename... _Args>
+	tracker_allocator(_Args&&... __args)
+	  : Alloc(std::forward<_Args>(__args)...)
+	{ }
+#else
+      tracker_allocator()
       { }
 
-    ~tracker_allocator() _GLIBCXX_USE_NOEXCEPT
-    { 

Re: testsuite allocators patch

2014-07-24 Thread François Dumont

On 24/07/2014 10:55, Jonathan Wakely wrote:

On 23/07/14 22:33 +0200, François Dumont wrote:
   I have a small question regarding some code next to the one I am 
modifying in this patch. I can see lines like:


 propagating_allocator() noexcept = default;

   When using a default implementation, shouldn't we let the compiler 
decide whether it should be noexcept or not, depending on the member 
fields or base-class default constructors?


Stating it explicitly means you get an error if the default
implementation is not noexcept. That can be useful, to ensure you
don't silently start getting a throwing constructor by mistake because
of a change to a base class.
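
For instance (an illustration of that check under the C++11 rules, not
code from the patch):

struct MayThrow { MayThrow() { } };     // default constructor not noexcept

struct A : MayThrow
{
  A() noexcept = default;  // error: the implicitly defaulted constructor
                           // is not noexcept, so the mismatch is diagnosed
};

struct B : MayThrow
{
  B() = default;           // accepted, but silently noexcept(false)
};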

I'm not sure if I added the noexcept above, but if I did that might
have been what I was intending it to do. I don't remember.

I'll review the rest of the patch ASAP. Did you test it with no other
changes in your tree, and run the entire testsuite?


Ok, thanks for the explanation, it is clear now.

Yes, I have tested with no other changes in my tree and got only those 
pretty-printer errors, which I think are unrelated:


Python Exception <class 'TypeError'> iter() returned non-iterator of 
type '_contained':

$2 = std::experimental::optional<int> [no contained value]
skipping: Python Exception <class 'TypeError'> iter() returned 
non-iterator of type '_contained':

got: $2 = std::experimental::optional<int> [no contained value]
PASS: libstdc++-prettyprinters/libfundts.cc print o
Python Exception <class 'TypeError'> iter() returned non-iterator of 
type '_contained':

$3 = std::experimental::optional<bool>
skipping: Python Exception <class 'TypeError'> iter() returned 
non-iterator of type '_contained':

got: $3 = std::experimental::optional<bool>
FAIL: libstdc++-prettyprinters/libfundts.cc print ob
Python Exception <class 'TypeError'> iter() returned non-iterator of 
type '_contained':

$4 = std::experimental::optional<int>
skipping: Python Exception <class 'TypeError'> iter() returned 
non-iterator of type '_contained':

got: $4 = std::experimental::optional<int>
FAIL: libstdc++-prettyprinters/libfundts.cc print oi

François



Re: [patch] No allocation for empty unordered containers

2014-07-25 Thread François Dumont

Hi

I think I never got feedback regarding this patch proposal. Note 
that, if it is accepted, the doc will have to be updated regarding the 
default hint value.


Thanks


On 03/06/2014 22:44, François Dumont wrote:

Hi

Thanks to the single bucket introduced to make move semantics 
noexcept, we can also avoid some over-allocation. Here is a patch to 
avoid any allocation on default construction, on the range constructor 
when the range is empty, and on construction from an initializer list 
when that list is empty too. I had to change all default hint values to 0 
so that, if this value is used, the rehash policy's next-bucket function 
returns 1 bucket.
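
For illustration, the intent is that none of these constructions touch 
the allocator (a hypothetical check, not one of the new tests):

#include <unordered_set>
#include <vector>

int main()
{
  std::unordered_set<int> s1;             // default: no bucket allocation
  std::vector<int> empty;
  std::unordered_set<int> s2(empty.begin(), empty.end()); // empty range
  std::unordered_set<int> s3({});         // empty initializer list
  s1.insert(1);  // the single internal bucket is replaced only here
  return 0;
}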


I don't know if you had in mind to noexcept-qualify the default 
constructor, but it would mean having a real default constructor and 
another one to deal with the hint, which wouldn't match the Standard; 
so no noexcept qualification for the moment.


Tested under Linux x86_64, normal, debug and profile modes.


2014-06-03  François Dumont fdum...@gcc.gnu.org

* include/bits/hashtable.h: Make use of the internal single bucket to
limit allocation as long as container remains empty.
* include/bits/unordered_map.h: Set default bucket hint to 0 in
constructors to avoid allocation.
* include/bits/unordered_set.h: Likewise.
* include/debug/unordered_map: Likewise.
* include/debug/unordered_set: Likewise.
* include/profile/unordered_map: Likewise.
* include/profile/unordered_set: Likewise.
* src/c++11/hashtable_c++0x.cc (_Prime_rehash_policy::_M_next_bkt):
Return 1 for hint 0.
* testsuite/23_containers/unordered_map/allocator/
empty_instantiation.cc: New.
* testsuite/23_containers/unordered_multimap/allocator/
empty_instantiation.cc: New.
* testsuite/23_containers/unordered_set/allocator/
empty_instantiation.cc: New.
* testsuite/23_containers/unordered_multiset/allocator/
empty_instantiation.cc: New.

Ok to commit?

François




Index: include/bits/hashtable.h
===================================================================
--- include/bits/hashtable.h	(revision 211144)
+++ include/bits/hashtable.h	(working copy)
@@ -407,12 +407,12 @@
       // Use delegating constructors.
       explicit
       _Hashtable(const allocator_type& __a)
-      : _Hashtable(10, _H1(), _H2(), _Hash(), key_equal(),
+      : _Hashtable(0, _H1(), _H2(), _Hash(), key_equal(),
		   __key_extract(), __a)
       { }
 
       explicit
-      _Hashtable(size_type __n = 10,
+      _Hashtable(size_type __n = 0,
		 const _H1& __hf = _H1(),
		 const key_equal& __eql = key_equal(),
		 const allocator_type& __a = allocator_type())
@@ -792,14 +792,18 @@
	       const _Equal& __eq, const _ExtractKey& __exk,
	       const allocator_type& __a)
     : __hashtable_base(__exk, __h1, __h2, __h, __eq),
-      __map_base(),
-      __rehash_base(),
       __hashtable_alloc(__node_alloc_type(__a)),
+      _M_buckets(&_M_single_bucket),
+      _M_bucket_count(1),
       _M_element_count(0),
-      _M_rehash_policy()
+      _M_single_bucket(nullptr)
     {
-      _M_bucket_count = _M_rehash_policy._M_next_bkt(__bucket_hint);
-      _M_buckets = _M_allocate_buckets(_M_bucket_count);
+      auto __bkt_count = _M_rehash_policy._M_next_bkt(__bucket_hint);
+      if (_M_bucket_count != __bkt_count)
+	{
+	  _M_bucket_count = __bkt_count;
+	  _M_buckets = _M_allocate_buckets(_M_bucket_count);
+	}
     }
 
   template<typename _Key, typename _Value,
@@ -815,19 +819,24 @@
		 const _Equal& __eq, const _ExtractKey& __exk,
		 const allocator_type& __a)
       : __hashtable_base(__exk, __h1, __h2, __h, __eq),
-	__map_base(),
-	__rehash_base(),
	__hashtable_alloc(__node_alloc_type(__a)),
+	_M_buckets(&_M_single_bucket),
+	_M_bucket_count(1),
	_M_element_count(0),
-	_M_rehash_policy()
+	_M_single_bucket(nullptr)
       {
	auto __nb_elems = __detail::__distance_fw(__f, __l);
-	_M_bucket_count =
+	auto __bkt_count =
	  _M_rehash_policy._M_next_bkt(
	    std::max(_M_rehash_policy._M_bkt_for_elements(__nb_elems),
		     __bucket_hint));
+
+	if (_M_bucket_count != __bkt_count)
+	  {
+	    _M_bucket_count = __bkt_count;
+	    _M_buckets = _M_allocate_buckets(_M_bucket_count);
+	  }
 
-	_M_buckets = _M_allocate_buckets(_M_bucket_count);
	__try
	  {
	    for (; __f != __l; ++__f)
@@ -864,14 +873,15 @@
	  {
	    // Replacement allocator cannot free existing storage.
	    this->_M_deallocate_nodes(_M_begin());
-	    _M_before_begin._M_nxt = nullptr;
	    _M_deallocate_buckets();
-	    _M_buckets = nullptr;
+	    __hashtable_base::operator=(__ht);
	    std::__alloc_on_copy(__this_alloc, __that_alloc);
-	    __hashtable_base::operator=(__ht);
-	    _M_bucket_count = __ht._M_bucket_count;
+	    _M_buckets = &_M_single_bucket;
+	    _M_bucket_count = 1;
+	    _M_before_begin._M_nxt = nullptr;
	    _M_element_count = __ht._M_element_count;
	    _M_rehash_policy = __ht._M_rehash_policy;
+	    _M_single_bucket = nullptr;
	    __try
	      {
		_M_assign(__ht,
@@ -946,8 +956,14 @@
       _M_assign(const

Re: [Bug libstdc++/61107] stl_algo.h: std::__inplace_stable_partition() doesn't process the whole data range

2014-11-10 Thread François Dumont
I introduced the random tests after Christopher Jefferson's request 
to have more intensive tests on those algos. Is it the whole set of 
tests using random numbers that you don't like, or just the usage of 
mt19937? If the latter, is this new version, using the usual 
random_device I have used so far, better?


If it is the whole usage of random numbers that you don't like, I will 
simply get rid of the new test files.


François

On 10/11/2014 22:45, Jonathan Wakely wrote:

On 10/11/14 21:50 +0100, François Dumont wrote:

Any news about this one?

Here is another version with additional random tests on algos just to 
challenge other combinations of tests.


   PR libstdc++/61107
   * include/bits/stl_algo.h (__inplace_stable_partition): Delete.
   (__stable_partition_adaptive): Return __first is range length is 1.


The first "is" should be "if".

The change to stl_algo.h looks OK.

I don't like the use of mt19937 in the tests, I know you committed a
test I wrote recently that uses mt19937, but that was only meant to
demonstrate the bug for bugzilla, not necessarily as the final test.

The PRNG produces the exact same sequence of numbers every time (when
you don't seed it) so if you can make the test fail using a few
iterations with the PRNG then you can find the input that fails and
just add that input to the testsuite. I didn't do that for the test I
put in bugzilla because I didn't have time to work out which input
caused the memory leak, only that it leaked for *some* easily
reproducible input. I wasn't trying to start a trend where we use
fixed sequences of pseudorandom numbers in lots of tests.
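
That determinism is easy to see (a standalone illustration, not a
testsuite file):

#include <cassert>
#include <random>

int main()
{
  std::mt19937 a, b;        // default-constructed: both seeded with 5489u
  for (int i = 0; i < 1000; ++i)
    assert(a() == b());     // identical sequences on every run
  return 0;
}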




Index: include/bits/stl_algo.h
===================================================================
--- include/bits/stl_algo.h	(revision 217320)
+++ include/bits/stl_algo.h	(working copy)
@@ -1512,34 +1512,6 @@
   // partition
 
   /// This is a helper function...
-  /// Requires __len != 0 and !__pred(*__first),
-  /// same as __stable_partition_adaptive.
-  template<typename _ForwardIterator, typename _Predicate, typename _Distance>
-    _ForwardIterator
-    __inplace_stable_partition(_ForwardIterator __first,
-			       _Predicate __pred, _Distance __len)
-    {
-      if (__len == 1)
-	return __first;
-      _ForwardIterator __middle = __first;
-      std::advance(__middle, __len / 2);
-      _ForwardIterator __left_split =
-	std::__inplace_stable_partition(__first, __pred, __len / 2);
-      // Advance past true-predicate values to satisfy this
-      // function's preconditions.
-      _Distance __right_len = __len - __len / 2;
-      _ForwardIterator __right_split =
-	std::__find_if_not_n(__middle, __right_len, __pred);
-      if (__right_len)
-	__right_split = std::__inplace_stable_partition(__middle,
-							__pred,
-							__right_len);
-      std::rotate(__left_split, __middle, __right_split);
-      std::advance(__left_split, std::distance(__middle, __right_split));
-      return __left_split;
-    }
-
-  /// This is a helper function...
   /// Requires __first != __last and !__pred(__first)
   /// and __len == distance(__first, __last).
   ///
@@ -1554,10 +1526,14 @@
				     _Pointer __buffer,
				     _Distance __buffer_size)
     {
+      if (__len == 1)
+	return __first;
+
       if (__len <= __buffer_size)
	{
	  _ForwardIterator __result1 = __first;
	  _Pointer __result2 = __buffer;
+
	  // The precondition guarantees that !__pred(__first), so
	  // move that element to the buffer before starting the loop.
	  // This ensures that we only call __pred once per element.
@@ -1575,31 +1551,33 @@
		*__result2 = _GLIBCXX_MOVE(*__first);
		++__result2;
	      }
+
	  _GLIBCXX_MOVE3(__buffer, __result2, __result1);
	  return __result1;
	}
-      else
-	{
-	  _ForwardIterator __middle = __first;
-	  std::advance(__middle, __len / 2);
-	  _ForwardIterator __left_split =
-	    std::__stable_partition_adaptive(__first, __middle, __pred,
-					     __len / 2, __buffer,
-					     __buffer_size);
-	  // Advance past true-predicate values to satisfy this
-	  // function's preconditions.
-	  _Distance __right_len = __len - __len / 2;
-	  _ForwardIterator __right_split =
-	    std::__find_if_not_n(__middle, __right_len, __pred);
-	  if (__right_len)
-	    __right_split =
-	      std::__stable_partition_adaptive(__right_split, __last, __pred,
-					       __right_len,
-					       __buffer, __buffer_size);
-	  std::rotate(__left_split, __middle, __right_split);
-	  std::advance(__left_split, std::distance(__middle, __right_split));
-	  return __left_split;
-	}
+
+      _ForwardIterator __middle = __first;
+      std::advance(__middle, __len / 2);
+      _ForwardIterator __left_split =
+	std::__stable_partition_adaptive(__first, __middle, __pred,
+					 __len / 2, __buffer,
+					 __buffer_size);
+
+      // Advance past true-predicate values to satisfy this
+      // function's preconditions.
+      _Distance __right_len = __len - __len / 2;
+      _ForwardIterator __right_split =
+	std::__find_if_not_n(__middle

Re: [Bug libstdc++/61107] stl_algo.h: std::__inplace_stable_partition() doesn't process the whole data range

2014-11-10 Thread François Dumont
No, the random tests didn't show any problem. I had demonstrated the 
problems with the modifications to the existing tests simulating a 
constrained-memory context.


So unless specified otherwise I will commit tomorrow without the tests 
using random numbers.


François


On 10/11/2014 23:20, Jonathan Wakely wrote:

On 10/11/14 23:14 +0100, François Dumont wrote:
   I introduced the random tests after Christopher Jefferson's request 
to have more intensive tests on those algos. Is it the whole set of 
tests using random numbers that you don't like, or just the usage of 
mt19937?


The use of random numbers in general.

If the latter, is this new version, using the usual random_device I have 
used so far, better?


That would be much worse because failures would not be reproducible!

If it is the whole usage of random numbers that you don't like, I will 
simply get rid of the new test files.


Did the new tests fail before your fix to stl_algo.h?

If yes, you could extract the values generated in the case that fails
and add a test using those values (this is what I should have done for
the leaking set tests).

If no, they aren't really testing anything useful.






PR 13631 Problems in messages

2014-11-23 Thread François Dumont

Hello

As we are doing some evolution of the ABI, I would like to take 
the opportunity to merge branch libstdcxx_so_7-2. The first fix was 
about a messages facet issue. So here is the version for the trunk, 
which is the one from the branch plus management of the charset part. 
This way messages<wchar_t> works too.


There are still some uncovered points in this patch:
- I had to make codecvt's _M_c_locale_codecvt member public to access it 
from the messages facet code. How do you want it handled properly? A 
friend function to access it, or do I make the messages facet a friend?
- I haven't used the ABI tag yet. I know that there is a plan to tag 
locale facets; will it work for what I am doing?


Note that I could use std::tuple instead of a combination of 
std::pair, and std::unique_ptr instead of wchar_buffer, if we were 
building it in C++11 mode. Is it possible?


François

Index: config/locale/gnu/messages_members.cc
===================================================================
--- config/locale/gnu/messages_members.cc	(revision 217816)
+++ config/locale/gnu/messages_members.cc	(working copy)
@@ -31,54 +31,281 @@
 #include <locale>
 #include <bits/c++locale_internal.h>
 
+#include <algorithm>
+#include <utility>
+#include <ext/concurrence.h>
+
+namespace
+{
+  using namespace std;
+
+  typedef messages_base::catalog catalog;
+  typedef pair<catalog, pair<const char*, locale> > _MapEntry;
+  typedef pair<bool, pair<const char*, const locale*> > _SearchRes;
+
+  struct Comp
+  {
+    bool operator()(catalog __cat, const _MapEntry& __entry) const
+    { return __cat < __entry.first; }
+
+    bool operator()(const _MapEntry& __entry, catalog __cat) const
+    { return __entry.first < __cat; }
+  };
+
+  class Catalogs
+  {
+  public:
+    Catalogs() : _M_counter(0), _M_nb_entry(0) { }
+
+    ~Catalogs()
+    {
+      if (_M_nb_entry)
+	{
+	  for (size_t i = 0; i != _M_nb_entry; ++i)
+	    delete[] _M_map[i].second.first;
+	  delete[] _M_map;
+	}
+    }
+
+    catalog
+    _M_add(const string& __s, locale __l)
+    {
+      __gnu_cxx::__scoped_lock lock(_M_mutex);
+
+      _MapEntry* __new_map = new _MapEntry[_M_nb_entry + 1];
+      __try
+	{
+	  copy(_M_map, _M_map + _M_nb_entry, __new_map);
+	  char* __s_copy = new char[__s.size() + 1];
+	  __s.copy(__s_copy, __s.size());
+	  __s_copy[__s.size()] = 0;
+	  __new_map[_M_nb_entry]
+	    = make_pair(_M_counter, make_pair(__s_copy, __l));
+	}
+      __catch(...)
+	{
+	  delete[] __new_map;
+	  __throw_exception_again;
+	}
+
+      // The counter is not likely to roll over unless catalogs keep being
+      // opened and closed, which is considered an application mistake for
+      // the moment.
+      catalog __cat = _M_counter++;
+      delete[] _M_map;
+      _M_map = __new_map;
+      ++_M_nb_entry;
+
+      return __cat;
+    }
+
+    void
+    _M_erase(catalog __c)
+    {
+      __gnu_cxx::__scoped_lock lock(_M_mutex);
+
+      _MapEntry* __entry =
+	lower_bound(_M_map, _M_map + _M_nb_entry, __c, Comp());
+      if (__entry == _M_map + _M_nb_entry || __entry->first != __c)
+	return;
+
+      _MapEntry* __new_map =
+	_M_nb_entry > 1 ? new _MapEntry[_M_nb_entry - 1] : 0;
+      copy(__entry + 1, _M_map + _M_nb_entry,
+	   copy(_M_map, __entry, __new_map));
+
+      delete[] __entry->second.first;
+      delete[] _M_map;
+      _M_map = __new_map;
+      --_M_nb_entry;
+    }
+
+    _SearchRes
+    _M_get(catalog __c) const
+    {
+      __gnu_cxx::__scoped_lock lock(_M_mutex);
+
+      const _MapEntry* __entry =
+	lower_bound(_M_map, _M_map + _M_nb_entry, __c, Comp());
+      if (__entry != _M_map + _M_nb_entry && __entry->first == __c)
+	return _SearchRes(true,
+			  make_pair(__entry->second.first,
+				    &(__entry->second.second)));
+      return _SearchRes(false, make_pair((const char*)0, (locale*)0));
+    }
+
+  private:
+    mutable __gnu_cxx::__mutex _M_mutex;
+    catalog _M_counter;
+    _MapEntry* _M_map;
+    size_t _M_nb_entry;
+  };
+
+  Catalogs&
+  get_catalogs()
+  {
+    static Catalogs __catalogs;
+    return __catalogs;
+  }
+
+  struct wchar_buffer
+  {
+    wchar_buffer(std::size_t __size)
+      : _M_buffer(new wchar_t[__size])
+    { }
+
+    ~wchar_buffer()
+    { delete[] _M_buffer; }
+
+    wchar_t*
+    get()
+    { return _M_buffer; }
+
+  private:
+    wchar_t* _M_buffer;
+  };
+
+  const char*
+  get_glibc_msg(__c_locale __attribute__((unused)) __locale_messages,
+		const char* __domainname,
+		const char* __dfault)
+  {
+#if __GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ > 2)
+    std::__c_locale __old = __uselocale(__locale_messages);
+    const char* __msg =
+      const_cast<const char*>(dgettext(__domainname, __dfault));
+    __uselocale(__old);
+#else
+    char* __old = setlocale(LC_ALL, 0);
+    const size_t __len = strlen(__old) + 1;
+    char* __sav = new char[__len];
+    memcpy(__sav, __old, __len);
+    setlocale(LC_ALL, _M_name_messages);
+    const char* __msg = dgettext(__domainname, __dfault);
+    setlocale(LC_ALL, __sav);
+    delete [] __sav;
+#endif

Re: PR 13631 Problems in messages

2014-11-24 Thread François Dumont

On 24/11/2014 01:23, Jonathan Wakely wrote:

On 24/11/14 00:13 +0100, François Dumont wrote:

Hello

As we are doing some evolution of the ABI, I would like to take 
the opportunity to merge branch libstdcxx_so_7-2. The first fix was 


I don't think we want to merge everything, but it's certainly worth
looking to see if there are some fixes on that branch worth taking.


Indeed, there are only 2 patches on this branch and haven't plan to 
merge the other one for the debug mode. We will just be able to close 
the branch then.




It would have been better to do during stage 1 though :-\


Sorry about that, I submit patches when I can and you can delay them as 
you want to. Just tell me when we will be able to make it in.




about a messages facet issue. So here is the version for the trunk 
which is the one from the branch plus management of the charset part. 
This way messages<wchar_t> works too.


   There are still some uncovered points in this patch:
- I had to make codecvt's _M_c_locale_codecvt member public to access it 
from the messages facet code. How do you want it handled properly? A 
friend function to access it, or do I make the messages facet a friend?
- I haven't used the ABI tag yet. I know that there is a plan to tag 
locale facets; will it work for what I am doing?


Unless I'm missing something you're not making any ABI changes to
std::messages, just making the definitions of some functions no longer
inline.


Yes, this is indeed what had been identified as the ABI-breaking change, 
not a big one. Considering it now, I think it is not a real one: 
applications built against the former library will keep the inlined 
wrong behavior, while rebuilt applications will use the new, correct 
one. Is there really a problem?




   Note that I could use std::tuple instead of a combination of 
std::pair, and std::unique_ptr instead of wchar_buffer, if we were 
building it in C++11 mode. Is it possible?


Yes, the symlink to the messages_members.cc file would need to be
moved from src/c++98/Makefile.am to src/c++11/Makefile.am


Index: include/bits/locale_facets_nonio.h
===================================================================
--- include/bits/locale_facets_nonio.h	(revision 217816)
+++ include/bits/locale_facets_nonio.h	(working copy)
@@ -1842,22 +1842,6 @@
        */
       virtual void
       do_close(catalog) const;
-
-      // Returns a locale and codeset-converted string, given a char* message.
-      char*
-      _M_convert_to_char(const string_type& __msg) const
-      {
-	// XXX
-	return reinterpret_cast<char*>(const_cast<_CharT*>(__msg.c_str()));
-      }
-
-      // Returns a locale and codeset-converted string, given a char* message.
-      string_type
-      _M_convert_from_char(char*) const
-      {
-	// XXX
-	return string_type();
-      }
     };


Those members are used by the ieee_1003.1-2001 locale.



Yes, I had planned to check but forgot.

I will prepare an updated version of this patch with that information 
and we will see how to handle it.


François



Re: PR 13631 Problems in messages

2014-11-27 Thread François Dumont

Hi

Here is a revisited version. I finally went without compiling in 
C++11, as I prefer to use __builtin_alloca rather than std::unique_ptr 
and heap memory. I also realized that I could use codecvt::max_length 
to know the maximum number of multibyte characters.
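
For illustration, the sizing idea looks like this (a simplified sketch,
not the patch code):

#include <cstddef>
#include <cwchar>
#include <locale>
#include <string>

std::string
narrow(const std::wstring& __ws, const std::locale& __loc)
{
  typedef std::codecvt<wchar_t, char, std::mbstate_t> cvt_type;
  const cvt_type& __cvt = std::use_facet<cvt_type>(__loc);

  // max_length() bounds the multibyte length of one wide character, so
  // this stack buffer is always large enough for the whole conversion.
  std::size_t __len = __ws.size() * __cvt.max_length();
  char* __buf = static_cast<char*>(__builtin_alloca(__len));

  std::mbstate_t __state = std::mbstate_t();
  const wchar_t* __from_next;
  char* __to_next;
  __cvt.out(__state, __ws.data(), __ws.data() + __ws.size(), __from_next,
	    __buf, __buf + __len, __to_next);
  return std::string(__buf, __to_next);
}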


Tested under Linux x86_64 with all locales installed. There are 
some failures, but they do not come from this patch and will surely be 
the subject of another mail.


2014-11-28  François Dumont  fdum...@gcc.gnu.org

PR libstdc++/13631
* include/bits/codecvt.h (__get_c_locale): New.
* config/locale/gnu/messages_members.h, messages_members.cc: Prefer
dgettext usage to gettext to allow usage of several catalogs at the
same time. Add an internal cache to map catalog names to catalog ids.
Add charset management.
* testsuite/22_locale/messages/13631.cc: New.
* testsuite/22_locale/messages/members/char/2.cc: Use also fr_FR locale
for charset conversion to get the expected accented character.

Do we need any ABI tag? Do we need to wait?

François


On 24/11/2014 01:23, Jonathan Wakely wrote:

On 24/11/14 00:13 +0100, François Dumont wrote:

Hello

As we are doing some evolution of the ABI, I would like to take 
the opportunity to merge branch libstdcxx_so_7-2. The first fix was 


I don't think we want to merge everything, but it's certainly worth
looking to see if there are some fixes on that branch worth taking.

It would have been better to do during stage 1 though :-\

about a messages facet issue. So here is the version for the trunk 
which is the one from the branch plus management of the charset part. 
This way messages<wchar_t> works too.


   There are still some uncovered points in this patch:
- I had to make codecvt's _M_c_locale_codecvt member public to access it 
from the messages facet code. How do you want it handled properly? A 
friend function to access it, or do I make the messages facet a friend?
- I haven't used the ABI tag yet. I know that there is a plan to tag 
locale facets; will it work for what I am doing?


Unless I'm missing something you're not making any ABI changes to
std::messages, just making the definitions of some functions no longer
inline.

   Note that I could use std::tuple instead of a combination of 
std::pair, and std::unique_ptr instead of wchar_buffer, if we were 
building it in C++11 mode. Is it possible?


Yes, the symlink to the messages_members.cc file would need to be
moved from src/c++98/Makefile.am to src/c++11/Makefile.am


Index: include/bits/locale_facets_nonio.h
===================================================================
--- include/bits/locale_facets_nonio.h	(revision 217816)
+++ include/bits/locale_facets_nonio.h	(working copy)
@@ -1842,22 +1842,6 @@
        */
       virtual void
       do_close(catalog) const;
-
-      // Returns a locale and codeset-converted string, given a char* message.
-      char*
-      _M_convert_to_char(const string_type& __msg) const
-      {
-	// XXX
-	return reinterpret_cast<char*>(const_cast<_CharT*>(__msg.c_str()));
-      }
-
-      // Returns a locale and codeset-converted string, given a char* message.
-      string_type
-      _M_convert_from_char(char*) const
-      {
-	// XXX
-	return string_type();
-      }
     };


Those members are used by the ieee_1003.1-2001 locale.




Index: config/locale/gnu/messages_members.cc
===================================================================
--- config/locale/gnu/messages_members.cc	(revision 218027)
+++ config/locale/gnu/messages_members.cc	(working copy)
@@ -31,54 +31,272 @@
 #include <locale>
 #include <bits/c++locale_internal.h>
 
+#include <algorithm>
+#include <utility>
+
+#include <ext/concurrence.h>
+
+namespace
+{
+  using namespace std;
+
+  class Catalogs
+  {
+  public:
+    typedef messages_base::catalog catalog_id;
+
+    struct catalog_info
+    {
+      catalog_info()
+      { }
+
+      catalog_info(catalog_id __id, const char* __domain, locale __loc)
+	: _M_id(__id), _M_domain(__domain), _M_locale(__loc)
+      { }
+
+      catalog_id _M_id;
+      const char* _M_domain;
+      locale _M_locale;
+    };
+
+    typedef pair<const char*, locale> result_type;
+
+    Catalogs() : _M_counter(0), _M_nb_entry(0) { }
+
+    ~Catalogs()
+    {
+      if (_M_nb_entry)
+	{
+	  for (size_t i = 0; i != _M_nb_entry; ++i)
+	    delete[] _M_map[i]._M_domain;
+	  delete[] _M_map;
+	}
+    }
+
+    catalog_id
+    _M_add(const string& __domain, locale __l)
+    {
+      __gnu_cxx::__scoped_lock lock(_M_mutex);
+
+      catalog_info* __new_map = new catalog_info[_M_nb_entry + 1];
+      __try
+	{
+	  copy(_M_map, _M_map + _M_nb_entry, __new_map);
+	  char* __domain_copy = new char[__domain.size() + 1];
+	  __domain.copy(__domain_copy, __domain.size());
+	  __domain_copy[__domain.size()] = 0;
+	  __new_map[_M_nb_entry] = catalog_info(_M_counter, __domain_copy, __l);
+	}
+      __catch(...)
+	{
+	  delete[] __new_map;
+	  __throw_exception_again;
+	}
+
+      // The counter is not likely

Re: [Bug libstdc++/62313] Data race in debug iterators

2014-09-25 Thread François Dumont



Apart from those minor adjustments I think this looks good, but I'd
like to know that it has been tested with -fsanitize=thread, even if
only lightly tested.




Hi

Dmitry, who reported the bug, confirmed the fix. Can I go ahead and 
commit?


François


Add myself as libstdc++ special modes maintainer

2014-09-29 Thread François Dumont
I added myself as libstdc++ special modes maintainer. Special modes are 
debug, profile and parallel modes.


Thanks for your trust.

François



Re: [Bug libstdc++/62313] Data race in debug iterators

2014-09-30 Thread François Dumont

Indeed, I forgot to check the pretty-printer tests; I will.

François

On 30/09/2014 17:32, Jonathan Wakely wrote:

On 26/09/14 11:05 +0100, Jonathan Wakely wrote:

On 26/09/14 00:00 +0200, François Dumont wrote:



Apart from those minor adjustments I think this looks good, but I'd
like to know that it has been tested with -fsanitize=thread, even if
only lightly tested.




Hi

  Dmitry, who reported the bug, confirmed the fix. Can I go ahead 
and commit?


Yes, OK.


This caused some failures in the printer tests:

Running
/home/jwakely/src/gcc/gcc/libstdc++-v3/testsuite/libstdc++-prettyprinters/prettyprinters.exp 
...

FAIL: libstdc++-prettyprinters/debug.cc print deqiter
FAIL: libstdc++-prettyprinters/debug.cc print lstiter
FAIL: libstdc++-prettyprinters/debug.cc print lstciter
FAIL: libstdc++-prettyprinters/debug.cc print mpiter
FAIL: libstdc++-prettyprinters/debug.cc print spciter






Re: [Bug libstdc++/62313] Data race in debug iterators

2014-09-30 Thread François Dumont

Hi

I prefer to submit this patch to you because I am not very 
comfortable with Python stuff.


I simply rely on the Python cast feature. It doesn't really matter, 
but is it going to simply treat the debug iterator as a normal one, or 
does it go through the C++ explicit cast operator on debug iterators?


François


On 30/09/2014 17:32, Jonathan Wakely wrote:

On 26/09/14 11:05 +0100, Jonathan Wakely wrote:

On 26/09/14 00:00 +0200, François Dumont wrote:



Apart from those minor adjustments I think this looks good, but I'd
like to know that it has been tested with -fsanitize=thread, even if
only lightly tested.




Hi

  Dmitry, who reported the bug, confirmed the fix. Can I go ahead 
and commit ?


Yes, OK.


This caused some failures in the printer tests:

Running
/home/jwakely/src/gcc/gcc/libstdc++-v3/testsuite/libstdc++-prettyprinters/prettyprinters.exp 
...

FAIL: libstdc++-prettyprinters/debug.cc print deqiter
FAIL: libstdc++-prettyprinters/debug.cc print lstiter
FAIL: libstdc++-prettyprinters/debug.cc print lstciter
FAIL: libstdc++-prettyprinters/debug.cc print mpiter
FAIL: libstdc++-prettyprinters/debug.cc print spciter




Index: python/libstdcxx/v6/printers.py
===================================================================
--- python/libstdcxx/v6/printers.py	(revision 215741)
+++ python/libstdcxx/v6/printers.py	(working copy)
@@ -460,7 +460,7 @@
     # and return the wrapped iterator value.
     def to_string (self):
         itype = self.val.type.template_argument(0)
-        return self.val['_M_current'].cast(itype)
+        return self.val.cast(itype)
 
 class StdMapPrinter:
     "Print a std::map or std::multimap"


Re: Profile mode maintenance patch

2014-10-04 Thread François Dumont

On 23/09/2014 13:27, Jonathan Wakely wrote:


Yes, OK for trunk - thanks very much.


Hi

There was in fact one last test failing, ext/profile/mh.cc, a 
profile-mode-specific test. It must have been failing for quite a while, 
since malloc hooks have been deprecated. It normally tests the profile 
mode's protection against recursion when the memory allocation functions 
are redefined. It was based on malloc, but we in fact use operator new, 
so I rewrote the test using the new/delete operators.


This new test version is attached; I removed those two lines at the 
beginning:

// { dg-do compile { target *-*-linux* *-*-gnu* } }
// { dg-xfail-if "" { uclibc } { "*" } { "" } }

I think that this test can now be executed, and I see no reason why it 
should fail with uclibc. Do you confirm?


I attached the full patch again. I also removed a useless virtual 
destructor and methods; there is no need for polymorphism.


François



profile.patch.bz2
Description: application/bzip
// -*- C++ -*-

// Copyright (C) 2006-2014 Free Software Foundation, Inc.
//
// This file is part of the GNU ISO C++ Library.  This library is free
// software; you can redistribute it and/or modify it under the
// terms of the GNU General Public License as published by the
// Free Software Foundation; either version 3, or (at your option)
// any later version.
//
// This library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License along
// with this library; see the file COPYING3.  If not see
// http://www.gnu.org/licenses/.

// { dg-require-profile-mode "" }

#include <vector>
#include <cstdlib>
#include <new>

using std::vector;

void* operator new(std::size_t size) throw(std::bad_alloc)
{
  void* p = std::malloc(size);
  if (!p)
    throw std::bad_alloc();
  return p;
}

void* operator new (std::size_t size, const std::nothrow_t&) throw()
{
  // With _GLIBCXX_PROFILE, the instrumentation of the vector constructor
  // will call back into this new operator.
  vector<int> v;
  return std::malloc(size);
}

void operator delete(void* p) throw()
{
  if (p)
    std::free(p);
}

int
main()
{
  vector<int> v;
  return 0;
}


[Bug libstdc++/63456] unordered_map incorrectly frees _M_single_bucket. Patch Included

2014-10-05 Thread François Dumont

Hi

I just committed this trivial bug fix.

Shall I go ahead and apply it to the 4.9 branch too?

2014-10-05  François Dumont  fdum...@gcc.gnu.org

PR libstdc++/63456
* include/bits/hashtable.h (_M_uses_single_bucket(__bucket_type*)):
Test the parameter.
* testsuite/23_containers/unordered_set/63456.cc: New.

François
Index: include/bits/hashtable.h
===================================================================
--- include/bits/hashtable.h	(revision 215902)
+++ include/bits/hashtable.h	(working copy)
@@ -326,7 +326,7 @@
 
       bool
       _M_uses_single_bucket(__bucket_type* __bkts) const
-      { return __builtin_expect(_M_buckets == &_M_single_bucket, false); }
+      { return __builtin_expect(__bkts == &_M_single_bucket, false); }
 
       bool
       _M_uses_single_bucket() const
Index: testsuite/23_containers/unordered_set/63456.cc
===================================================================
--- testsuite/23_containers/unordered_set/63456.cc	(revision 0)
+++ testsuite/23_containers/unordered_set/63456.cc	(working copy)
@@ -0,0 +1,36 @@
+// Copyright (C) 2014 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// http://www.gnu.org/licenses/.
+
+// { dg-options "-std=gnu++11" }
+
+#include <unordered_set>
+
+#include <testsuite_hooks.h>
+
+void test01()
+{
+  std::unordered_setint s1, s2;
+  s2.insert(2);
+
+  s1 = s2;
+}
+
+int main()
+{
+  test01();
+  return 0;
+}


Re: [Bug libstdc++/63456] unordered_map incorrectly frees _M_single_bucket. Patch Included

2014-10-05 Thread François Dumont

On 05/10/2014 21:37, Paolo Carlini wrote:

Hi,

On 10/05/2014 08:50 PM, François Dumont wrote:

+#include <testsuite_hooks.h>

Seems redundant.

Thanks!
Paolo.

Yes it is, and in fact I had removed it before the real commit; I should 
have updated the patch.


François


Re: sort_heap complexity guarantee

2014-10-06 Thread François Dumont

On 05/10/2014 22:54, Marc Glisse wrote:

On Sun, 5 Oct 2014, François Dumont wrote:

   I took a look at PR 61217 regarding the pop_heap complexity guarantee. 
Looks like we have no test checking the complexity of our algos, so I 
started writing some, beginning with the heap operations. I found no 
issue with make_heap, push_heap and pop_heap, despite what the bug 
report says; however, the attached testcase for sort_heap is failing.


   The Standard says std::sort_heap shall use less than N * log(N) 
comparisons, but with my test using 1000 random values the test is 
showing:


8687 comparisons on 6907.76 max allowed

   Is this a known issue with sort_heap? Do you confirm that the test 
is valid?


I would first look for confirmation that the standard didn't just 
forget a big-O or something. I would expect an implementation as n 
calls to pop_heap to be legal, and if pop_heap makes 2*log(n) 
comparisons, that naively sums to too much. And I don't expect the 
standard to contain an advanced amortized analysis or anything like 
that...


Good point. With n calls to pop_heap it means that the limit must be 
2*log(1) + 2*log(2) + ... + 2*log(n), which is 2*log(n!) and which is 
also necessarily < 2*n*log(n). I guess the Standard committee has 
forgotten the factor 2 in the limit, so this is what I am using as the 
limit in the final test, unless someone prefers the stricter 2*log(n!)?


Ok to commit those new tests?

2014-10-06  François Dumont  fdum...@gcc.gnu.org

* testsuite/util/testsuite_counter_type.h
(counter_type::operator<(const counter_type&)): Update
less_compare_count when called.
* testsuite/25_algorithms/make_heap/complexity.cc: New.
* testsuite/25_algorithms/pop_heap/complexity.cc: New.
* testsuite/25_algorithms/push_heap/complexity.cc: New.
* testsuite/25_algorithms/sort_heap/complexity.cc: New.


Tested under Linux x86_64.

François
Index: testsuite/25_algorithms/make_heap/complexity.cc
===================================================================
--- testsuite/25_algorithms/make_heap/complexity.cc	(revision 0)
+++ testsuite/25_algorithms/make_heap/complexity.cc	(working copy)
@@ -0,0 +1,50 @@
+// Copyright (C) 2014 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// http://www.gnu.org/licenses/.
+
+// { dg-options "-std=gnu++11" }
+
+#include <random>
+#include <vector>
+#include <algorithm>
+
+#include <testsuite_counter_type.h>
+#include <testsuite_hooks.h>
+
+void test01()
+{
+  using __gnu_test::counter_type;
+  const std::size_t nb_values = 1000;
+
+  std::random_device dev;
+  std::uniform_int_distribution<int> dist;
+  std::vector<counter_type> values;
+  values.reserve(nb_values);
+  for (std::size_t i = 0; i != nb_values; ++i)
+    values.push_back(dist(dev));
+
+  counter_type::reset();
+
+  std::make_heap(values.begin(), values.end());
+
+  VERIFY( counter_type::less_compare_count <= 3.0 * nb_values );
+}
+}
+
+int main()
+{
+  test01();
+  return 0;
+}
Index: testsuite/25_algorithms/pop_heap/complexity.cc
===
--- testsuite/25_algorithms/pop_heap/complexity.cc	(revision 0)
+++ testsuite/25_algorithms/pop_heap/complexity.cc	(working copy)
@@ -0,0 +1,53 @@
+// Copyright (C) 2014 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// http://www.gnu.org/licenses/.
+
+// { dg-options "-std=gnu++11" }
+
+#include <cmath>
+#include <random>
+#include <vector>
+#include <algorithm>
+
+#include <testsuite_counter_type.h>
+#include <testsuite_hooks.h>
+
+void test01()
+{
+  using __gnu_test::counter_type;
+  const std::size_t nb_values = 1000;
+
+  std::random_device dev;
+  std::uniform_int_distribution<int> dist;
+  std::vector<counter_type>

Re: sort_heap complexity guarantee

2014-10-07 Thread François Dumont

On 06/10/2014 23:05, Daniel Krügler wrote:

2014-10-06 23:00 GMT+02:00 François Dumont frs.dum...@gmail.com:

On 05/10/2014 22:54, Marc Glisse wrote:

On Sun, 5 Oct 2014, François Dumont wrote:


I took a look at PR 61217 regarding the pop_heap complexity guarantee.
Looks like we have no test checking the complexity of our algos, so I
started writing some, beginning with the heap operations. I found no issue
with make_heap, push_heap and pop_heap, despite what the bug report says;
however, the attached testcase for sort_heap is failing.

The Standard says std::sort_heap shall use less than N * log(N)
comparisons, but with my test using 1000 random values the test is showing:

8687 comparisons on 6907.76 max allowed

Is this a known issue with sort_heap? Do you confirm that the test is
valid?

I would first look for confirmation that the standard didn't just forget a
big-O or something. I would expect an implementation as n calls to pop_heap
to be legal, and if pop_heap makes 2*log(n) comparisons, that naively sums
to too much. And I don't expect the standard to contain an advanced
amortized analysis or anything like that...


Good point. With n calls to pop_heap it means that the limit must be 2*log(1) +
2*log(2) + ... + 2*log(n), which is 2*log(n!) and which is also necessarily <
2*n*log(n). I guess the Standard committee has forgotten the factor 2 in the
limit, so this is what I am using as the limit in the final test, unless someone
prefers the stricter 2*log(n!)?

François, could you please submit a corresponding LWG issue by sending
an email using the recipe described here:

http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-active.html#submit_issue

?


I just did, requesting to use 2N log(N).

And is it ok to commit those?

François



[Bug libstdc++/63500] [4.9/5 Regression] bug in debug version of std::make_move_iterator?

2014-10-14 Thread François Dumont

Hi

Here is a proposal to fix the issue with iterators which do not 
expose lvalue references when dereferenced. I simply chose to detect 
such an issue in c++11 mode thanks to the is_lvalue_reference meta function.


2014-10-15  François Dumont  fdum...@gcc.gnu.org

PR libstdc++/63500
* include/bits/cpp_type_traits.h (__true_type): Add __value constant.
(__false_type): Likewise.
* include/debug/functions.h (__foreign_iterator_aux2): Do not check for
foreign iterators if the input iterator returns rvalue references.
* testsuite/23_containers/vector/63500.cc: New.

Tested under Linux x86_64.

François

Index: include/bits/cpp_type_traits.h
===================================================================
--- include/bits/cpp_type_traits.h	(revision 216158)
+++ include/bits/cpp_type_traits.h	(working copy)
@@ -79,9 +79,12 @@
 {
 _GLIBCXX_BEGIN_NAMESPACE_VERSION
 
-  struct __true_type { };
-  struct __false_type { };
+  struct __true_type
+  { enum { __value = 1 }; };
 
+  struct __false_type
+  { enum { __value = 0 }; };
+
   template<bool>
     struct __truth_type
     { typedef __false_type __type; };
Index: include/debug/functions.h
===================================================================
--- include/debug/functions.h	(revision 216158)
+++ include/debug/functions.h	(working copy)
@@ -34,7 +34,7 @@
					  // _Iter_base
 #include <bits/cpp_type_traits.h>	  // for __is_integer
 #include <bits/move.h>			  // for __addressof and addressof
-# include <bits/stl_function.h>		  // for less
+#include <bits/stl_function.h>		  // for less
 #if __cplusplus >= 201103L
 # include <type_traits>			  // for is_lvalue_reference and __and_
 #endif
@@ -252,8 +252,21 @@
			    const _InputIterator& __other,
			    const _InputIterator& __other_end)
     {
+#if __cplusplus >= 201103L
+      typedef std::iterator_traits<_InputIterator> _InputIteTraits;
+      typedef typename _InputIteTraits::reference _InputIteRefType;
+#endif
       return __foreign_iterator_aux3(__it, __other, __other_end,
+#if __cplusplus < 201103L
				     _Is_contiguous_sequence<_Sequence>());
+#else
+      typename std::conditional<
	std::__and_<std::integral_constant<
	  bool, _Is_contiguous_sequence<_Sequence>::__value>,
		std::is_lvalue_reference<_InputIteRefType> >::value,
	std::__true_type,
	std::__false_type>::type());
+#endif
     }
 
   /* Handle the case where we aren't really inserting a range after all */
Index: testsuite/23_containers/vector/63500.cc
===
--- testsuite/23_containers/vector/63500.cc	(revision 0)
+++ testsuite/23_containers/vector/63500.cc	(working copy)
@@ -0,0 +1,39 @@
+// -*- C++ -*-
+
+// Copyright (C) 2014 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// http://www.gnu.org/licenses/.
+
+// { dg-options "-std=gnu++11" }
+// { dg-do compile }
+
+#include <memory>
+#include <iterator>
+#include <debug/vector>
+
+class Foo
+{};
+
+void
+test01()
+{
+  __gnu_debug::vector<std::unique_ptr<Foo>> v;
+  __gnu_debug::vector<std::unique_ptr<Foo>> w;
+
+  v.insert(end(v),
+	   make_move_iterator(begin(w)),
+	   make_move_iterator(end(w)));
+}


Re: [Bug libstdc++/63500] [4.9/5 Regression] bug in debug version of std::make_move_iterator?

2014-10-15 Thread François Dumont

On 15/10/2014 13:10, Jonathan Wakely wrote:


I find this much easier to read:

#if __cplusplus < 201103L
  typedef _Is_contiguous_sequence<_Sequence> __tag;
#else
  using __lvalref = std::is_lvalue_reference<
    typename std::iterator_traits<_InputIterator>::reference>;
  using __contiguous = _Is_contiguous_sequence<_Sequence>;
  using __tag = typename std::conditional<__lvalref::value, __contiguous,
					  std::__false_type>::type;
#endif
  return __foreign_iterator_aux3(__it, __other, __other_end, __tag());

It only has one preprocessor condition and it avoids mismatched
parentheses caused by opening the function parameter list once but
closing it twice in two different branches.

It only has one preprocessor condition and it avoids mismatched
parentheses caused by opening the function parameter list once but
closing it twice in two different branches.



That's much better indeed.

Shall we go with this? Of course, we are simply accepting that we 
can't check for foreign iterators when some iterator adapter comes 
in between. I hope one day to detect invalid usages even in this context.


2014-10-16  François Dumont  fdum...@gcc.gnu.org

PR libstdc++/63500
* include/debug/functions.h (__foreign_iterator_aux2): Do not check for
foreign iterators if the input iterator returns rvalue references.
* testsuite/23_containers/vector/63500.cc: New.

François

Index: include/debug/functions.h
===================================================================
--- include/debug/functions.h	(revision 216279)
+++ include/debug/functions.h	(working copy)
@@ -34,7 +34,7 @@
					  // _Iter_base
 #include <bits/cpp_type_traits.h>	  // for __is_integer
 #include <bits/move.h>			  // for __addressof and addressof
-# include <bits/stl_function.h>		  // for less
+#include <bits/stl_function.h>		  // for less
 #if __cplusplus >= 201103L
 # include <type_traits>			  // for is_lvalue_reference and __and_
 #endif
@@ -252,8 +252,16 @@
			    const _InputIterator& __other,
			    const _InputIterator& __other_end)
     {
-      return __foreign_iterator_aux3(__it, __other, __other_end,
-				     _Is_contiguous_sequence<_Sequence>());
+#if __cplusplus < 201103L
+      typedef _Is_contiguous_sequence<_Sequence> __tag;
+#else
+      using __lvalref = std::is_lvalue_reference<
	typename std::iterator_traits<_InputIterator>::reference>;
+      using __contiguous = _Is_contiguous_sequence<_Sequence>;
+      using __tag = typename std::conditional<__lvalref::value, __contiguous,
					      std::__false_type>::type;
+#endif
+      return __foreign_iterator_aux3(__it, __other, __other_end, __tag());
     }
 
   /* Handle the case where we aren't really inserting a range after all */
Index: testsuite/23_containers/vector/63500.cc
===================================================================
--- testsuite/23_containers/vector/63500.cc	(revision 0)
+++ testsuite/23_containers/vector/63500.cc	(working copy)
@@ -0,0 +1,39 @@
+// -*- C++ -*-
+
+// Copyright (C) 2014 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// http://www.gnu.org/licenses/.
+
+// { dg-options "-std=gnu++11" }
+// { dg-do compile }
+
+#include <memory>
+#include <iterator>
+#include <debug/vector>
+
+class Foo
+{};
+
+void
+test01()
+{
+  __gnu_debug::vector<std::unique_ptr<Foo>> v;
+  __gnu_debug::vector<std::unique_ptr<Foo>> w;
+
+  v.insert(end(v),
+	   make_move_iterator(begin(w)),
+	   make_move_iterator(end(w)));
+}


[Bug libstdc++/61107] stl_algo.h: std::__inplace_stable_partition() doesn't process the whole data range

2014-10-17 Thread François Dumont

Hi

As proposed in the bug report I just removed 
__inplace_stable_partition, as __stable_partition_adaptive is able to 
handle a 0 buffer size.


To test this bug I introduced overloads of the new/delete operators in 
the testsuite utils. The existing set_memory_limits has no impact on 
operator new. I wonder whether some tests using it really have the expected behavior.
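
To sketch the idea (hypothetical names; the real testsuite_new_operators.h
may well differ), the overloads simply enforce a caller-set cap, which
set_memory_limits cannot do for operator new:

#include <new>
#include <cstdlib>
#include <cstddef>

static std::size_t new_limit = static_cast<std::size_t>(-1);

void set_new_limit(std::size_t n) { new_limit = n; }

void* operator new(std::size_t size)
{
  // Unlike set_memory_limits, this really intercepts operator new.
  if (size > new_limit)
    throw std::bad_alloc();
  if (void* p = std::malloc(size))
    return p;
  throw std::bad_alloc();
}

void operator delete(void* p) noexcept
{ std::free(p); }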


I also tested other algos that try to use a buffer and didn't find 
any issue. Those algos however can't be simplified like stable_partition.


2014-10-16  François Dumont fdum...@gcc.gnu.org

PR libstdc++/61107
* include/bits/stl_algo.h (__inplace_stable_partition): Delete.
(__stable_partition): Adapt.
* testsuite/util/testsuite_new_operators.h: New.
* testsuite/25_algorithms/stable_sort/1.cc: Test algo in simulated
constrained memory context.
* testsuite/25_algorithms/inplace_merge/1.cc: Likewise.
* testsuite/25_algorithms/stable_partition/1.cc: Likewise.

Tested under Linux x86_64.

Ok to commit ?

François
Index: include/bits/stl_algo.h
===
--- include/bits/stl_algo.h	(revision 216348)
+++ include/bits/stl_algo.h	(working copy)
@@ -1512,34 +1512,6 @@
   // partition
 
   /// This is a helper function...
-  /// Requires __len != 0 and !__pred(*__first),
-  /// same as __stable_partition_adaptive.
-  template<typename _ForwardIterator, typename _Predicate, typename _Distance>
-_ForwardIterator
-__inplace_stable_partition(_ForwardIterator __first,
-			   _Predicate __pred, _Distance __len)
-{
-  if (__len == 1)
-	return __first;
-  _ForwardIterator __middle = __first;
-  std::advance(__middle, __len / 2);
-  _ForwardIterator __left_split =
-	std::__inplace_stable_partition(__first, __pred, __len / 2);
-  // Advance past true-predicate values to satisfy this
-  // function's preconditions.
-  _Distance __right_len = __len - __len / 2;
-  _ForwardIterator __right_split =
-	std::__find_if_not_n(__middle, __right_len, __pred);
-  if (__right_len)
-	__right_split = std::__inplace_stable_partition(__middle,
-			__pred,
-			__right_len);
-  std::rotate(__left_split, __middle, __right_split);
-  std::advance(__left_split, std::distance(__middle, __right_split));
-  return __left_split;
-}
-
-  /// This is a helper function...
   /// Requires __first != __last and !__pred(__first)
   /// and __len == distance(__first, __last).
   ///
@@ -1554,10 +1526,14 @@
 _Pointer __buffer,
 _Distance __buffer_size)
 {
+  if (__len == 1)
+	return __first;
+
   if (__len <= __buffer_size)
 	{
 	  _ForwardIterator __result1 = __first;
 	  _Pointer __result2 = __buffer;
+
 	  // The precondition guarantees that !__pred(__first), so
 	  // move that element to the buffer before starting the loop.
 	  // This ensures that we only call __pred once per element.
@@ -1575,11 +1551,11 @@
 		*__result2 = _GLIBCXX_MOVE(*__first);
 		++__result2;
 	  }
+
 	  _GLIBCXX_MOVE3(__buffer, __result2, __result1);
 	  return __result1;
 	}
-  else
-	{
+
 	  _ForwardIterator __middle = __first;
 	  std::advance(__middle, __len / 2);
 	  _ForwardIterator __left_split =
@@ -1586,21 +1562,23 @@
 	std::__stable_partition_adaptive(__first, __middle, __pred,
 	 __len / 2, __buffer,
 	 __buffer_size);
+
 	  // Advance past true-predicate values to satisfy this
 	  // function's preconditions.
 	  _Distance __right_len = __len - __len / 2;
 	  _ForwardIterator __right_split =
 	std::__find_if_not_n(__middle, __right_len, __pred);
+
 	  if (__right_len)
 	__right_split =
 	  std::__stable_partition_adaptive(__right_split, __last, __pred,
 	   __right_len,
 	   __buffer, __buffer_size);
+
 	  std::rotate(__left_split, __middle, __right_split);
 	  std::advance(__left_split, std::distance(__middle, __right_split));
 	  return __left_split;
 	}
-}
 
   template<typename _ForwardIterator, typename _Predicate>
 _ForwardIterator
@@ -1618,16 +1596,11 @@
 	_DistanceType;
 
   _Temporary_buffer<_ForwardIterator, _ValueType> __buf(__first, __last);
-  if (__buf.size() > 0)
 	return
 	  std::__stable_partition_adaptive(__first, __last, __pred,
 	   _DistanceType(__buf.requested_size()),
 	   __buf.begin(),
 	   _DistanceType(__buf.size()));
-  else
-	return
-	  std::__inplace_stable_partition(__first, __pred,
-	  _DistanceType(__buf.requested_size()));
 }
 
   /**
@@ -2471,6 +2444,7 @@
  __gnu_cxx::__ops::__val_comp_iter(__comp));
 	  __len11 = std::distance(__first, __first_cut);
 	}
+
 	  _BidirectionalIterator __new_middle
 	= std::__rotate_adaptive(__first_cut, __middle, __second_cut,
  __len1 - __len11, __len22, __buffer,
@@ -2496,6 +2470,7 @@
 {
   if (__len1 == 0 || __len2 == 0)
 	return;
+
   if (__len1 + __len2 == 2)
 	{
 	  if (__comp(__middle, __first))
@@ -2502,6

Re: debug container patch

2014-05-02 Thread François Dumont

Hi Jonathan

I just wanted to make sure that you are aware that I preferred to 
wait for another validation of the small modification I have done.


François


On 28/04/2014 23:07, François Dumont wrote:

On 27/04/2014 15:39, Jonathan Wakely wrote:

On 17/04/14 22:43 +0200, François Dumont wrote:

Hi

   Here is a patch to globally enhance debug containers implementation.


François, sorry for the delay, this is a large patch and I wanted to
give it the time it deserves to review properly.


No problem, I see that there are a lot of proposals lately.



I understand why this is needed, but it changes the layout of the
classes in very significant ways, meaning the debug containers will
not be compatible across GCC releases. I'm OK with that now, but from
the next major GCC release I'd like to avoid that in future.


I remember Paolo saying that there is no ABI guarantee for debug mode, 
which is why I didn't hesitate to make this proposal. Will there be 
one in the future ? I also plan breaking changes for profile mode to 
fix its very bad performance.




   I noticed also that in std/c++11/debug.cc we have some methods 
qualified with noexcept while in a C++03 user code those methods 
will have a throw() qualification. Is that fine ?


As I said in my last mail, yes, those specifications are compatible.
But I don't think your changes are doing what you think they are doing
in all cases. Using _GLIBCXX_NOEXCEPT does not expand to throw() in
C++03 mode, it expands to nothing.


Yes, I discovered this difference in one of your recent mails.




   * include/debug/safe_unordered_container.tcc


N.B. This file has no changes listed in the changelog entry.


I reviewed the ChangeLog and limited modifications like in this file. 
Note however that the patch has been generated with the '-x -b' option 
to hide whitespace modifications. I cleaned up the use of whitespace 
characters in impacted files, replaced some spaces with tabs and 
removed useless whitespace.





@@ -69,8 +75,26 @@

  // 23.2.1.1 construct/copy/destroy:

-  deque() : _Base() { }
+#if __cplusplus < 201103L
+  deque()
+  : _Base() { }

+  deque(const deque& __x)
+  : _Base(__x) { }
+
+  ~deque() _GLIBCXX_NOEXCEPT { }


In C++03 mode the _GLIBCXX_NOEXCEPT macro expands to an empty string,
so it is useless in this chunk of code, which is only compiled for
C++03 mode. It should probably just be removed here (and in all the
other debug containers which use it in C++03-only code).


Ok, I cleaned those. Did you mean removing the whole explicit 
destructor ? Is it a coding standard to always explicitly implement 
the destructor, or just a way to have Doxygen generate documentation ?



+ * before-begin ownership.*/

+   template<typename _SafeSequence>
+    void
+    _Safe_forward_list<_SafeSequence>::
+    _M_swap(_Safe_sequence_base& __other) noexcept
+    {
+      __gnu_cxx::__scoped_lock sentry(_M_this()._M_get_mutex());


Shouldn't we be locking both containers' mutexes here?
As we do in src/c++11/debug.cc


Good point, not a regression but nice to fix in this patch.




+ forward_list(forward_list&& __list, const allocator_type& __al)
+: _Safe(std::move(__list), __al),
+  _Base(std::move(__list), __al)
+  { }


This makes me feel uneasy, seeing a moved-from object being used
again, but I don't think changing it to use static_casts to the two
base classes would look better, so let's leave it like that.


That indeed looks scary; we replaced it with:

  forward_list(forward_list&& __list, const allocator_type& __al)
    : _Safe(std::move(__list._M_safe()), __al),
      _Base(std::move(__list._M_base()), __al)
    { }

It makes it clearer that we move each distinct part.
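
A stand-alone illustration of the point (toy types, not the debug
containers): each base is built from its own slice of the argument, which
the explicit form spells out.

#include <string>
#include <utility>

struct Safe { std::string s; };
struct Base { std::string b; };

struct List : Safe, Base
{
  List() { }
  // Moving distinct subobjects: neither initializer can steal what the
  // other one needs, whatever the initialization order.
  List(List&& l)
  : Safe(std::move(static_cast<Safe&>(l))),
    Base(std::move(static_cast<Base&>(l)))
  { }
};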




Index: include/debug/safe_base.h
===
--- include/debug/safe_base.h(revision 209446)
+++ include/debug/safe_base.h(working copy)
@@ -188,22 +188,18 @@

  protected:
// Initialize with a version number of 1 and no iterators
-_Safe_sequence_base()
+_Safe_sequence_base() _GLIBCXX_NOEXCEPT


This use of _GLIBCXX_NOEXCEPT are correct, if the intention is to be
noexcept in C++11 and have no exception specification in C++98/C++03.


Yes, I preferred to use default implementations for special functions in 
C++11, so I qualified as many things as possible noexcept so that the 
resulting noexcept qualification depends only on the normal mode 
noexcept qualification.





: _M_iterators(0), _M_const_iterators(0), _M_version(1)
{ }

#if __cplusplus >= 201103L
_Safe_sequence_base(const _Safe_sequence_base&) noexcept
  : _Safe_sequence_base() { }
-
-_Safe_sequence_base(_Safe_sequence_base&& __x) noexcept
-  : _Safe_sequence_base()
-{ _M_swap(__x); }
#endif

/** Notify all iterators that reference this sequence that the
sequence is being destroyed. */
-~_Safe_sequence_base()
+~_Safe_sequence_base() _GLIBCXX_NOEXCEPT


This is redundant. In C++03 the macro expands to nothing

Re: profile mode maintenance patch

2014-05-12 Thread François Dumont

On 12/05/2014 22:42, Paolo Carlini wrote:

Hi,

On 05/12/2014 10:14 PM, François Dumont wrote:
Regarding Makefile.in, I missed it last time. I moved to a new system 
lately, an Ubuntu-based one, and still need to find out what version 
of automake/autoreconf I need to install. For the moment I have 
updated Makefile.in manually.

Isn't this clear enough

http://gcc.gnu.org/install/prerequisites.html

?

Paolo.


Perfectly clear, thanks.

François



[patch] libstdc++/61143 make unordered containers usable after move

2014-05-15 Thread François Dumont

Hi

Here is a proposal to fix PR 61143. As explained in the PR, I 
finally prefer to leave the container in a valid state, that is to say 
with a non-zero number of buckets (namely 1) after a move. This 
bucket is stored directly in the container, so it is not allocated, which 
lets the move operations stay noexcept qualified. With this evolution we 
could even make the default constructor noexcept, but I don't think it has much interest.
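
The 0-modulus point is easy to see in isolation (illustrative only, not
part of the patch itself):

#include <cstddef>

std::size_t
bucket_index(std::size_t hash, std::size_t bucket_count)
{
  // Undefined behaviour if bucket_count == 0, so every caller would need
  // a check; keeping one in-object bucket guarantees bucket_count >= 1.
  return hash % bucket_count;
}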


2014-05-15  François Dumont  fdum...@gcc.gnu.org

PR libstdc++/61143
* include/bits/hashtable.h: Fix move semantic to leave hashtable in a
usable state.
* testsuite/23_containers/unordered_set/61143.cc: New.

I ran the unordered containers tests for the moment with no errors. I 
will run the others before committing.


François
Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 210479)
+++ include/bits/hashtable.h	(working copy)
@@ -316,14 +316,37 @@
   size_type			_M_element_count;
   _RehashPolicy		_M_rehash_policy;
 
+  // A single bucket used when only need 1 bucket. After move the hashtable
+  // is left with only 1 bucket which is not allocated so that we can have a
+  // noexcept move constructor.
+  // Note that we can't leave hashtable with 0 bucket without adding
+  // numerous checks in the code to avoid 0 modulus.
+  __bucket_type		_M_single_bucket;
+
   __hashtable_alloc&
   _M_base_alloc() { return *this; }
 
-  using __hashtable_alloc::_M_deallocate_buckets;
+  __bucket_type*
+  _M_allocate_buckets(size_type __n)
+  {
+	if (__builtin_expect(__n == 1, false))
+	  return &_M_single_bucket;
 
+	return __hashtable_alloc::_M_allocate_buckets(__n);
+  }
+
    void
+  _M_deallocate_buckets(__bucket_type* __bkts, size_type __n)
+  {
+	if (__builtin_expect(__bkts == &_M_single_bucket, false))
+	  return;
+
+	__hashtable_alloc::_M_deallocate_buckets(__bkts, __n);
+  }
+
+  void
    _M_deallocate_buckets()
-  { this->_M_deallocate_buckets(_M_buckets, _M_bucket_count); }
+  { _M_deallocate_buckets(_M_buckets, _M_bucket_count); }
 
    // Gets bucket begin, deals with the fact that non-empty buckets contain
    // their before begin node.
@@ -703,11 +726,7 @@
 
    size_type
    erase(const key_type& __k)
-  {
-	if (__builtin_expect(_M_bucket_count == 0, false))
-	  return 0;
-	return _M_erase(__unique_keys(), __k);
-  }
+  { return _M_erase(__unique_keys(), __k); }
 
   iterator
   erase(const_iterator, const_iterator);
@@ -768,7 +787,7 @@
   _M_rehash_policy()
 {
   _M_bucket_count = _M_rehash_policy._M_next_bkt(__bucket_hint);
-  _M_buckets = this->_M_allocate_buckets(_M_bucket_count);
+  _M_buckets = _M_allocate_buckets(_M_bucket_count);
 }
 
   template<typename _Key, typename _Value,
@@ -796,7 +815,7 @@
 	std::max(_M_rehash_policy._M_bkt_for_elements(__nb_elems),
 		 __bucket_hint));
 
-	_M_buckets = this->_M_allocate_buckets(_M_bucket_count);
+	_M_buckets = _M_allocate_buckets(_M_bucket_count);
 	__try
 	  {
 	for (; __f != __l; ++__f)
@@ -833,9 +852,9 @@
 	  {
 		// Replacement allocator cannot free existing storage.
 		this->_M_deallocate_nodes(_M_begin());
-		if (__builtin_expect(_M_bucket_count != 0, true))
-		  _M_deallocate_buckets();
-		_M_reset();
+		_M_before_begin._M_nxt = nullptr;
+		_M_deallocate_buckets();
+		_M_buckets = nullptr;
 		std::__alloc_on_copy(__this_alloc, __that_alloc);
 		__hashtable_base::operator=(__ht);
 		_M_bucket_count = __ht._M_bucket_count;
@@ -867,7 +886,7 @@
 	if (_M_bucket_count != __ht._M_bucket_count)
 	  {
 	__former_buckets = _M_buckets;
-	_M_buckets = this->_M_allocate_buckets(__ht._M_bucket_count);
+	_M_buckets = _M_allocate_buckets(__ht._M_bucket_count);
 	_M_bucket_count = __ht._M_bucket_count;
 	  }
 	else
@@ -885,8 +904,7 @@
 		  [__roan](const __node_type* __n)
 		  { return __roan(__n->_M_v()); });
 	if (__former_buckets)
-	  this->_M_deallocate_buckets(__former_buckets,
-	  __former_bucket_count);
+	  _M_deallocate_buckets(__former_buckets, __former_bucket_count);
 	  }
 	__catch(...)
 	  {
@@ -917,7 +935,7 @@
   {
 	__bucket_type* __buckets = nullptr;
 	if (!_M_buckets)
-	  _M_buckets = __buckets = this->_M_allocate_buckets(_M_bucket_count);
+	  _M_buckets = __buckets = _M_allocate_buckets(_M_bucket_count);
 
 	__try
 	  {
@@ -964,8 +982,9 @@
 _M_reset() noexcept
 {
   _M_rehash_policy._M_reset();
-  _M_bucket_count = 0;
-  _M_buckets = nullptr;
+  _M_bucket_count = 1;
+  _M_buckets = &_M_single_bucket;
+  _M_single_bucket = nullptr;
   _M_before_begin._M_nxt = nullptr;
   _M_element_count = 0;
 }
@@ -980,12 +999,16 @@
 _M_move_assign(_Hashtable&& __ht, std::true_type)
 {
   this->_M_deallocate_nodes(_M_begin());
-  if (__builtin_expect(_M_bucket_count != 0, true

Re: [patch] libstdc++/61143 make unordered containers usable after move

2014-05-19 Thread François Dumont

On 15/05/2014 22:52, Jonathan Wakely wrote:

On 15/05/14 22:20 +0200, François Dumont wrote:

Hi

   Here is a proposal to fix PR 61143. As explained in the PR, I 
finally prefer to leave the container in a valid state, that is to say 
with a non-zero number of buckets (namely 1) after a move. 
This bucket is stored directly in the container, so it is not allocated, 
which lets the move operations stay noexcept qualified.


Thanks for fixing this, I like the direction very much. I have a few
comments below ...

With this evolution we could even make the default constructor 
noexcept but I don't think it has any interest.


I tend to agree with Paolo that a noexcept default constructor might
be useful, but let's fix the bug first and consider that later.


ok, we will have to review the hint values but it should be easy.




Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h(revision 210479)
+++ include/bits/hashtable.h(working copy)
@@ -316,14 +316,37 @@
  size_type_M_element_count;
  _RehashPolicy_M_rehash_policy;

+  // A single bucket used when only need 1 bucket. After move the hashtable
+  // is left with only 1 bucket which is not allocated so that we can have a
+  // noexcept move constructor.
+  // Note that we can't leave hashtable with 0 bucket without adding
+  // numerous checks in the code to avoid 0 modulus.
+  __bucket_type		_M_single_bucket;


Does this get initialized in the constructors?
Would it make sense to give it an initializer?

 __bucket_type		_M_single_bucket = nullptr;


This bucket replaces those normally allocated, and when they are 
allocated they are zero-initialized. So, you were right, there was one 
place where this initialization was missing, which is fixed in this new 
patch. So I don't think this additional initialization is necessary.





@@ -980,12 +999,16 @@
_M_move_assign(_Hashtable&& __ht, std::true_type)
{
  this->_M_deallocate_nodes(_M_begin());
-  if (__builtin_expect(_M_bucket_count != 0, true))
-    _M_deallocate_buckets();
-
+  _M_deallocate_buckets();
  __hashtable_base::operator=(std::move(__ht));
  _M_rehash_policy = __ht._M_rehash_policy;
-  _M_buckets = __ht._M_buckets;
+  if (__builtin_expect(__ht._M_buckets != &__ht._M_single_bucket, true))
+    _M_buckets = __ht._M_buckets;


What is the value of this->_M_single_bucket now?

Should it be set to nullptr, if only to help debugging?


We are not resetting buckets to null when rehashing so unless I add more 
checks I won't be able to reset it each time.




+  if (__builtin_expect(__ht._M_buckets == &__ht._M_single_bucket, false))


This check appears in a few places, I wonder if it is worth creating a
private member function to hide the details:

 bool _M_moved_from() const noexcept
 {
    return __builtin_expect(_M_buckets == &_M_single_bucket, false);
 }

Then just test if (__ht._M_moved_from())

Usually I would think the __builtin_expect should not be inside the
member function, so the caller decides what the expected result is,
but I think in all cases the result is expected to be false. That
matches how move semantics are designed: the object that gets moved
from is expected to be going out of scope, and so will be reused in a
minority of cases.
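
As a stand-alone reminder (not from the patch), the hint changes only the
expected branch layout, never the result:

bool is_unlikely_null(void* p)
{
  // Evaluates to the same bool either way; the compiler merely moves the
  // unlikely branch of callers out of the hot path.
  return __builtin_expect(p == 0, false);
}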


@@ -1139,7 +1170,14 @@
{
  if (__ht._M_node_allocator() == this->_M_node_allocator())
    {
-      _M_buckets = __ht._M_buckets;
+      if (__builtin_expect(__ht._M_buckets == &__ht._M_single_bucket, false))


This could be if (__ht._M_moved_from())


I hesitated to do so and finally did. I only prefer 
_M_uses_single_bucket as we might not limit its usage to moved-from instances.





@@ -1193,11 +1231,21 @@
  std::swap(_M_bucket_count, __x._M_bucket_count);
  std::swap(_M_before_begin._M_nxt, __x._M_before_begin._M_nxt);
  std::swap(_M_element_count, __x._M_element_count);
+  std::swap(_M_single_bucket, __x._M_single_bucket);

+  // Fix buckets if any is pointing to its single bucket that can't be
+  // swapped.
+  if (_M_buckets == &__x._M_single_bucket)
+    _M_buckets = &_M_single_bucket;
+
+  if (__x._M_buckets == &_M_single_bucket)
+    __x._M_buckets = &__x._M_single_bucket;
+


Does this do more work than necessary, swapping the _M_buckets
members, then updating them again?

How about removing the std::swap(_M_buckets, __x._M_buckets) above and
doing (untested):

  if (this->_M_moved_from())
    {
      if (__x._M_moved_from())
	_M_buckets = &_M_single_bucket;
      else
	_M_buckets = __x._M_buckets;
      __x._M_buckets = &__x._M_single_bucket;
    }
  else if (__x._M_moved_from())
    {
      __x._M_buckets = _M_buckets;
      _M_buckets = &_M_single_bucket;
    }
  else
    std::swap(_M_buckets, __x._M_buckets);

Is that worth it?  I'm not sure.


Yes, with the newly introduced _M_uses_single_bucket I think it is worth it.

Re: [patch] libstdc++/61143 make unordered containers usable after move

2014-05-20 Thread François Dumont

On 20/05/2014 21:36, Jonathan Wakely wrote:


OK. My sketch above avoided calling _M_moved_from() more than once per
object, but the compiler should be able to optimise your version to
avoid multiple calls anyway.


Here is the new patch limited to what I really want to commit this time.


Great. Please commit to trunk and the 4.9 branch - thanks!


As I have integrated your remarks I won't have time to commit this 
evening. So I submit this update here, just in case you want to see it 
again and I will commit tomorrow.


I finally used a simplified version of your sketch in the swap 
implementation.


François

Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 210657)
+++ include/bits/hashtable.h	(working copy)
@@ -316,14 +316,49 @@
   size_type			_M_element_count;
   _RehashPolicy		_M_rehash_policy;
 
+  // A single bucket used when only 1 bucket is needed. Especially
+  // interesting in move semantic to leave hashtable with only 1 bucket
+  // which is not allocated so that we can have those operations noexcept
+  // qualified.
+  // Note that we can't leave hashtable with 0 bucket without adding
+  // numerous checks in the code to avoid 0 modulus.
+  __bucket_type		_M_single_bucket;
+
+  bool
+  _M_uses_single_bucket(__bucket_type* __bkts) const
+  { return __builtin_expect(__bkts == &_M_single_bucket, false); }
+
+  bool
+  _M_uses_single_bucket() const
+  { return _M_uses_single_bucket(_M_buckets); }
+
   __hashtable_alloc&
   _M_base_alloc() { return *this; }
 
-  using __hashtable_alloc::_M_deallocate_buckets;
+  __bucket_type*
+  _M_allocate_buckets(size_type __n)
+  {
+	if (__builtin_expect(__n == 1, false))
+	  {
+	_M_single_bucket = nullptr;
+	    return &_M_single_bucket;
+	  }
 
+	return __hashtable_alloc::_M_allocate_buckets(__n);
+  }
+
   void
+  _M_deallocate_buckets(__bucket_type* __bkts, size_type __n)
+  {
+	if (_M_uses_single_bucket(__bkts))
+	  return;
+
+	__hashtable_alloc::_M_deallocate_buckets(__bkts, __n);
+  }
+
+  void
   _M_deallocate_buckets()
-  { this->_M_deallocate_buckets(_M_buckets, _M_bucket_count); }
+  { _M_deallocate_buckets(_M_buckets, _M_bucket_count); }
 
   // Gets bucket begin, deals with the fact that non-empty buckets contain
   // their before begin node.
@@ -703,11 +738,7 @@
 
   size_type
   erase(const key_type& __k)
-  {
-	if (__builtin_expect(_M_bucket_count == 0, false))
-	  return 0;
-	return _M_erase(__unique_keys(), __k);
-  }
+  { return _M_erase(__unique_keys(), __k); }
 
   iterator
   erase(const_iterator, const_iterator);
@@ -768,7 +799,7 @@
   _M_rehash_policy()
 {
   _M_bucket_count = _M_rehash_policy._M_next_bkt(__bucket_hint);
-  _M_buckets = this->_M_allocate_buckets(_M_bucket_count);
+  _M_buckets = _M_allocate_buckets(_M_bucket_count);
 }
 
   template<typename _Key, typename _Value,
@@ -796,7 +827,7 @@
 	std::max(_M_rehash_policy._M_bkt_for_elements(__nb_elems),
 		 __bucket_hint));
 
-	_M_buckets = this->_M_allocate_buckets(_M_bucket_count);
+	_M_buckets = _M_allocate_buckets(_M_bucket_count);
 	__try
 	  {
 	for (; __f != __l; ++__f)
@@ -833,9 +864,9 @@
 	  {
 		// Replacement allocator cannot free existing storage.
 		this->_M_deallocate_nodes(_M_begin());
-		if (__builtin_expect(_M_bucket_count != 0, true))
-		  _M_deallocate_buckets();
-		_M_reset();
+		_M_before_begin._M_nxt = nullptr;
+		_M_deallocate_buckets();
+		_M_buckets = nullptr;
 		std::__alloc_on_copy(__this_alloc, __that_alloc);
 		__hashtable_base::operator=(__ht);
 		_M_bucket_count = __ht._M_bucket_count;
@@ -867,7 +898,7 @@
 	if (_M_bucket_count != __ht._M_bucket_count)
 	  {
 	__former_buckets = _M_buckets;
-	_M_buckets = this->_M_allocate_buckets(__ht._M_bucket_count);
+	_M_buckets = _M_allocate_buckets(__ht._M_bucket_count);
 	_M_bucket_count = __ht._M_bucket_count;
 	  }
 	else
@@ -885,8 +916,7 @@
 		  [__roan](const __node_type* __n)
 		  { return __roan(__n->_M_v()); });
 	if (__former_buckets)
-	  this->_M_deallocate_buckets(__former_buckets,
-	  __former_bucket_count);
+	  _M_deallocate_buckets(__former_buckets, __former_bucket_count);
 	  }
 	__catch(...)
 	  {
@@ -917,7 +947,7 @@
   {
 	__bucket_type* __buckets = nullptr;
 	if (!_M_buckets)
-	  _M_buckets = __buckets = this->_M_allocate_buckets(_M_bucket_count);
+	  _M_buckets = __buckets = _M_allocate_buckets(_M_bucket_count);
 
 	__try
 	  {
@@ -964,8 +994,9 @@
 _M_reset() noexcept
 {
   _M_rehash_policy._M_reset();
-  _M_bucket_count = 0;
-  _M_buckets = nullptr;
+  _M_bucket_count = 1;
+  _M_single_bucket = nullptr;
+  _M_buckets = &_M_single_bucket;
   _M_before_begin._M_nxt = nullptr;
   _M_element_count = 0;
 }
@@ -980,12 

libstdc++ automake version

2014-05-23 Thread François Dumont

On 12/05/2014 22:42, Paolo Carlini wrote:

Hi,

On 05/12/2014 10:14 PM, François Dumont wrote:
Regarding Makefile.in, I missed it last time. I moved to a new system 
lately, an Ubuntu-based one, and still need to find out what version 
of automake/autoreconf I need to install. For the moment I have 
updated Makefile.in manually.

Isn't this clear enough

http://gcc.gnu.org/install/prerequisites.html

?

Paolo.


In fact not that much. It is saying:

For directories that use automake, GCC requires the latest release in 
the 1.11 series, which is currently 1.11.1. When regenerating a 
directory to a newer version, please update all the directories using an 
older 1.11 to the latest released version.


But now the latest 1.11 version is at least 1.11.6, which is the version 
Ubuntu proposes when installing the automake1.11 package. And considering 
all the impact it has on the Makefile.in I guess it is not correct, is it ? 
Looks like I will have to build automake 1.11.1 myself otherwise.


François


Re: profile mode maintenance patch

2014-05-24 Thread François Dumont

On 24/05/2014 13:33, Jonathan Wakely wrote:

On 12/05/14 22:14 +0200, François Dumont wrote:

Hi

   Here is a maintenance patch for profile mode. It does:

- Use inheritance to limit duplication of code in constructors to 
register with the different profiling mode diagnostics data structures.
- Remove much code, keeping only instrumented methods or methods where 
the container type itself appears in the signature.

- Extend the map-to-unordered_map diagnostic to all ordered containers.

   And of course code cleanup and use of default implementations for 
special methods as much as possible.


   Regarding Makefile.in, I missed it last time. I moved to a new system 
lately, an Ubuntu-based one, and still need to find out what version 
of automake/autoreconf I need to install. For the moment I have 
updated Makefile.in manually.


This is OK

(I'm in favour of any change that reduces the amount of code in the
Profile Mode :)

Please correct a minor spelling mistake (in two places) before
committing:


+  /** If hint is used we consider that the map and unordered_map
+   * operations have equivalent insertion cost so we do not update metrics
+   * about it.
+   * Note that to find out if hint has been used is libstdc++
+   * implementation dependant.


s/dependant/dependent/

Thanks!


Done, but I forgot to fix the spelling. I will fix it in a future patch.

François



[patch] No allocation for empty unordered containers

2014-06-03 Thread François Dumont

Hi

Thanks to the single bucket introduced to make move semantics 
noexcept we can also avoid some over-allocations. Here is a patch to 
avoid any allocation on default instantiation, on the range constructor 
when the range is empty, and on construction from an initializer list 
when this list is empty too. I had to change all default hint values to 
0 so that, if this value is used, the rehash policy's next-bucket 
computation returns 1 bucket.
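
The intended behaviour, sketched (a simplification, not the real
_Prime_rehash_policy code):

#include <cstddef>

// Hypothetical stand-in for _M_next_bkt: a hint of 0 must still yield a
// usable count of 1, matching the non-allocated single bucket.
std::size_t
next_bucket_count(std::size_t hint)
{
  if (hint == 0)
    return 1;           // empty container: keep the in-object bucket
  return hint;          // the real policy rounds up to a prime here
}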


I don't know if you had in mind to noexcept qualify the default 
constructor, but it would mean having a real default constructor and 
another one to deal with the hint, which wouldn't match the Standard, 
so no noexcept qualification at the moment.


Tested under Linux x86_64, normal, debug and profile modes.


2014-06-03  François Dumont fdum...@gcc.gnu.org

* include/bits/hashtable.h: Make use of the internal single bucket to
limit allocation as long as the container remains empty.
* include/bits/unordered_map.h: Set default bucket hint to 0 in
constructors to avoid allocation.
* include/bits/unordered_set.h: Likewise.
* include/debug/unordered_map: Likewise.
* include/debug/unordered_set: Likewise.
* include/profile/unordered_map: Likewise.
* include/profile/unordered_set: Likewise.
* src/c++11/hashtable_c++0x.cc (_Prime_rehash_policy::_M_next_bkt):
Returns 1 for hint 0.
* testsuite/23_containers/unordered_map/allocator/
empty_instantiation.cc: New.
* testsuite/23_containers/unordered_multimap/allocator/
empty_instantiation.cc: New.
* testsuite/23_containers/unordered_set/allocator/
empty_instantiation.cc: New.
* testsuite/23_containers/unordered_multiset/allocator/
empty_instantiation.cc: New.

Ok to commit ?

François


Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 211144)
+++ include/bits/hashtable.h	(working copy)
@@ -407,12 +407,12 @@
   // Use delegating constructors.
   explicit
   _Hashtable(const allocator_type& __a)
-  : _Hashtable(10, _H1(), _H2(), _Hash(), key_equal(),
+  : _Hashtable(0, _H1(), _H2(), _Hash(), key_equal(),
 		   __key_extract(), __a)
   { }
 
   explicit
-  _Hashtable(size_type __n = 10,
+  _Hashtable(size_type __n = 0,
 		 const _H1& __hf = _H1(),
 		 const key_equal& __eql = key_equal(),
 		 const allocator_type& __a = allocator_type())
@@ -792,14 +792,18 @@
 	   const _Equal& __eq, const _ExtractKey& __exk,
 	   const allocator_type& __a)
 : __hashtable_base(__exk, __h1, __h2, __h, __eq),
-  __map_base(),
-  __rehash_base(),
   __hashtable_alloc(__node_alloc_type(__a)),
+  _M_buckets(&_M_single_bucket),
+  _M_bucket_count(1),
   _M_element_count(0),
-  _M_rehash_policy()
+  _M_single_bucket(nullptr)
 {
-  _M_bucket_count = _M_rehash_policy._M_next_bkt(__bucket_hint);
-  _M_buckets = _M_allocate_buckets(_M_bucket_count);
+  auto __bkt_count = _M_rehash_policy._M_next_bkt(__bucket_hint);
+  if (_M_bucket_count != __bkt_count)
+	{
+	  _M_bucket_count = __bkt_count;
+	  _M_buckets = _M_allocate_buckets(_M_bucket_count);
+	}
 }
 
   template<typename _Key, typename _Value,
@@ -815,19 +819,24 @@
 		 const _Equal& __eq, const _ExtractKey& __exk,
 		 const allocator_type& __a)
   : __hashtable_base(__exk, __h1, __h2, __h, __eq),
-	__map_base(),
-	__rehash_base(),
 	__hashtable_alloc(__node_alloc_type(__a)),
+	_M_buckets(&_M_single_bucket),
+	_M_bucket_count(1),
 	_M_element_count(0),
-	_M_rehash_policy()
+	_M_single_bucket(nullptr)
   {
 	auto __nb_elems = __detail::__distance_fw(__f, __l);
-	_M_bucket_count =
+	auto __bkt_count =
 	  _M_rehash_policy._M_next_bkt(
 	std::max(_M_rehash_policy._M_bkt_for_elements(__nb_elems),
 		 __bucket_hint));
+ 
+	if (_M_bucket_count != __bkt_count)
+	  {
+	_M_bucket_count = __bkt_count;
+	_M_buckets = _M_allocate_buckets(_M_bucket_count);
+	  }
 
-	_M_buckets = _M_allocate_buckets(_M_bucket_count);
 	__try
 	  {
 	for (; __f != __l; ++__f)
@@ -864,14 +873,15 @@
 	  {
 		// Replacement allocator cannot free existing storage.
 		this->_M_deallocate_nodes(_M_begin());
-		_M_before_begin._M_nxt = nullptr;
 		_M_deallocate_buckets();
-		_M_buckets = nullptr;
+		__hashtable_base::operator=(__ht);
 		std::__alloc_on_copy(__this_alloc, __that_alloc);
-		__hashtable_base::operator=(__ht);
-		_M_bucket_count = __ht._M_bucket_count;
+		_M_buckets = &_M_single_bucket;
+		_M_bucket_count = 1;
+		_M_before_begin._M_nxt = nullptr;
 		_M_element_count = __ht._M_element_count;
 		_M_rehash_policy = __ht._M_rehash_policy;
+		_M_single_bucket = nullptr;
 		__try
 		  {
 		_M_assign(__ht,
@@ -946,8 +956,14 @@
   _M_assign(const _Hashtable& __ht, const _NodeGenerator& __node_gen)
   {
 	__bucket_type* __buckets = nullptr;
-	if (!_M_buckets)
-	  _M_buckets = __buckets = _M_allocate_buckets(_M_bucket_count);
+	if (_M_bucket_count != __ht

Re: [patch] libstdc++/29988 Rb_Tree reuse allocated nodes

2014-06-11 Thread François Dumont
For the testsuite allocator I thought that for an internal allocator 
used in our tests it was OK. But alright, I will make it better and 
compatible with SimpleAllocator.



On 11/06/2014 14:02, Jonathan Wakely wrote:

Index: include/bits/stl_tree.h
===
--- include/bits/stl_tree.h(revision 211388)
+++ include/bits/stl_tree.h(working copy)
@@ -330,6 +330,111 @@
   const _Rb_tree_const_iterator<_Val>& __y) _GLIBCXX_NOEXCEPT

{ return __x._M_node != __y._M_node; }

+  // Functor recycling a pool of nodes and using allocation once the pool is
+  // empty.
+  template<typename _RbTree>
+    struct _Rb_tree_reuse_or_alloc_node
+    {


Is there a reason to define this and _Rb_tree_alloc_node as
namespace-scope class templates, rather than non-template members of
_Rb_tree?
Just to limit the amount of code within _Rb_tree. I wanted to do 
something like in _Hashtable where much code is isolated in different 
types aggregated to build the final _Hashtable type. But it looks like 
you prefer it nested so I will do so.


They wouldn't need to be friends if they were members, and you
wouldn't need a typedef for _Rb_tree_alloc_node<_Rb_tree> because it
would just be called _Rb_tree_alloc_node.


+private:
+  typedef _RbTree __rb_tree;


This typedef doesn't seem useful, it's only used once and is more
characters than _RbTree. If the class was a member of _Rb_tree it
could just use that name.


+      typedef _Rb_tree_node<typename _RbTree::value_type> __node_type;


If it was a member the value_type name would be in scope.


+public:
+      _Rb_tree_reuse_or_alloc_node(const _Rb_tree_node_base& __header,
+				   __rb_tree& __t)
+      : _M_root(__header._M_parent), _M_nodes(__header._M_right), _M_t(__t)
+      {
+	if (_M_root)
+	  _M_root->_M_parent = 0;
+	else
+	  _M_nodes = 0;
+      }
+
+  ~_Rb_tree_reuse_or_alloc_node()
+      { _M_t._M_erase(static_cast<__node_type*>(_M_root)); }


This type needs to be non-copyable, or unintentional copies would
erase all the nodes and leave nothing to be reused (which might be
difficult to detect as it would only affect performance, not
correctness).
Yes, sure, like in the equivalent _Hashtable types. I guess I 
didn't do so here because we might not be in C++11, so it is not as 
convenient to forbid its usage.
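
For completeness, the usual idioms in both dialects (a generic sketch, not
the patch code):

struct _NonCopyable
{
  _NonCopyable() { }
#if __cplusplus >= 201103L
  // C++11: deleting the members documents the intent directly.
  _NonCopyable(const _NonCopyable&) = delete;
  _NonCopyable& operator=(const _NonCopyable&) = delete;
#else
private:
  // C++03: declare private and leave undefined; misuse fails to link.
  _NonCopyable(const _NonCopyable&);
  _NonCopyable& operator=(const _NonCopyable&);
#endif
};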




+      template<typename _Arg>
+	__node_type*
+#if __cplusplus < 201103L
+	operator()(const _Arg& __arg) const
+#else
+	operator()(_Arg&& __arg) const
+#endif


Does this need to be const?

I don't think it does (if you change the function templates taking a
const _NodeGen& to take _NodeGen& instead).


Sometimes I used lambdas; I am not sure, but I think it forced me to take 
functors as const lvalue references, hence the const qualification on the 
operator.




That means the members of this type don't need to be 'mutable'.



+  typedef _Rb_tree_node_base __node_base;


I'm not sure this typedef is useful either, it just means an extra
name to remember when reading the code, when _Rb_tree_node_base is
already in scope and probably understood by readers of the code.


+  mutable __node_base* _M_root;
+  mutable __node_base* _M_nodes;


These members should be of type _Rb_tree::_Base_ptr, not __node_base*,
because that's the type _Rb_tree::_M_right is declared as.

I have a work-in-progress patch to make _Rb_tree use
allocator_traits_Node_allocator::pointer for _Link_type, which
might not be the same type as _Rb_tree_nodeVal*, so it is important
to consistently use the _Base_ptr and _Link_type typedefs not the
underlying types they refer to (because those underlying types are
going to change soon).


+      _RbTree& _M_t;
+};
+
+  // Functor similar to the previous one but without any pool of node to
+  // recycle.
+  template<typename _RbTree>
+    struct _Rb_tree_alloc_node


Again, I think this should be a member of _Rb_tree.


+{
+private:
+      typedef _Rb_tree_node<typename _RbTree::value_type> __node_type;


This typedef should be removed.


+
+public:
+      _Rb_tree_alloc_node(_RbTree& __t)
+      : _M_t(__t) { }
+
+      template<typename _Arg>
+	__node_type*


This function should return _Rb_tree::_Link_type because that's what
_M_create_node returns.


+#if __cplusplus < 201103L
+	operator()(const _Arg& __arg) const
+#else
+	operator()(_Arg&& __arg) const
+#endif
+	{ return _M_t._M_create_node(_GLIBCXX_FORWARD(_Arg, __arg)); }
@@ -349,6 +454,12 @@
	rebind<_Rb_tree_node<_Val> >::other _Node_allocator;

      typedef __gnu_cxx::__alloc_traits<_Node_allocator> _Alloc_traits;
+      template<typename _RT>
+	friend struct _Rb_tree_alloc_node;
+      typedef _Rb_tree_alloc_node<_Rb_tree> __alloc_node_t;
+      template<typename _RT>
+	friend struct _Rb_tree_reuse_or_alloc_node;
+      typedef _Rb_tree_reuse_or_alloc_node<_Rb_tree> __reuse_or_alloc_node_t;


These friend declarations and typedefs become unnecessary.


Re: profile mode fix

2014-01-27 Thread François Dumont
Indeed, default constructor and copy constructor shall not be noexcept 
qualified.


IMO we should be able to make the move constructor noexcept by using a 
special allocator for the underlying unordered_map that would allow 
replacing an entry with another one without requiring a 
deallocate/allocate. It would be the same kind of allocator, keeping a 
released instance in cache, that you already talked about to enhance 
std::deque behavior, especially when used in a std::queue.
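
A rough sketch of that cached-slot allocator idea (my reading of it, not
an existing libstdc++ component):

#include <cstddef>
#include <new>

template<typename T>
struct caching_allocator
{
  typedef T value_type;
  T* cached;

  caching_allocator() : cached(0) { }
  ~caching_allocator()
  { if (cached) ::operator delete(cached); }

  T* allocate(std::size_t n)
  {
    if (n == 1 && cached)
      {
	T* p = cached;
	cached = 0;
	return p;     // reuse the parked node: cannot throw
      }
    return static_cast<T*>(::operator new(n * sizeof(T)));
  }

  void deallocate(T* p, std::size_t n)
  {
    if (n == 1 && !cached)
      cached = p;     // park one node for the next single-node request
    else
      ::operator delete(p);
  }
};

With such an allocator an erase immediately followed by an insert reuses
the parked node, so the insert cannot throw bad_alloc.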


For 4.9 we could consider that this test is not supported in profile 
mode and I will work on it for next version.


François


On 01/26/2014 11:38 AM, Jonathan Wakely wrote:

On 26 January 2014 09:43, François Dumont wrote:

Hi

 This is a patch to fix PR 55033 in profile mode. Like in debug mode it
was missing a noexcept qualifier on the move constructor.

But don't those functions allocate memory? So they can throw.

I agree we want the move constructor to be noexcept anyway, and maybe
the default constructor, but why would we want to lie about the copy
constructor?

I have this patch in my tree that I'm trying to decide whether it
should be committed, but if we make the change we should have a
comment like this:

--- a/libstdc++-v3/include/profile/unordered_base.h
+++ b/libstdc++-v3/include/profile/unordered_base.h
@@ -160,9 +160,14 @@ namespace __profile
  __profcxx_hashtable_construct(&__uc, __uc.bucket_count());
 __profcxx_hashtable_construct2(&__uc);
}
+
_Unordered_profile(const _Unordered_profile&)
 : _Unordered_profile() { }
-  _Unordered_profile(_Unordered_profile&&)
+
+  // This might actually throw, but for consistency with normal mode
+  // unordered containers we want the noexcept specification, and will
+  // std::terminate() if an exception is thrown.
+  _Unordered_profile(_Unordered_profile&&) noexcept
 : _Unordered_profile() { }

~_Unordered_profile() noexcept





Re: profile mode fix

2014-01-30 Thread François Dumont

On 01/29/2014 09:18 PM, Jonathan Wakely wrote:

On 29 January 2014 20:06, François Dumont frs.dum...@gmail.com wrote:

 Here is the patch that simply considers 55083 as not supported except in
normal mode. This is a temporary workaround for the 4.9 release so I prefer not
to introduce a dg-profile-mode-unsupported or something like that. Those
tests will simply appear as not supported for debug and parallel mode even
if they are, not a big deal, no ?

But with that change we don't find out if those tests regress in debug mode  :-(

I prefer to just add noexcept to the profile mode move constructor,
and if it throws then the program terminates.  If you run out of
memory when using profile mode then terminating seems reasonable to
me;  I don't think people are using profile mode to test how their
programs handle std::bad_alloc.

Put another way, if your program runs out of memory *because* of
profile mode, then the results of the profiling will not give you
useful data about how your program usually behaves. Using profile mode
has altered the behaviour of the program.  So in that situation simply
calling std::terminate() makes sense.


So I'll let you apply your patch with your comment.

Do you think then that using a special allocator implementation 
with a one-element cache, to make sure that a std::unordered_map insert 
preceded by an erase won't throw, could be interesting ? If not I will 
simply remove it from my TODO list.


François



Missing experimental patch bit

2014-03-14 Thread François Dumont

Hi

I just realized that when I committed this:

2014-01-20  François Dumont  fdum...@gcc.gnu.org

* scripts/create_testsuite_files: Add testsuite/experimental in
the list of folders to search for tests.
* include/experimental/string_view
(basic_string_view::operator[]): Comment _GLIBCXX_DEBUG_ASSERT,
incompatible with constexpr qualifier.
(basic_string_view::front()): Likewise.
(basic_string_view::back()): Likewise.
* testsuite/experimental/string_view/element_access/wchar_t/2.cc:
Merge dg-options directives into one.
* testsuite/experimental/string_view/element_access/char/2.cc:
Likewise. Remove invalid experimental namespace scope on
string_view_type.

I forgot to commit the create_testsuite_files script. Is it still ok to 
do so now ?


By the way, is there any info about when stage 1 is going to restart ?

François

Index: scripts/create_testsuite_files
===
--- scripts/create_testsuite_files	(revision 208578)
+++ scripts/create_testsuite_files	(working copy)
@@ -32,7 +32,7 @@
 # This is the ugly version of everything but the current directory.  It's
 # what has to happen when find(1) doesn't support -mindepth, or -xtype.
 dlist=`echo [0-9][0-9]*`
-dlist="$dlist abi backward ext performance tr1 tr2 decimal"
+dlist="$dlist abi backward ext performance tr1 tr2 decimal experimental"
 find $dlist \( -type f -o -type l \) -name "*.cc" -print > $tmp.01
 find $dlist \( -type f -o -type l \) -name "*.c" -print > $tmp.02
 cat  $tmp.01 $tmp.02 | sort > $tmp.1


Fix _Hashtable extension

2014-03-21 Thread François Dumont

Hi

Here is a patch to fix the _Hashtable Standard extension type, which is 
almost unusable at the moment if instantiated with anything else than 
the types used for the std unordered containers, that is to say 
__detail::_Default_ranged_hash and __detail::_Mod_range_hashing.


It is a really safe patch so I would propose it for current trunk 
but at the same time it only impacts a Standard extension and it hasn't 
been reported by anyone so just tell me when to apply it.


2014-03-21  François Dumont fdum...@gcc.gnu.org

* include/bits/hashtable.h (_Hashtable(allocator_type)): Fix call
to delegated constructor.
(_Hashtable(size_type, _H1, key_equal, allocator_type)): Likewise.
(_Hashtable<_It>(_It, _It, size_type, _H1, key_equal, allocator_type)):
Likewise.
(_Hashtable(initializer_list, size_type, _H1, key_equal,
allocator_type)): Likewise.


Tested under Linux x86_64.

François

Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 207322)
+++ include/bits/hashtable.h	(working copy)
@@ -372,9 +372,8 @@
   // Use delegating constructors.
   explicit
   _Hashtable(const allocator_type& __a)
-	: _Hashtable(10, _H1(), __detail::_Mod_range_hashing(),
-		 __detail::_Default_ranged_hash(), key_equal(),
-		 __key_extract(), __a)
+  : _Hashtable(10, _H1(), _H2(), _Hash(), key_equal(),
+		   __key_extract(), __a)
   { }
 
   explicit
@@ -382,8 +381,7 @@
 		 const _H1& __hf = _H1(),
 		 const key_equal& __eql = key_equal(),
 		 const allocator_type& __a = allocator_type())
-  : _Hashtable(__n, __hf, __detail::_Mod_range_hashing(),
-		   __detail::_Default_ranged_hash(), __eql,
+  : _Hashtable(__n, __hf, _H2(), _Hash(), __eql,
 		   __key_extract(), __a)
   { }
 
@@ -393,8 +391,7 @@
 		   const _H1& __hf = _H1(),
 		   const key_equal& __eql = key_equal(),
 		   const allocator_type& __a = allocator_type())
-	: _Hashtable(__f, __l, __n, __hf, __detail::_Mod_range_hashing(),
-		 __detail::_Default_ranged_hash(), __eql,
+	: _Hashtable(__f, __l, __n, __hf, _H2(), _Hash(), __eql,
 		 __key_extract(), __a)
 	{ }
 
@@ -403,9 +400,7 @@
 		 const _H1& __hf = _H1(),
 		 const key_equal& __eql = key_equal(),
 		 const allocator_type& __a = allocator_type())
-  : _Hashtable(__l.begin(), __l.end(), __n, __hf,
-		   __detail::_Mod_range_hashing(),
-		   __detail::_Default_ranged_hash(), __eql,
+  : _Hashtable(__l.begin(), __l.end(), __n, __hf, _H2(), _Hash(), __eql,
 		   __key_extract(), __a)
   { }
 



Re: Fix _Hashtable extension

2014-03-23 Thread François Dumont

On 21/03/2014 23:59, Jonathan Wakely wrote:

On 21/03/14 22:39 +0100, François Dumont wrote:

Hi

   Here is a patch to fix the _Hashtable Standard extension type, which is 
almost unusable at the moment if instantiated with anything else than 
the types used for the std unordered containers, that is to say 
__detail::_Default_ranged_hash and __detail::_Mod_range_hashing.


Good catch.

Also, it seems that this specialization is missing the hasher
typedef:

  /// Specialization: ranged hash function, no caching hash codes.  H1
  /// and H2 are provided but ignored.  We define a dummy hash code type.
  template<typename _Key, typename _Value, typename _ExtractKey,
	   typename _H1, typename _H2, typename _Hash>
    struct _Hash_code_base<_Key, _Value, _ExtractKey, _H1, _H2, _Hash, false>
    : private _Hashtable_ebo_helper<0, _ExtractKey>,
      private _Hashtable_ebo_helper<1, _Hash>
    {

From the comments I think it is intentional, is that right?


It seems intentional to me too even if I haven't written this code. In 
this case the user is supposed to provide a functor that gives the 
bucket index directly from the key, so there is no real hasher even if 
there might be one inside this functor.
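
For illustration, this is roughly what such a ranged hash functor looks
like on the user side (a toy example, not libstdc++ code):

#include <cstddef>

struct direct_bucket_hash
{
  // Maps the key straight to a bucket index in [0, bucket_count);
  // no separate hasher is consulted.
  std::size_t
  operator()(int key, std::size_t bucket_count) const
  { return static_cast<std::size_t>(key) % bucket_count; }
};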


Patch committed.

François



Re: sort_heap complexity guarantee

2014-10-22 Thread François Dumont

Then I think we need this patch, which also fixes other issues.

2014-10-22  François Dumont  fdum...@gcc.gnu.org

* testsuite/25_algorithms/make_heap/complexity.cc: Add missing test
variable.
* testsuite/25_algorithms/sort_heap/complexity.cc: Likewise and use
log2.
* testsuite/25_algorithms/pop_heap/complexity.cc: Likewise and require
normal mode.
* testsuite/25_algorithms/push_heap/complexity.cc: Likewise.

Tested under Linux x86_64.

Ok to commit ?

François


On 20/10/2014 22:48, Marc Glisse wrote:

On Mon, 20 Oct 2014, François Dumont wrote:


On 18/10/2014 09:24, Marc Glisse wrote:

On Mon, 6 Oct 2014, François Dumont wrote:


   * testsuite/25_algorithms/push_heap/complexity.cc: New.


This test is randomly failing in about 1% to 2% of cases.

Is it for a particular platform ? I just ran it thousands of times on 
my Linux box and never experienced any failure.


x86_64-linux-gnu, debian testing

Here is a deterministic version.

By the way, when the standard says log and there isn't an implicit 
O(), it usually means log2.
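
The numbers make the point for the tests below: with 1000 elements a
single push_heap may legitimately perform floor(log2(1000)) == 9
comparisons, while std::log(1000) is only about 6.9, so the natural-log
bound fails intermittently. A quick check of those constants
(illustrative, not part of the patch):

#include <cmath>
#include <cassert>

int main()
{
  assert(std::log(1000.0) < 7.0);   // natural log: bound is too strict
  assert(std::log2(1000.0) > 9.9);  // log2: the bound the standard means
  return 0;
}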




Index: testsuite/25_algorithms/make_heap/complexity.cc
===
--- testsuite/25_algorithms/make_heap/complexity.cc	(revision 216348)
+++ testsuite/25_algorithms/make_heap/complexity.cc	(working copy)
@@ -26,6 +26,8 @@
 
 void test01()
 {
+  bool test __attribute__((unused)) = true;
+
   using __gnu_test::counter_type;
   const std::size_t nb_values = 1000;
 
Index: testsuite/25_algorithms/pop_heap/complexity.cc
===
--- testsuite/25_algorithms/pop_heap/complexity.cc	(revision 216348)
+++ testsuite/25_algorithms/pop_heap/complexity.cc	(working copy)
@@ -15,6 +15,7 @@
 // with this library; see the file COPYING3.  If not see
 // http://www.gnu.org/licenses/.
 
+// { dg-require-normal-mode "" }
 // { dg-options "-std=gnu++11" }
 
 #include <cmath>
@@ -27,6 +28,8 @@
 
 void test01()
 {
+  bool test __attribute__((unused)) = true;
+
   using __gnu_test::counter_type;
   const std::size_t nb_values = 1000;
 
@@ -43,7 +46,7 @@
 
   std::pop_heap(values.begin(), values.end());
 
-  VERIFY( counter_type::less_compare_count <= 2.0 * std::log(nb_values) );
+  VERIFY( counter_type::less_compare_count <= 2.0 * std::log2(nb_values) );
 }
 
 int main()
Index: testsuite/25_algorithms/push_heap/complexity.cc
===
--- testsuite/25_algorithms/push_heap/complexity.cc	(revision 216348)
+++ testsuite/25_algorithms/push_heap/complexity.cc	(working copy)
@@ -15,6 +15,7 @@
 // with this library; see the file COPYING3.  If not see
 // http://www.gnu.org/licenses/.
 
+// { dg-require-normal-mode "" }
 // { dg-options "-std=gnu++11" }
 
 #include <cmath>
@@ -27,6 +28,8 @@
 
 void test01()
 {
+  bool test __attribute__((unused)) = true;
+
   using __gnu_test::counter_type;
   const std::size_t nb_values = 1000;
 
@@ -44,7 +47,7 @@
 
   std::push_heap(values.begin(), values.end());
 
-  VERIFY( counter_type::less_compare_count <= std::log(values.size()) );
+  VERIFY( counter_type::less_compare_count <= std::log2(values.size()) );
 }
 
 int main()
Index: testsuite/25_algorithms/sort_heap/complexity.cc
===
--- testsuite/25_algorithms/sort_heap/complexity.cc	(revision 216348)
+++ testsuite/25_algorithms/sort_heap/complexity.cc	(working copy)
@@ -27,6 +27,8 @@
 
 void test01()
 {
+  bool test __attribute__((unused)) = true;
+
   using __gnu_test::counter_type;
   const std::size_t nb_values = 1000;
 
@@ -43,7 +45,7 @@
 
   std::sort_heap(values.begin(), values.end());
 
-  VERIFY( counter_type::less_compare_count <= 2.0 * nb_values * std::log(nb_values) );
+  VERIFY( counter_type::less_compare_count <= 2.0 * nb_values * std::log2(nb_values) );
 }
 
 int main()


Re: [Bug libstdc++/63698] [5 Regression] std::set leaks nodes on assignment

2014-11-04 Thread François Dumont

Hi

Here is more or less the patch proposed on the ticket with the test 
case also provided in the ticket.


2014-11-04  François Dumont  fdum...@gcc.gnu.org
Jonathan Wakely  jwak...@redhat.com

PR libstdc++/63698
* include/bits/stl_tree.h (_Reuse_or_alloc_node): Simplify constructor.
Always move to the left node if there is one.
* testsuite/23_containers/set/allocator/move_assign.cc (test04): New.

Tested under Linux x86_64, ok to commit ?

François

Index: include/bits/stl_tree.h
===
--- include/bits/stl_tree.h	(revision 217101)
+++ include/bits/stl_tree.h	(working copy)
@@ -359,16 +359,20 @@
   typedef const _Rb_tree_node<_Val>*	_Const_Link_type;
 
 private:
-  // Functor recycling a pool of nodes and using allocation once the pool is
-  // empty.
+  // Functor recycling a pool of nodes and using allocation once the pool
+  // is empty.
   struct _Reuse_or_alloc_node
   {
-	_Reuse_or_alloc_node(const _Rb_tree_node_base& __header,
-			 _Rb_tree& __t)
-	  : _M_root(__header._M_parent), _M_nodes(__header._M_right), _M_t(__t)
+	_Reuse_or_alloc_node(_Rb_tree& __t)
+	  : _M_root(__t._M_root()), _M_nodes(__t._M_rightmost()), _M_t(__t)
 	{
 	  if (_M_root)
-	    _M_root->_M_parent = 0;
+	    {
+	      _M_root->_M_parent = 0;
+
+	      if (_M_nodes->_M_left)
+		_M_nodes = _M_nodes->_M_left;
+	    }
 	  else
 	_M_nodes = 0;
 	}
@@ -420,6 +424,9 @@
 
 		  while (_M_nodes->_M_right)
 			_M_nodes = _M_nodes->_M_right;
+
+		  if (_M_nodes->_M_left)
+			_M_nodes = _M_nodes->_M_left;
 		}
 		}
 	  else // __node is on the left.
@@ -436,7 +443,7 @@
 	_Rb_tree& _M_t;
   };
 
-  // Functor similar to the previous one but without any pool of node to
+  // Functor similar to the previous one but without any pool of nodes to
   // recycle.
   struct _Alloc_node
   {
@@ -1271,7 +1278,7 @@
 
   // Try to move each node reusing existing nodes and copying __x nodes
   // structure.
-  _Reuse_or_alloc_node __roan(_M_impl._M_header, *this);
+  _Reuse_or_alloc_node __roan(*this);
   _M_impl._M_reset();
   if (__x._M_root() != nullptr)
 	{
@@ -1297,7 +1304,7 @@
   _Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::
   _M_assign_unique(_Iterator __first, _Iterator __last)
   {
-	_Reuse_or_alloc_node __roan(this->_M_impl._M_header, *this);
+	_Reuse_or_alloc_node __roan(*this);
 	_M_impl._M_reset();
 	for (; __first != __last; ++__first)
 	  _M_insert_unique_(end(), *__first, __roan);
@@ -1310,7 +1317,7 @@
   _Rb_tree<_Key, _Val, _KeyOfValue, _Compare, _Alloc>::
   _M_assign_equal(_Iterator __first, _Iterator __last)
   {
-	_Reuse_or_alloc_node __roan(this->_M_impl._M_header, *this);
+	_Reuse_or_alloc_node __roan(*this);
 	_M_impl._M_reset();
 	for (; __first != __last; ++__first)
 	  _M_insert_equal_(end(), *__first, __roan);
@@ -1342,7 +1349,7 @@
 	}
 #endif
 
-	  _Reuse_or_alloc_node __roan(this->_M_impl._M_header, *this);
+	  _Reuse_or_alloc_node __roan(*this);
 	  _M_impl._M_reset();
 	  _M_impl._M_key_compare = __x._M_impl._M_key_compare;
 	  if (__x._M_root() != 0)
Index: testsuite/23_containers/set/allocator/move_assign.cc
===
--- testsuite/23_containers/set/allocator/move_assign.cc	(revision 217101)
+++ testsuite/23_containers/set/allocator/move_assign.cc	(working copy)
@@ -18,6 +18,8 @@
 // { dg-options -std=gnu++11 }
 
 #include <set>
+#include <random>
+
 #include <testsuite_hooks.h>
 #include <testsuite_allocator.h>
 
@@ -89,10 +91,43 @@
   VERIFY( tracker_allocator_counter::get_construct_count() == constructs + 2 );
 }
 
+void test04()
+{
+  bool test __attribute__((unused)) = true;
+
+  using namespace __gnu_test;
+
+  typedef tracker_allocator<int> alloc_type;
+  typedef std::set<int, std::less<int>, alloc_type> test_type;
+
+  std::mt19937 rng;
+  std::uniform_int_distribution<int> d;
+  std::uniform_int_distribution<int>::param_type p{0, 100};
+  std::uniform_int_distribution<int>::param_type x{0, 1000};
+
+  for (int i = 0; i < 10; ++i)
+  {
+test_type l, r;
+    for (int n = d(rng, p); n > 0; --n)
+{
+  int i = d(rng, x);
+  l.insert(i);
+  r.insert(i);
+
+  tracker_allocator_counter::reset();
+
+  l = r;
+
+  VERIFY( tracker_allocator_counter::get_allocation_count() == 0 );
+}
+  }
+}
+
 int main()
 {
   test01();
   test02();
   test03();
+  test04();
   return 0;
 }


Re: [Bug libstdc++/61107] stl_algo.h: std::__inplace_stable_partition() doesn't process the whole data range

2014-11-10 Thread François Dumont

Any news about this one ?

Here is another version with additional random tests on algos just to 
challenge other combinations of tests.


PR libstdc++/61107
* include/bits/stl_algo.h (__inplace_stable_partition): Delete.
(__stable_partition_adaptive): Return __first if range length is 1.
(__stable_partition): Adapt.
* testsuite/util/testsuite_new_operators.h: New.
* testsuite/25_algorithms/stable_sort/1.cc: Test algo in simulated
constrained memory context.
* testsuite/25_algorithms/inplace_merge/1.cc: Likewise.
* testsuite/25_algorithms/stable_partition/1.cc: Likewise.
* testsuite/25_algorithms/stable_sort/4.cc: New.
* testsuite/25_algorithms/inplace_merge/2.cc: New.
* testsuite/25_algorithms/stable_partition/2.cc: New.


Ok to commit ?

François

On 17/10/2014 22:46, François Dumont wrote:

Hi

As proposed in the bug report I just removed 
__inplace_stable_partition, as __stable_partition_adaptive is able to 
handle a 0 buffer size.


To test this bug I introduced overloads of the new/delete operators in 
the testsuite utils. The existing set_memory_limits has no impact on 
operator new. I wonder whether some tests using it really have the 
expected behavior.


I also tested other algos that try to use a buffer and didn't find 
any issue. Those algos however can't be simplified like stable_partition.


2014-10-16  François Dumont fdum...@gcc.gnu.org

PR libstdc++/61107
* include/bits/stl_algo.h (__inplace_stable_partition): Delete.
(__stable_partition): Adapt.
* testsuite/util/testsuite_new_operators.h: New.
* testsuite/25_algorithms/stable_sort/1.cc: Test algo in simulated
constrained memory context.
* testsuite/25_algorithms/inplace_merge/1.cc: Likewise.
* testsuite/25_algorithms/stable_partition/1.cc: Likewise.

Tested under Linux x86_64.

Ok to commit ?

François


Index: include/bits/stl_algo.h
===
--- include/bits/stl_algo.h	(revision 217160)
+++ include/bits/stl_algo.h	(working copy)
@@ -1512,34 +1512,6 @@
   // partition
 
   /// This is a helper function...
-  /// Requires __len != 0 and !__pred(*__first),
-  /// same as __stable_partition_adaptive.
-  template<typename _ForwardIterator, typename _Predicate, typename _Distance>
-_ForwardIterator
-__inplace_stable_partition(_ForwardIterator __first,
-			   _Predicate __pred, _Distance __len)
-{
-  if (__len == 1)
-	return __first;
-  _ForwardIterator __middle = __first;
-  std::advance(__middle, __len / 2);
-  _ForwardIterator __left_split =
-	std::__inplace_stable_partition(__first, __pred, __len / 2);
-  // Advance past true-predicate values to satisfy this
-  // function's preconditions.
-  _Distance __right_len = __len - __len / 2;
-  _ForwardIterator __right_split =
-	std::__find_if_not_n(__middle, __right_len, __pred);
-  if (__right_len)
-	__right_split = std::__inplace_stable_partition(__middle,
-			__pred,
-			__right_len);
-  std::rotate(__left_split, __middle, __right_split);
-  std::advance(__left_split, std::distance(__middle, __right_split));
-  return __left_split;
-}
-
-  /// This is a helper function...
   /// Requires __first != __last and !__pred(__first)
   /// and __len == distance(__first, __last).
   ///
@@ -1554,10 +1526,14 @@
 _Pointer __buffer,
 _Distance __buffer_size)
 {
+  if (__len == 1)
+	return __first;
+
   if (__len <= __buffer_size)
 	{
 	  _ForwardIterator __result1 = __first;
 	  _Pointer __result2 = __buffer;
+
 	  // The precondition guarantees that !__pred(__first), so
 	  // move that element to the buffer before starting the loop.
 	  // This ensures that we only call __pred once per element.
@@ -1575,11 +1551,11 @@
 		*__result2 = _GLIBCXX_MOVE(*__first);
 		++__result2;
 	  }
+
 	  _GLIBCXX_MOVE3(__buffer, __result2, __result1);
 	  return __result1;
 	}
-  else
-	{
+
 	  _ForwardIterator __middle = __first;
 	  std::advance(__middle, __len / 2);
 	  _ForwardIterator __left_split =
@@ -1586,21 +1562,23 @@
 	std::__stable_partition_adaptive(__first, __middle, __pred,
 	 __len / 2, __buffer,
 	 __buffer_size);
+
 	  // Advance past true-predicate values to satisfy this
 	  // function's preconditions.
 	  _Distance __right_len = __len - __len / 2;
 	  _ForwardIterator __right_split =
 	std::__find_if_not_n(__middle, __right_len, __pred);
+
 	  if (__right_len)
 	__right_split =
 	  std::__stable_partition_adaptive(__right_split, __last, __pred,
 	   __right_len,
 	   __buffer, __buffer_size);
+
 	  std::rotate(__left_split, __middle, __right_split);
 	  std::advance(__left_split, std::distance(__middle, __right_split));
 	  return __left_split;
 	}
-}
 
   template<typename _ForwardIterator, typename _Predicate>
 _ForwardIterator
@@ -1618,16 +1596,11 @@
 	_DistanceType

Re: [patch] No allocation for empty unordered containers

2014-08-30 Thread François Dumont

Any news for my patch proposals ?

Regarding documentation of the default minimum number of buckets, I don't
know where it has been documented, but why do we need to document it
separately ? Could it be taken care of by Doxygen ? Can't it get the
default value from the code itself ? If not, we could document it ourselves
next to the code rather than in a distinct file.


François

On 14/08/2014 21:22, François Dumont wrote:

On 13/08/2014 11:50, Jonathan Wakely wrote:


Yes you can, it's conforming to replace a (non-virtual) member function
with default arguments by two or more member functions. We do it all
the time.

See 17.6.5.5 [member.functions] p2.


You should have told me sooner ! But of course no-one is supposed
to ignore the Standard :-).


Then here is the patch to introduce a default constructor with
compiler-computed noexcept qualification. Note that I also made the
allocator-aware default constructor allocation-free; however, its
noexcept qualification has to be written manually, which I find quite a
burden. Do you think we shall do so now ?
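
(As an illustration, the property the new test04 cases verify can be
written as below; a minimal sketch, assuming the computed noexcept
resolves to true for the default functors and allocator.)

  #include <unordered_set>

  // std::hash<int>, std::equal_to<int> and std::allocator<int> are all
  // nothrow default constructible, so the defaulted constructor should
  // be noexcept and must not allocate any bucket array.
  static_assert(noexcept(std::unordered_set<int>()),
                "default construction must not throw");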


2014-08-14  François Dumont fdum...@gcc.gnu.org

* include/bits/hashtable_policy.h (_Prime_rehash_policy): Qualify
constructor noexcept.
(_Hash_code_base): All specializations default constructible if
possible.
(_Hashtable_base): Likewise.
* include/bits/hashtable.h (_Hashtable()): Implementation defaulted.
* include/bits/unordered_map.h (unordered_map::unordered_map()): New,
implementation defaulted.
(unordered_multimap::unordered_multimap()): Likewise.
* include/bits/unordered_set.h
(unordered_set::unordered_set()): Likewise.
(unordered_multiset::unordered_multiset()): Likewise.
* include/debug/unordered_map: Likewise.
* include/debug/unordered_set: Likewise.
* testsuite/23_containers/unordered_map/allocator/noexcept.cc
(test04()): New.
* testsuite/23_containers/unordered_multimap/allocator/noexcept.cc
(test04()): New.
* testsuite/23_containers/unordered_set/allocator/noexcept.cc
(test04()): New.
* testsuite/23_containers/unordered_multiset/allocator/noexcept.cc
(test04()): New.

I am preparing a patch for profile mode so I will submit modifications
for this mode with this big patch.


Tested under Linux x86_64.

Ok to commit ?

François




Re: [patch] No allocation for empty unordered containers

2014-09-09 Thread François Dumont

On 09/09/2014 19:29, Jonathan Wakely wrote:

On 14/08/14 21:22 +0200, François Dumont wrote:
I am preparing a patch for profile mode so I will submit modifications
for this mode with this big patch.


btw, François, for profile mode I think we should just do something
like this patch.

I feel quite strongly that if using Debug Mode or Profile Mode makes
your program run out of memory where it wouldn't usually fail, then
terminating is reasonable. The point of Profile Mode is not to test
abnormal execution of your program because that won't give you useful
profile information for the normal case.

It's more important for the noexcept specification to be consistent
across normal/debug/profile modes than for profile mode to fail
gracefully via bad_alloc in out-of-memory scenarios.

Sure, no problem. In the patch I am preparing for profile mode, a
failure in allocation will just mean that the involved container won't
be profiled, so that I can add noexcept wherever it is needed for
consistency with normal mode. I hope to be able to submit this patch in
a week or two.


François


[Bug libstdc++/62313] Data race in debug iterators

2014-09-10 Thread François Dumont

Hi

Here is a proposal to fix this data race issue.

I finally generalized the bitset approach to fix it, by inheriting from
the normal iterator first and then from the _Safe_iterator_base type. None of
the libstdc++ iterator types are final so it is fine. Surprisingly,
despite the inheritance being private, gcc got confused between the
_Safe_iterator_base _M_next and the forward_list _M_next, so I needed to
adapt some code to make usage of the _Safe_iterator_base _M_next explicit.
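
(The layout idea can be sketched as follows, with illustrative types
rather than the real debug-mode classes: base classes are constructed
left to right and destroyed right to left, so the plain iterator state
is complete before the safe base publishes the iterator on the
container's list, and withdrawn from the list before that state goes
away.)

  struct plain_iterator
  { /* the normal, unchecked iterator state */ };

  struct safe_base
  {
    safe_base()  { /* attach *this to the container's iterator list */ }
    ~safe_base() { /* detach *this from the list */ }
  };

  struct safe_iterator
  : private plain_iterator,  // initialized first, destroyed last
    public safe_base         // attaches only once the state is ready
  { };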


I also considered any operator where the normal iterator is being
modified while the safe iterator is linked to the list of iterators.
This is necessary to make sure that the thread sanitizer won't report a
race condition. I didn't touch bitset::reference because the list of
references is only accessed on bitset destruction, which is clearly not an
operation allowed while another thread is still playing with the references.


Do you see any way to check for this problem in the testsuite ? Is 
there a thread sanitizer we could use ?


2014-09-10  François Dumont  fdum...@gcc.gnu.org

PR libstdc++/62313
* include/debug/safe_base.h
(_Safe_iterator_base(const _Safe_iterator_base&)): Delete declaration.
(_Safe_iterator_base& operator=(const _Safe_iterator_base&)): Likewise.
* include/debug/safe_iterator.h (_Safe_iterator): Move normal iterator
before _Safe_iterator_base in memory. Lock before modifying the
iterator in numerous places.
* include/debug/safe_local_iterator.h
(_Safe_local_iterator_base(const _Safe_local_iterator_base&)): Delete
declaration.
(_Safe_local_iterator_base& operator=(const _Safe_local_iterator_base&)):
Likewise.
* include/debug/safe_unordered_base.h (_Safe_local_iterator):  Move
normal iterator before _Safe_iterator_base in memory. Lock before
modifying the iterator in numerous places.
* include/debug/forward_list (_Safe_forward_list::_M_swap_aux): 
Adapt.

* include/debug/safe_sequence.tcc
(_Safe_sequence::_M_transfer_from_if): Adapt.

Tested under Linux x86_64 debug mode.

Ok to commit ?

François
Index: include/debug/forward_list
===
--- include/debug/forward_list	(revision 215134)
+++ include/debug/forward_list	(working copy)
@@ -86,24 +86,26 @@
   for (_Safe_iterator_base* __iter = __lhs_iterators; __iter;)
 	{
 	  // Even iterator is cast to const_iterator, not a problem.
-	  const_iterator* __victim = static_cast<const_iterator*>(__iter);
+	  _Safe_iterator_base* __victim_base = __iter;
+	  const_iterator* __victim =
+	    static_cast<const_iterator*>(__victim_base);
 	  __iter = __iter->_M_next;
 	  if (__victim->base() == __rseq._M_base().cbefore_begin())
 	    {
 	      __victim->_M_unlink();
-	      if (__lhs_iterators == __victim)
-		__lhs_iterators = __victim->_M_next;
+	      if (__lhs_iterators == __victim_base)
+		__lhs_iterators = __victim_base->_M_next;
 	      if (__bbegin_its)
 		{
-		  __victim->_M_next = __bbegin_its;
-		  __bbegin_its->_M_prior = __victim;
+		  __victim_base->_M_next = __bbegin_its;
+		  __bbegin_its->_M_prior = __victim_base;
 		}
 	      else
-		__last_bbegin = __victim;
-	      __bbegin_its = __victim;
+		__last_bbegin = __victim_base;
+	      __bbegin_its = __victim_base;
 	    }
 	  else
-	    __victim->_M_sequence = __lhs;
+	    __victim_base->_M_sequence = __lhs;
 	}
 
   if (__bbegin_its)
Index: include/debug/safe_base.h
===
--- include/debug/safe_base.h	(revision 215134)
+++ include/debug/safe_base.h	(working copy)
@@ -95,12 +95,6 @@
 : _M_sequence(0), _M_version(0), _M_prior(0), _M_next(0)
{ this->_M_attach(__x._M_sequence, __constant); }
 
-_Safe_iterator_base&
-operator=(const _Safe_iterator_base&);
-
-explicit
-_Safe_iterator_base(const _Safe_iterator_base&);
-
~_Safe_iterator_base() { this->_M_detach(); }
 
 /** For use in _Safe_iterator. */
Index: include/debug/safe_iterator.h
===
--- include/debug/safe_iterator.h	(revision 215134)
+++ include/debug/safe_iterator.h	(working copy)
@@ -109,16 +109,21 @@
*  %_Safe_iterator has member functions for iterator invalidation,
*  attaching/detaching the iterator from sequences, and querying
*  the iterator's state.
+   *
+   *  Note that _Iterator must come first in the type memory layout so that it
+   *  gets initialized before the iterator is attached to the container's
+   *  list of iterators, and is detached before _Iterator gets
+   *  destroyed. Otherwise it would result in a data race.
*/
  template<typename _Iterator, typename _Sequence>
-class _Safe_iterator : public _Safe_iterator_base
+class _Safe_iterator
+: private _Iterator,
+  public _Safe_iterator_base
 {
-  typedef _Safe_iterator _Self;
+  typedef _Iterator _Ite_base;
+  typedef _Safe_iterator_base _Safe_base;
   typedef

Re: [patch] libstdc++/29988 Rb_Tree reuse allocated nodes

2014-09-19 Thread François Dumont

Still no feedback regarding this proposal ?

On 19/08/2014 22:14, François Dumont wrote:

Any news regarding this proposal ?

Thanks

François


On 30/07/2014 23:39, François Dumont wrote:

Hi

   Now that patch on testsuite allocator is in I would like to 
reactivate this one. Here it is again.


   See previous answer below regarding modification of 
_M_begin/_M_cbegin.


2014-07-30  François Dumont  fdum...@gcc.gnu.org

   PR libstdc++/29988
   * include/bits/stl_tree.h (_Rb_tree_reuse_or_alloc_node): New.
   (_Rb_tree_alloc_node): New.
   (_Rb_tree_impl): Remove unused _Is_pod_comparator template parameter.
(_Rb_tree::operator=(_Rb_tree&&)): New.
   (_Rb_tree::_M_assign_unique): New.
   (_Rb_tree::_M_assign_equal): New.
   (_Rb_tree): Adapt to reuse allocated nodes as much as possible.
   * include/bits/stl_map.h
(std::map::operator=(std::map&&)): Default implementation.
   (std::map::operator=(initializer_list)): Adapt to use
   _Rb_tree::_M_assign_unique.
   * include/bits/stl_multimap.h
(std::multimap::operator=(std::multimap&&)): Default implementation.
(std::multimap::operator=(initializer_list)): Adapt to use
   _Rb_tree::_M_assign_equal.
   * include/bits/stl_set.h
(std::set::operator=(std::set&&)): Default implementation.
   (std::set::operator=(initializer_list)): Adapt to use
   _Rb_tree::_M_assign_unique.
   * include/bits/stl_multiset.h
(std::multiset::operator=(std::multiset&&)): Default implementation.
(std::multiset::operator=(initializer_list)): Adapt to use
   _Rb_tree::_M_assign_equal.
   * testsuite/23_containers/map/allocator/copy_assign.cc (test03): New.
   * testsuite/23_containers/map/allocator/init-list.cc: New.
   * testsuite/23_containers/map/allocator/move_assign.cc (test03): New.

   * testsuite/23_containers/multimap/allocator/copy_assign.cc
   (test03): New.
   * testsuite/23_containers/multimap/allocator/init-list.cc: New.
   * testsuite/23_containers/multimap/allocator/move_assign.cc
   (test03): New.
   * testsuite/23_containers/multiset/allocator/copy_assign.cc
   (test03): New.
   * testsuite/23_containers/multiset/allocator/init-list.cc: New.
   * testsuite/23_containers/multiset/allocator/move_assign.cc
   (test03): New.
   * testsuite/23_containers/set/allocator/copy_assign.cc (test03): New.
   * testsuite/23_containers/set/allocator/init-list.cc: New.
   * testsuite/23_containers/set/allocator/move_assign.cc (test03): New.


Tested under linux x86_64.

Ok to commit ?

François


On 16/06/2014 22:23, François Dumont wrote:

Hi

Here is another proposal taking into account your remarks except
the one below.

In fact I had no problem with the lambda, I just needed to store it
in a variable; lambdas do not need to be made mutable.

On 11/06/2014 14:02, Jonathan Wakely wrote:



@@ -514,11 +651,11 @@
  { return this->_M_impl._M_header._M_right; }

  _Link_type
-  _M_begin() _GLIBCXX_NOEXCEPT
+  _M_begin() const _GLIBCXX_NOEXCEPT
  { return
static_cast<_Link_type>(this->_M_impl._M_header._M_parent); }


What's the purpose of this change?
Although it can be 'const' it is consistent with the usual
begin()/end() functions that the functions returning a mutable 
iterator
are non-const and the functions returning a constant iterator are 
const.



  _Const_Link_type
-  _M_begin() const _GLIBCXX_NOEXCEPT
+  _M_cbegin() const _GLIBCXX_NOEXCEPT
  {
    return static_cast<_Const_Link_type>
  (this->_M_impl._M_header._M_parent);
@@ -529,7 +666,7 @@
  { return
reinterpret_cast<_Link_type>(&this->_M_impl._M_header); }

  _Const_Link_type
-  _M_end() const _GLIBCXX_NOEXCEPT
+  _M_cend() const _GLIBCXX_NOEXCEPT
  { return
reinterpret_cast<_Const_Link_type>(&this->_M_impl._M_header); }

  static const_reference


I'm not very comfortable with this renaming.

Having consistent _M_begin() functions allows using them in template
code that doesn't care if it's using the const or non-const version.



I tried to revert this part and remembered why I did it in the first
place.

I needed to change the _M_copy signature to:

  _Link_type
  _M_copy(_Link_type __x, _Link_type __p)

because I now use this method to also move the elements of the
data structure; I cannot move from a _Const_Link_type so I changed the
first parameter to _Link_type. I see that there are some code
duplications to deal with _Const_Link_type and _Link_type in 2
different parts of the code but I didn't want to duplicate again here,
and simply made _M_copy more flexible by taking a _Link_type rather
than a _Const_Link_type.

I don't really see the interest of the existing code duplications so I
prefer not to do the same and write the code only once.

François












Profile mode maintenance patch

2014-09-21 Thread François Dumont

Hi

Here is the promise major patch for the profile mode. Here are the 
most important modifications.


Now instances of profiling structs are kept as pointers in the
containers themselves. It has an impact on the container ABI but it
greatly enhances performance, as we do not need to go through a search
in an unordered container, which also implied a lock during this search.
I have even been able to remove those unordered containers, eventually
just keeping a counter of allocated bytes to know if we should stop
creating new profiling structs.


I got rid of the re-entrancy mechanism. The only reason for it was
a potential hook in the memory allocator potentially creating new
profiling structs and so looping forever. I prefer to put it just where it
is necessary, that is to say where we first allocate memory for profiling,
which is when we create the back-trace.


I wonder if we shouldn't emit a #error when trying to activate
profiling mode without the backtrace feature, because in this case we
simply won't collect anything.
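
(Something like the following guard would make that explicit; the
_GLIBCXX_HAVE_EXECINFO_H macro name is assumed from the configure
machinery, so treat this as a sketch.)

  // _GLIBCXX_HAVE_EXECINFO_H is assumed to be the configure-time macro
  // signalling backtrace support.
  #if defined(_GLIBCXX_PROFILE) && !defined(_GLIBCXX_HAVE_EXECINFO_H)
  # error Profile mode needs the backtrace feature to collect anything.
  #endif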


I finalized the ordered-to-unordered profiling by adding the missing
__iterator_tracker to the ordered containers (map, multimap, set, multiset).


I cleaned up useless stuff like the __stack_info_base class.

I fixed many memory leaks and added a cleanup at exit of the
application.


Profiling of containers is reset as soon as one of the following
operations occurs: copy assignment, move assignment, assignment from
an initializer list, clear.


I have added usage of atomic operations to maintain some counters
that might be updated from different threads. Do not hesitate to review
those closely. Especially __objects_byte_size, which I am using in
profiler_trace.h without an atomic operation; is it fine ?


With all those modifications I have been able to run the whole
testsuite in profile mode with success.


Ok to commit ?

François



profile.patch.bz2
Description: application/bzip


Re: [patch] libstdc++/29988 Rb_Tree reuse allocated nodes

2014-09-22 Thread François Dumont



On 11/06/2014 14:02, Jonathan Wakely wrote:



@@ -514,11 +651,11 @@
 { return this->_M_impl._M_header._M_right; }

 _Link_type
-  _M_begin() _GLIBCXX_NOEXCEPT
+  _M_begin() const _GLIBCXX_NOEXCEPT
 { return
static_cast<_Link_type>(this->_M_impl._M_header._M_parent); }


What's the purpose of this change?
Although it can be 'const' it is consistent with the usual
begin()/end() functions that the functions returning a mutable 
iterator
are non-const and the functions returning a constant iterator are 
const.


I'm still concerned about this part, especially as _M_end() isn't made
const!


 _Const_Link_type
-  _M_begin() const _GLIBCXX_NOEXCEPT
+  _M_cbegin() const _GLIBCXX_NOEXCEPT
 {
   return static_cast<_Const_Link_type>
 (this->_M_impl._M_header._M_parent);
@@ -529,7 +666,7 @@
 { return
reinterpret_cast<_Link_type>(&this->_M_impl._M_header); }

 _Const_Link_type
-  _M_end() const _GLIBCXX_NOEXCEPT
+  _M_cend() const _GLIBCXX_NOEXCEPT
 { return
reinterpret_cast<_Const_Link_type>(&this->_M_impl._M_header); }

 static const_reference


I'm not very comfortable with this renaming.

Having consistent _M_begin() functions allows using them in template
code that doesn't care if it's using the const or non-const version.



I tried to revert this part and remembered why I did it in the first
place.

I needed to change the _M_copy signature to:

 _Link_type
 _M_copy(_Link_type __x, _Link_type __p)

   because I now use this method to also move the elements of the
data structure; I cannot move from a _Const_Link_type so I changed the
first parameter to _Link_type. I see that there are some code
duplications to deal with _Const_Link_type and _Link_type in 2
different parts of the code but I didn't want to duplicate again here,
and simply made _M_copy more flexible by taking a _Link_type rather
than a _Const_Link_type.

   I don't really see the interest of the existing code duplications so I
prefer not to do the same and write the code only once.


There are alternatives to duplicating the code. _M_copy could be:

 template<typename _Ptr, typename _NodeGen>
   _Link_type
   _M_copy(_Ptr, _Link_type, _NodeGen);

I've been experimenting with a patch that does this instead:

   _M_root() = _M_copy(__x._M_begin(), _M_end(),
                       [&__an](const value_type& __val) {
                         auto& __nc_val = const_cast<value_type&>(__val);
                         return __an(std::move_if_noexcept(__nc_val));
                       });

I'm not very happy about having to use a const_cast, but then I'm also
not very happy having a function called _M_copy which takes a
non-const pointer because it might alter the thing it's copying.

At least with the const_cast the _M_copy function is logically doing a
non-modifying copy, but the caller can decide to pass in a lambda that
moves instead of copying, if it knows that it's OK to modify the
source object (because it's known to have been an rvalue).


I also usually prefer avoiding const_cast, and for me _M_copy just means
that it copies the data structure, whether by moving its elements or
copying them. But if you prefer it this way I will do so.
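
(To make the shape of that concrete, here is a hedged, self-contained
sketch of the pattern, with simplified free functions instead of the
real _Rb_tree members: the copying routine stays logically const, and
only the rvalue-assignment path passes a generator that casts away
const and moves.)

  #include <utility>
  #include <vector>

  // Stand-in for _M_copy: walks the source and builds elements via gen.
  template<typename T, typename Gen>
    std::vector<T>
    copy_values(const std::vector<T>& src, Gen gen)
    {
      std::vector<T> dst;
      for (const T& v : src)
        dst.push_back(gen(v));  // gen decides whether to copy or move
      return dst;
    }

  template<typename T>
    std::vector<T>
    assign_from_lvalue(const std::vector<T>& src)
    { return copy_values(src, [](const T& v) { return v; }); }

  template<typename T>
    std::vector<T>
    assign_from_rvalue(std::vector<T>&& src)
    {
      return copy_values(src, [](const T& v) -> T {
        // Safe: the caller knows the source was an rvalue.
        auto& nc = const_cast<T&>(v);
        return std::move_if_noexcept(nc);
      });
    }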






protected:
-  template<typename _Key_compare,
-	   bool _Is_pod_comparator = __is_pod(_Key_compare)>
+  template<typename _Key_compare>
    struct _Rb_tree_impl : public _Node_allocator


I don't think we should remove this parameter, it alters the mangled
name for _Rb_tree_impl symbols, which means users can get two
different symbols in their program and the linker will keep both.

It's redundant, but doesn't actually cause any harm. Maybe just rename
the parameter to _Unused or something, but leave it there, with the
same default argument.



Too bad.

New patch in a couple of day then.

François



Re: [patch] libstdc++/29988 Rb_Tree reuse allocated nodes

2014-09-23 Thread François Dumont

On 23/09/2014 13:22, Jonathan Wakely wrote:

On 22/09/14 23:51 +0200, François Dumont wrote:

New patch in a couple of day then.


OK, thanks.

It was faster than I thought; here is the fixed patch, tested under Linux
x86_64.


2014-09-23  François Dumont  fdum...@gcc.gnu.org

PR libstdc++/29988
* include/bits/stl_tree.h (_Rb_tree_reuse_or_alloc_node): New.
(_Rb_tree_alloc_node): New.
(_Rb_tree::operator=(_Rb_tree&&)): New.
(_Rb_tree::_M_assign_unique): New.
(_Rb_tree::_M_assign_equal): New.
(_Rb_tree): Adapt to reuse allocated nodes as much as possible.
* include/bits/stl_map.h
(std::map::operator=(std::map&&)): Default implementation.
(std::map::operator=(initializer_list)): Adapt to use
_Rb_tree::_M_assign_unique.
* include/bits/stl_multimap.h
(std::multimap::operator=(std::multimap&&)): Default implementation.
(std::multimap::operator=(initializer_list)): Adapt to use
_Rb_tree::_M_assign_equal.
* include/bits/stl_set.h
(std::set::operator=(std::set&&)): Default implementation.
(std::set::operator=(initializer_list)): Adapt to use
_Rb_tree::_M_assign_unique.
* include/bits/stl_multiset.h
(std::multiset::operator=(std::multiset&&)): Default implementation.
(std::multiset::operator=(initializer_list)): Adapt to use
_Rb_tree::_M_assign_equal.
* testsuite/23_containers/map/allocator/copy_assign.cc (test03): New.
* testsuite/23_containers/map/allocator/init-list.cc: New.
* testsuite/23_containers/map/allocator/move_assign.cc (test03): New.
* testsuite/23_containers/multimap/allocator/copy_assign.cc
(test03): New.
* testsuite/23_containers/multimap/allocator/init-list.cc: New.
* testsuite/23_containers/multimap/allocator/move_assign.cc
(test03): New.
* testsuite/23_containers/multiset/allocator/copy_assign.cc
(test03): New.
* testsuite/23_containers/multiset/allocator/init-list.cc: New.
* testsuite/23_containers/multiset/allocator/move_assign.cc
(test03): New.
* testsuite/23_containers/set/allocator/copy_assign.cc (test03): New.
* testsuite/23_containers/set/allocator/init-list.cc: New.
* testsuite/23_containers/set/allocator/move_assign.cc (test03): New.

Ok to commit ?

François
Index: include/bits/stl_map.h
===
--- include/bits/stl_map.h	(revision 215528)
+++ include/bits/stl_map.h	(working copy)
@@ -297,28 +297,9 @@
   }
 
 #if __cplusplus >= 201103L
-  /**
-   *  @brief  %Map move assignment operator.
-   *  @param  __x  A %map of identical element and allocator types.
-   *
-   *  The contents of @a __x are moved into this map (without copying
-   *  if the allocators compare equal or get moved on assignment).
-   *  Afterwards @a __x is in a valid, but unspecified state.
-   */
+  /// Move assignment operator.
   map&
-  operator=(map&& __x) noexcept(_Alloc_traits::_S_nothrow_move())
-  {
-	if (!_M_t._M_move_assign(__x._M_t))
-	  {
-	// The rvalue's allocator cannot be moved and is not equal,
-	// so we need to individually move each element.
-	clear();
-	insert(std::__make_move_if_noexcept_iterator(__x.begin()),
-		   std::__make_move_if_noexcept_iterator(__x.end()));
-	__x.clear();
-	  }
-	return *this;
-  }
+  operator=(map&&) = default;
 
   /**
*  @brief  %Map list assignment operator.
@@ -334,8 +315,7 @@
   map&
   operator=(initializer_list<value_type> __l)
   {
-	this-clear();
-	this-insert(__l.begin(), __l.end());
+	_M_t._M_assign_unique(__l.begin(), __l.end());
 	return *this;
   }
 #endif
Index: include/bits/stl_multimap.h
===
--- include/bits/stl_multimap.h	(revision 215528)
+++ include/bits/stl_multimap.h	(working copy)
@@ -292,28 +292,9 @@
   }
 
 #if __cplusplus >= 201103L
-  /**
-   *  @brief  %Multimap move assignment operator.
-   *  @param  __x  A %multimap of identical element and allocator types.
-   *
-   *  The contents of @a __x are moved into this multimap (without copying
-   *  if the allocators compare equal or get moved on assignment).
-   *  Afterwards @a __x is in a valid, but unspecified state.
-   */
+  /// Move assignment operator.
   multimap&
-  operator=(multimap&& __x) noexcept(_Alloc_traits::_S_nothrow_move())
-  {
-	if (!_M_t._M_move_assign(__x._M_t))
-	  {
-	// The rvalue's allocator cannot be moved and is not equal,
-	// so we need to individually move each element.
-	clear();
-	insert(std::__make_move_if_noexcept_iterator(__x.begin()),
-		   std::__make_move_if_noexcept_iterator(__x.end()));
-	__x.clear();
-	  }
-	return *this;
-  }
+  operator=(multimap&&) = default;
 
   /**
*  @brief  %Multimap list assignment operator.
@@ -329,8 +310,7 @@
   multimap&
   operator=(initializer_list<value_type> __l

Re: [Bug libstdc++/62313] Data race in debug iterators

2014-09-23 Thread François Dumont

On 22/09/2014 00:04, Jonathan Wakely wrote:

On 10/09/14 22:55 +0200, François Dumont wrote:

Hi

   Here is a proposal to fix this data race issue.

   I finally generalized the bitset approach to fix it, by inheriting from
the normal iterator first and then from the _Safe_iterator_base type. None
of the libstdc++ iterator types are final so it is fine.
Surprisingly, despite the inheritance being private, gcc got confused
between the _Safe_iterator_base _M_next and the forward_list _M_next, so
I needed to adapt some code to make usage of the _Safe_iterator_base
_M_next explicit.


Access control in C++ is not related to visibility, name lookup still
finds private members, but it is an error to use them.


Ok, tricky.



   I also considered any operator where the normal iterator is being
modified while the safe iterator is linked to the list of iterators.
This is necessary to make sure that the thread sanitizer won't report a
race condition. I didn't touch bitset::reference because the list of
references is only accessed on bitset destruction, which is clearly not
an operation allowed while another thread is still playing with the
references.


   Do you see any way to check for this problem in the testsuite ? Is 
there a thread sanitizer we could use ?


GCC's -fsanitize=thread option, although using it in the testsuite
would need something like dg-require-tsan so the test doesn't run on
platforms where it doesn't work, or if GCC was built without
libsanitizer.

Have you run some tests using -fsanitize=thread, even if they are not
in the testsuite?


No I hadn't, and I have tried since but without success. When I build with
-fsanitize=thread the produced binary just segfaults at startup. It
complained about using -fPIE at compile time and -pie at link time,
but even with those options it segfaults. I don't know what is going wrong.
Maybe Dmitry, who reported the bug, could give it a try. I will ask for
this on the bug ticket.





Index: include/debug/safe_iterator.h

Same renaming here please, to _Iter_base.

Apart from those minor adjustments I think this looks good, but I'd
like to know that it has been tested with -fsanitize=thread, even if
only lightly tested.




I fixed the vocabulary problems. I just need to run a light test then.

François


Re: Profile mode maintenance patch

2014-09-23 Thread François Dumont

On 23/09/2014 13:27, Jonathan Wakely wrote:

On 21/09/14 23:29 +0200, François Dumont wrote:
   With all those modifications I have been able to run the whole
testsuite in profile mode with success.


I've looked over the patch and it looks fine.

I don't know the details of the Profile Mode, so if you're happy that
these changes are an improvement and all tests pass then that's good
enough for me.


   Ok to commit ?


Yes, OK for trunk - thanks very much.




Ok but could you just let me know what you think of this method:

  template<typename __object_info, typename __stack_info>
    __object_info*
    __trace_base<__object_info, __stack_info>::
    __add_object(const __object_info& __info)
    {
      if (__max_mem() != 0 && __objects_byte_size >= __max_mem())
        {
          delete __info.__stack();
          return 0;
        }

      __object_info* __ret = new(std::nothrow) __object_info(__info);
      if (!__ret)
        {
          delete __info.__stack();
          return 0;
        }

      __gnu_cxx::__atomic_add(&__objects_byte_size, sizeof(__object_info));
      return __ret;
    }

This method can be called from several threads. I check the condition
accessing __objects_byte_size and then update it with an atomic operation
to make sure it stays consistent. Does it look ok to you too ?
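
(For comparison, a minimal sketch of the same check-then-add pattern
written with std::atomic; the names are hypothetical, not the profile
mode code. The guard can race with concurrent adds, so the limit is
approximate, but the counter itself can never be corrupted.)

  #include <atomic>
  #include <cstddef>

  std::atomic<std::size_t> objects_byte_size{0};

  bool try_account(std::size_t max_mem, std::size_t sz)
  {
    if (max_mem != 0
        && objects_byte_size.load(std::memory_order_relaxed) >= max_mem)
      return false;  // budget exhausted: refuse this object
    // Several threads may pass the guard at once, so the total can
    // overshoot max_mem by a few objects, but the add itself is atomic.
    objects_byte_size.fetch_add(sz, std::memory_order_relaxed);
    return true;
  }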


François


[v3] Fix management of non empty hash functor

2012-12-13 Thread François Dumont

Hi

Part of a performance patch proposed in another mailing thread
was a change to improve management of hash functors with state. This part
is, I think, less sensitive than the performance patch so I propose it
independently. I would only like to commit the modification on the
performance tests here if you don't mind.


Thanks to this patch, whether the hash code is cached or not doesn't
depend anymore on the hash functor being empty or final. I only keep the
default constructible condition so that local_iterator can be default
constructible, considering it is a Standard requirement.
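
(In other words, a stateful, non-empty hasher such as the following now
works; a minimal illustration, not one of the patch's test cases.)

  #include <cstddef>
  #include <unordered_set>

  // A non-empty hasher: it carries a seed, so EBO cannot apply.
  struct seeded_hash
  {
    std::size_t seed;
    seeded_hash(std::size_t s = 0) : seed(s) { }

    std::size_t
    operator()(int v) const
    { return seed ^ static_cast<std::size_t>(v); }
  };

  int main()
  {
    // The hasher state has to survive into the container and into its
    // local iterators, which is what this patch addresses.
    std::unordered_set<int, seeded_hash> s(8, seeded_hash(42));
    s.insert(1);
    s.insert(2);
  }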


2012-12-14  François Dumont  fdum...@gcc.gnu.org

* include/bits/hashtable_policy.h (_Local_iterator_base): Use
_Hashtable_ebo_helper to embed necessary functors into the
local_iterator. Pass information about functors involved in hash
code by copy.
* include/bits/hashtable.h (__cache_default): Cache only if the
hash functor is not noexcept qualified or if it is slow or if the
hash functor does not expose a default constructor.
* include/debug/unordered_set
(std::__debug::unordered_set::erase): Detect local iterators to
invalidate using contained node rather than generating a dummy
local_iterator instance.
(std::__debug::unordered_multiset::erase): Likewise.
* include/debug/unordered_map
(std::__debug::unordered_map::erase): Likewise.
(std::__debug::unordered_multimap::erase): Likewise.
* testsuite/performance/23_containers/insert_erase/41975.cc: Test
std::tr1 and std versions of unordered_set regardless of any
macro. Add test on default cache behavior.
* testsuite/performance/23_containers/insert/54075.cc: Likewise.
* testsuite/23_containers/unordered_set/instantiation_neg.cc:
Adapt line number.
* testsuite/23_containers/unordered_set/
not_default_constructible_hash_neg.cc: New.
* testsuite/23_containers/unordered_set/buckets/swap.cc: New.

Tested under Linux x86_64, normal and debug modes.

Ok to commit ?

François

Index: include/bits/hashtable_policy.h
===
--- include/bits/hashtable_policy.h	(revision 194488)
+++ include/bits/hashtable_policy.h	(working copy)
@@ -981,9 +981,17 @@
   typedef void* 	__hash_code;
   typedef _Hash_node<_Value, false>			__node_type;
 
-  // We need the default constructor for the local iterators.
+public:
+  // We need the following public for local iterators.
+
   _Hash_code_base() = default;
+  _Hash_code_base(const _Hash_code_base&) = default;
 
+  std::size_t
+  _M_bucket_index(const __node_type* __p, std::size_t __n) const
+  { return _M_ranged_hash()(_M_extract()(__p->_M_v), __n); }
+
+protected:
   _Hash_code_base(const _ExtractKey& __ex, const _H1&, const _H2&,
 		  const _Hash& __h)
   : _EboExtractKey(__ex), _EboHash(__h) { }
@@ -996,10 +1004,6 @@
   _M_bucket_index(const _Key& __k, __hash_code, std::size_t __n) const
   { return _M_ranged_hash()(__k, __n); }
 
-  std::size_t
-  _M_bucket_index(const __node_type* __p, std::size_t __n) const
-  { return _M_ranged_hash()(_M_extract()(__p->_M_v), __n); }
-
   void
   _M_store_code(__node_type*, __hash_code) const
   { }
@@ -1062,13 +1066,19 @@
   hash_function() const
   { return _M_h1(); }
 
+  // We need the following public for the local iterators.
   typedef std::size_t __hash_code;
   typedef _Hash_node<_Value, false>			__node_type;
 
-protected:
-  // We need the default constructor for the local iterators.
   _Hash_code_base() = default;
+  _Hash_code_base(const _Hash_code_base&) = default;
 
+  std::size_t
+  _M_bucket_index(const __node_type* __p,
+		  std::size_t __n) const
+  { return _M_h2()(_M_h1()(_M_extract()(__p->_M_v)), __n); }
+
+protected:
   _Hash_code_base(const _ExtractKey& __ex,
 		  const _H1& __h1, const _H2& __h2,
 		  const _Default_ranged_hash&)
@@ -1082,11 +1092,6 @@
   _M_bucket_index(const _Key&, __hash_code __c, std::size_t __n) const
   { return _M_h2()(__c, __n); }
 
-  std::size_t
-  _M_bucket_index(const __node_type* __p,
-		  std::size_t __n) const
-  { return _M_h2()(_M_h1()(_M_extract()(__p->_M_v)), __n); }
-
   void
   _M_store_code(__node_type*, __hash_code) const
   { }
@@ -1148,6 +1153,10 @@
   typedef std::size_t __hash_code;
   typedef _Hash_node<_Value, true>			__node_type;
 
+  // Need the following public for local iterators.
+  const _H2&
+  _M_h2() const { return _EboH2::_S_cget(*this); }
+
 protected:
   _Hash_code_base(const _ExtractKey& __ex,
 		  const _H1& __h1, const _H2& __h2,
@@ -1195,9 +1204,6 @@
   _H1&
   _M_h1() { return _EboH1::_S_get(*this); }
 
-  const _H2&
-  _M_h2() const { return _EboH2::_S_cget(*this); }
-
   _H2&
   _M_h2() { return _EboH2::_S_get(*this); }
 };
@@ -1250,12

Re: [v3] Fix management of non empty hash functor

2013-01-10 Thread François Dumont

Hi

Here is another version of this patch. Indeed there was no need
to expose so much stuff publicly. Inheriting from _Hash_code_base is fine;
it is not final and it deals with EBO itself. I only kept usage of
_Hashtable_ebo_helper when embedding the H2 functor. As it is an extension
we could have imposed it not to be final, but it doesn't cost a lot to
deal with it. Finally I only needed a single friend declaration to get
access to the H2 part of _Hash_code_base.


I didn't touch the default cache policy for the moment, except for
reducing the constraints on the hash functor. I prefer to submit another
patch to change when we cache or not depending on the hash functor's
expected performance.


I also took the time to replace some typedef expressions with using
ones. I don't really know what the rule is about using one or the other,
but I remembered that Benjamin spent quite some time changing typedef to
using, so I prefer to stick to this approach in this file, even if there
are still some typedefs left.


Tested under linux x86_64 normal and debug modes.

2013-01-10  François Dumont  fdum...@gcc.gnu.org

* include/bits/hashtable_policy.h (_Local_iterator_base): Use
_Hashtable_ebo_helper to embed necessary functors into the
local_iterator when necessary. Pass information about functors
involved in hash code by copy.
* include/bits/hashtable.h (__cache_default): Do not cache for
builtin integral types unless the hash functor is not noexcept
qualified or is not default constructible. Adapt static assertions
and local iterator instantiations.
* include/debug/unordered_set
(std::__debug::unordered_set::erase): Detect local iterators to
invalidate using contained node rather than generating a dummy
local_iterator instance.
(std::__debug::unordered_multiset::erase): Likewise.
* include/debug/unordered_map
(std::__debug::unordered_map::erase): Likewise.
(std::__debug::unordered_multimap::erase): Likewise.
* testsuite/performance/23_containers/insert_erase/41975.cc: Test
std::tr1 and std versions of unordered_set regardless of any
macro. Add test on default cache behavior.
* testsuite/performance/23_containers/insert/54075.cc: Likewise.
* testsuite/23_containers/unordered_set/instantiation_neg.cc:
Adapt line number.
* testsuite/23_containers/unordered_set/
not_default_constructible_hash_neg.cc: New.
* testsuite/23_containers/unordered_set/buckets/swap.cc: New.

If you agree with the patch tell me where and when to apply it.

François


On 01/04/2013 12:17 PM, Paolo Carlini wrote:

Hi,

On 12/13/2012 10:32 PM, François Dumont wrote:

Hi

As part of a performance patch proposed in an other mailing 
thread was a patch to improve management of hash functor with state. 
This part is I think less sensible than the performance patch so I 
propose it independently. I only would like to commit the 
modification on the performance tests here if you don't mind.


Thanks to this patch caching the hash code or not doesn't depend 
on the hash functor to be empty of final anymore. I only keep the 
default constructible condition so that local_iterator can be default 
constructible, considering it is a Standard request.
I'm finally having a closer look at this work of yours (sorry about
the delay!) and I think we want something similar for 4.8.0. However,
to be honest, I'm not convinced we are implementing the general idea
in the best way, in particular I don't like the much more complex
access control structure, _Hash_code_base loses encapsulation, etc.
Did you consider maybe adding friend declarations in a few places?


Jon, do you have suggestions? The idea of managing to get rid of the
empty && !final requirement for dispatching seems right to me.


By the way, I'm also not convinced that is_integral is the right 
category, I think is_scalar for example is better: pointers are common 
and very similar in terms of std::hash, likewise floating point 
quantities (with the possible exception of long double, but I don't 
think we should spend time on it).


Paolo.



Index: include/bits/hashtable_policy.h
===
--- include/bits/hashtable_policy.h	(revision 195097)
+++ include/bits/hashtable_policy.h	(working copy)
@@ -1,6 +1,6 @@
 // Internal policy header for unordered_set and unordered_map -*- C++ -*-
 
-// Copyright (C) 2010, 2011, 2012 Free Software Foundation, Inc.
+// Copyright (C) 2010-2013 Free Software Foundation, Inc.
 //
 // This file is part of the GNU ISO C++ Library.  This library is free
 // software; you can redistribute it and/or modify it under the
@@ -202,7 +202,7 @@
   template<typename _Value, bool _Cache_hash_code>
     struct _Node_iterator_base
     {
-  typedef _Hash_node<_Value, _Cache_hash_code>	__node_type;
+  using __node_type = _Hash_node<_Value, _Cache_hash_code>;
 
   __node_type*  _M_cur;
 
@@ -282,7 +282,7

Re: [v3] Fix management of non empty hash functor

2013-01-28 Thread François Dumont

Attached patch applied.

2013-01-28  François Dumont  fdum...@gcc.gnu.org

* include/bits/hashtable_policy.h (_Local_iterator_base): Use
_Hashtable_ebo_helper to embed functors into the local_iterator
when necessary. Pass information about functors involved in hash
code by copy.
* include/bits/hashtable.h (__cache_default): Do not cache for
builtin integral types unless the hash functor is not noexcept
qualified or is not default constructible. Adapt static assertions
and local iterator instantiations.
* include/debug/unordered_set
(std::__debug::unordered_set::erase): Detect local iterators to
invalidate using contained node rather than generating a dummy
local_iterator instance.
(std::__debug::unordered_multiset::erase): Likewise.
* include/debug/unordered_map
(std::__debug::unordered_map::erase): Likewise.
(std::__debug::unordered_multimap::erase): Likewise.
* testsuite/performance/23_containers/insert_erase/41975.cc: Test
std::tr1 and std versions of unordered_set regardless of any
macro. Add test on default cache behavior.
* testsuite/performance/23_containers/insert/54075.cc: Likewise.
* testsuite/23_containers/unordered_set/instantiation_neg.cc:
Adapt line number.
* testsuite/23_containers/unordered_set/
not_default_constructible_hash_neg.cc: New.
* testsuite/23_containers/unordered_set/buckets/swap.cc: New.

On 01/28/2013 04:42 PM, Jonathan Wakely wrote:

On 10 January 2013 21:02, François Dumont wrote:

Hi

 Here is another version of this patch. Indeed there was no need to
expose so much stuff publicly. Inheriting from _Hash_code_base is fine, it
is not final and it deals with EBO itself. I only kept usage of
_Hashtable_ebo_helper when embedding the H2 functor. As it is an extension
we could have imposed it not to be final but it doesn't cost a lot to deal
with it. Finally I only needed a single friend declaration to get access
to the H2 part of _Hash_code_base.

OK.


 I didn't touch the default cache policy for the moment except reducing
constraints on the hash functor. I prefer to submit an other patch to change
when we cache or not depending on the hash functor expected performance.

OK.  The reduced constraints are good.  Does this actually affect
performance?  In my tests it doesn't, so I assume we still need to
change the caching decision to notice any performance improvements?


No performance gain is planned with that patch indeed. It just restores
support for non-empty hash functors, which used to work with the previous
implementation. There is also no performance test impacted by the
modification of the default cache behavior, so it is not surprising that
you noticed nothing.




(Do the performance benchmarks actually tell us anything useful?
When I run them I get such varying results it doesn't seem to be reliable.)

Last time I ran the tests they showed when not caching is better
than caching. I have even added a bench on the unordered containers
directly to show the performance of the default behavior. For the
moment, for the Foo type used in 54075.cc, the default behavior is not
the best one. But I will submit a patch for that soon with a hash trait
telling if it is fast or not, like we already talked about.


François

Index: include/bits/hashtable_policy.h
===
--- include/bits/hashtable_policy.h	(revision 195515)
+++ include/bits/hashtable_policy.h	(working copy)
@@ -1,6 +1,6 @@
 // Internal policy header for unordered_set and unordered_map -*- C++ -*-
 
-// Copyright (C) 2010, 2011, 2012 Free Software Foundation, Inc.
+// Copyright (C) 2010-2013 Free Software Foundation, Inc.
 //
 // This file is part of the GNU ISO C++ Library.  This library is free
 // software; you can redistribute it and/or modify it under the
@@ -202,7 +202,7 @@
   template<typename _Value, bool _Cache_hash_code>
     struct _Node_iterator_base
     {
-  typedef _Hash_node<_Value, _Cache_hash_code>	__node_type;
+  using __node_type = _Hash_node<_Value, _Cache_hash_code>;
 
   __node_type*  _M_cur;
 
@@ -282,7 +282,7 @@
     struct _Node_const_iterator
     : public _Node_iterator_base<_Value, __cache>
     {
- private:
+private:
   using __base_type = _Node_iterator_base<_Value, __cache>;
   using __node_type = typename __base_type::__node_type;
 
@@ -941,6 +941,17 @@
 };
 
   /**
+   *  Primary class template _Local_iterator_base.
+   *
+   *  Base class for local iterators, used to iterate within a bucket
+   *  but not between buckets.
+   */
+  template<typename _Key, typename _Value, typename _ExtractKey,
+	   typename _H1, typename _H2, typename _Hash,
+	   bool __cache_hash_code>
+    struct _Local_iterator_base;
+
+  /**
*  Primary class template _Hash_code_base.
*
*  Encapsulates two policy issues that aren't quite orthogonal.
@@ -974,8 +985,8 @@
   private _Hashtable_ebo_helper1, _Hash

Fwd: Re: Export _Prime_rehash_policy symbols

2013-02-01 Thread François Dumont

Test successful so attached patch applied.

2013-02-01  François Dumont fdum...@gcc.gnu.org

* include/bits/hashtable_policy.h
(_Prime_rehash_policy::_M_next_bkt)
(_Prime_rehash_policy::_M_need_rehash): Move definition...
* src/c++11/hashtable_c++0x.cc: ... here.
* src/shared/hashtable-aux.cc: Remove c++config.h include.
* config/abi/gnu.ver (GLIBCXX_3.4.18): Export _Prime_rehash_policy
symbols.

François


On 01/30/2013 11:12 AM, Paolo Carlini wrote:
... before committing, please double check that we aren't breaking
--enable-symvers=gnu-versioned-namespace, wouldn't be the first time
that we do that and we notice only much later. At minimum build with 
it and run the testsuite.


Paolo.





Index: include/bits/hashtable_policy.h
===
--- include/bits/hashtable_policy.h	(revision 195557)
+++ include/bits/hashtable_policy.h	(working copy)
@@ -369,7 +369,8 @@
 
 // Return a bucket count appropriate for n elements
 std::size_t
-_M_bkt_for_elements(std::size_t __n) const;
+_M_bkt_for_elements(std::size_t __n) const
+{ return __builtin_ceil(__n / (long double)_M_max_load_factor); }
 
 // __n_bkt is current bucket count, __n_elt is current element count,
 // and __n_ins is number of elements to be inserted.  Do we need to
@@ -397,77 +398,6 @@
 mutable std::size_t  _M_next_resize;
   };
 
-  extern const unsigned long __prime_list[];
-
-  // XXX This is a hack.  There's no good reason for any of
-  // _Prime_rehash_policy's member functions to be inline.
-
-  // Return a prime no smaller than n.
-  inline std::size_t
-  _Prime_rehash_policy::
-  _M_next_bkt(std::size_t __n) const
-  {
-// Optimize lookups involving the first elements of __prime_list.
-// (useful to speed-up, eg, constructors)
-static const unsigned char __fast_bkt[12]
-  = { 2, 2, 2, 3, 5, 5, 7, 7, 11, 11, 11, 11 };
-
-    if (__n <= 11)
-  {
-	_M_next_resize
-	  = __builtin_ceil(__fast_bkt[__n]
-			   * (long double)_M_max_load_factor);
-	return __fast_bkt[__n];
-  }
-
-const unsigned long* __next_bkt
-  = std::lower_bound(__prime_list + 5, __prime_list + _S_n_primes,
-			 __n);
-_M_next_resize
-  = __builtin_ceil(*__next_bkt * (long double)_M_max_load_factor);
-return *__next_bkt;
-  }
-
-  // Return the smallest integer p such that alpha p >= n, where alpha
-  // is the load factor.
-  inline std::size_t
-  _Prime_rehash_policy::
-  _M_bkt_for_elements(std::size_t __n) const
-  { return __builtin_ceil(__n / (long double)_M_max_load_factor); }
-
-  // Finds the smallest prime p such that alpha p > __n_elt + __n_ins.
-  // If p > __n_bkt, return make_pair(true, p); otherwise return
-  // make_pair(false, 0).  In principle this isn't very different from
-  // _M_bkt_for_elements.
-
-  // The only tricky part is that we're caching the element count at
-  // which we need to rehash, so we don't have to do a floating-point
-  // multiply for every insertion.
-
-  inline std::pair<bool, std::size_t>
-  _Prime_rehash_policy::
-  _M_need_rehash(std::size_t __n_bkt, std::size_t __n_elt,
-		 std::size_t __n_ins) const
-  {
-    if (__n_elt + __n_ins >= _M_next_resize)
-  {
-	long double __min_bkts = (__n_elt + __n_ins)
- / (long double)_M_max_load_factor;
-	if (__min_bkts >= __n_bkt)
-	  return std::make_pair(true,
-	    _M_next_bkt(std::max<std::size_t>(__builtin_floor(__min_bkts) + 1,
-	  __n_bkt * _S_growth_factor)));
-	else
-	  {
-	_M_next_resize
-	  = __builtin_floor(__n_bkt * (long double)_M_max_load_factor);
-	return std::make_pair(false, 0);
-	  }
-  }
-else
-  return std::make_pair(false, 0);
-  }
-
   // Base classes for std::_Hashtable.  We define these base classes
   // because in some cases we want to do different things depending on
   // the value of a policy class.  In some cases the policy class
Index: src/shared/hashtable-aux.cc
===
--- src/shared/hashtable-aux.cc	(revision 195557)
+++ src/shared/hashtable-aux.cc	(working copy)
@@ -1,6 +1,6 @@
 // std::__detail and std::tr1::__detail definitions -*- C++ -*-
 
-// Copyright (C) 2007, 2009, 2011 Free Software Foundation, Inc.
+// Copyright (C) 2007-2013 Free Software Foundation, Inc.
 //
 // This file is part of the GNU ISO C++ Library.  This library is free
 // software; you can redistribute it and/or modify it under the
@@ -22,8 +22,6 @@
 // see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
 // <http://www.gnu.org/licenses/>.
 
-#include bits/c++config.h
-
 namespace __detail
 {
 _GLIBCXX_BEGIN_NAMESPACE_VERSION
Index: src/c++11/hashtable_c++0x.cc
===
--- src/c++11/hashtable_c++0x.cc	(revision 195557)
+++ src/c++11/hashtable_c++0x.cc	(working copy)
@@ -1,6 +1,6 @@
 // std::__detail definitions -*- C++ -*-
 
-// Copyright (C) 2007, 2008, 2009

hasher speed traits

2013-02-02 Thread François Dumont

Hi

Here is the last patch I can think of for 4.8. Thanks to it, the default
performance reported in performance/23_containers/insert/54075.cc and
performance/23_containers/insert_erase/41975.cc is always the best:


54075.cc  std::unordered_set without hash code cached  30 insertion attempts, 30 inserted   10r   8u  1s  13761936mem  0pf
54075.cc  std::unordered_set without hash code cached  10 times insertion of 30 elements    31r  31u  0s         0mem  0pf

54075.cc  std::unordered_set with hash code cached     30 insertion attempts, 30 inserted   10r   9u  1s  18562000mem  0pf
54075.cc  std::unordered_set with hash code cached     10 times insertion of 30 elements    34r  35u  0s         0mem  0pf

54075.cc  std::unordered_set default cache             30 insertion attempts, 30 inserted    9r   8u  0s  13761936mem  0pf
54075.cc  std::unordered_set default cache             10 times insertion of 30 elements    31r  32u  0s         0mem  0pf

41975.cc  std::unordered_set<string> without hash code cached: first insert           9r  9u  0s   8450336mem  0pf
41975.cc  std::unordered_set<string> without hash code cached: erase from iterator    6r  5u  0s  -6400096mem  0pf
41975.cc  std::unordered_set<string> without hash code cached: second insert          6r  5u  0s       640mem  0pf
41975.cc  std::unordered_set<string> without hash code cached: erase from key         5r  5u  0s      -640mem  0pf

41975.cc  std::unordered_set<string> with hash code cached: first insert              5r  5u  1s   8450336mem  0pf
41975.cc  std::unordered_set<string> with hash code cached: erase from iterator       4r  3u  0s  -6400096mem  0pf
41975.cc  std::unordered_set<string> with hash code cached: second insert             3r  3u  0s   6400016mem  0pf
41975.cc  std::unordered_set<string> with hash code cached: erase from key            4r  3u  0s  -6400016mem  0pf

41975.cc  std::unordered_set<string> default cache: first insert                      5r  5u  1s   8450336mem  0pf
41975.cc  std::unordered_set<string> default cache: erase from iterator               4r  3u  0s  -6400096mem  0pf
41975.cc  std::unordered_set<string> default cache: second insert                     3r  3u  0s       640mem  0pf
41975.cc  std::unordered_set<string> default cache: erase from key                    4r  3u  0s      -640mem  0pf


2013-02-02  François Dumont  fdum...@gcc.gnu.org

* include/bits/functional_hash.h (std::__is_fast_hash): New.
* include/bits/basic_string.h: Specialize previous to mark
std::hash for string types as slow.
* include/bits/hashtable.h (__cache_default): Replace is_integral
with __is_fast_hash.
* src/c++11/hash_c++0x.cc: Add type_traits include.

Tested under Linux x86_64.

Ok to commit ?

François
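
(As a usage illustration only: __is_fast_hash is an internal libstdc++
extension point, not a documented API, but a hasher that is expensive to
run can declare itself slow so that the unordered containers cache hash
codes. The slow_hash type below is hypothetical.)

  #include <cstddef>
  #include <string>
  #include <unordered_set>

  struct slow_hash
  {
    std::size_t
    operator()(const std::string& s) const noexcept
    { return s.size(); /* imagine something expensive here */ }
  };

  namespace std
  {
    template<>
      struct __is_fast_hash<slow_hash> : false_type
      { };
  }

  // std::unordered_set<std::string, slow_hash> now stores the hash code
  // in each node instead of recomputing it on rehash and erase.
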
Index: include/bits/functional_hash.h
===
--- include/bits/functional_hash.h	(revision 195686)
+++ include/bits/functional_hash.h	(working copy)
@@ -195,6 +195,18 @@
 
   // @} group hashes
 
+  // Hint about performance of hash functor. If not fast the hash-based
+  // containers will cache the hash code.
+  // Default behavior is to consider that hashers are fast unless specified
+  // otherwise.
+  template<typename _Hash>
+    struct __is_fast_hash : public std::true_type
+    { };
+
+  template<>
+    struct __is_fast_hash<hash<long double>> : public std::false_type
+    { };
+
 _GLIBCXX_END_NAMESPACE_VERSION
 } // namespace
 
Index: include/bits/basic_string.h
===
--- include/bits/basic_string.h	(revision 195686)
+++ include/bits/basic_string.h	(working copy)
@@ -3053,6 +3053,10 @@
   { return std::_Hash_impl::hash(__s.data(), __s.length()); }
 };
 
+  template<>
+    struct __is_fast_hash<hash<string>> : std::false_type
+    { };
+
 #ifdef _GLIBCXX_USE_WCHAR_T
   /// std::hash specialization for wstring.
   template
@@ -3064,6 +3068,10 @@
   { return std::_Hash_impl::hash(__s.data(),
  __s.length() * sizeof(wchar_t)); }
 };
+
+  template<>
+    struct __is_fast_hash<hash<wstring>> : std::false_type
+    { };
 #endif
 #endif /* _GLIBCXX_COMPATIBILITY_CXX0X */
 
@@ -3079,6 +3087,10 @@
  __s.length() * sizeof(char16_t)); }
 };
 
+  template<>
+    struct __is_fast_hash<hash<u16string>> : std::false_type
+    { };
+
   /// std::hash specialization for u32string.
   template
 struct hashu32string
@@ -3089,6 +3101,10 @@
   { return std::_Hash_impl::hash(__s.data(),
  __s.length() * sizeof(char32_t)); }
 };
+
+  template

Re: [PATCH, PR] Crash of Bessel functions at x==0!

2013-02-09 Thread François Dumont

Attached patch applied then.

2013-02-09  François Dumont  fdum...@gcc.gnu.org

* include/tr1/bessel_function.tcc (__cyl_bessel_ij_series): Code
simplification.


On 02/08/2013 08:46 PM, Paolo Carlini wrote:

On 02/08/2013 07:08 PM, François Dumont wrote:

Just a small remark, in bessel_function.tcc, the following:

+  if (__x == _Tp(0))
+{
+  if (__nu == _Tp(0))
+return _Tp(1);
+  else if (__nu == _Tp(1))
+return _Tp(0);
+  else
+return _Tp(0);
+}

could be simplified into

+  if (__x == _Tp(0))
+return (__nu == _Tp(0)) ? _Tp(1) : _Tp(0);
Thanks Francois. Besides the tiny-winy specific issue, we can all 
learn why normally unrelated changes should not be bundled together in 
the same patch, even more so when the more substantive one is by far 
the smaller.


Anyway, change pre-approved, whoever cares to commit it.

Thanks,
Paolo.



Index: include/tr1/bessel_function.tcc
===
--- include/tr1/bessel_function.tcc	(revision 195919)
+++ include/tr1/bessel_function.tcc	(working copy)
@@ -409,14 +409,8 @@
unsigned int __max_iter)
 {
   if (__x == _Tp(0))
-	{
-  if (__nu == _Tp(0))
-return _Tp(1);
-  else if (__nu == _Tp(1))
-return _Tp(0);
-  else
-return _Tp(0);
-	}
+	return __nu == _Tp(0) ? _Tp(1) : _Tp(0);
+
   const _Tp __x2 = __x / _Tp(2);
   _Tp __fact = __nu * std::log(__x2);
 #if _GLIBCXX_USE_C99_MATH_TR1


Re: unordered containers doc

2013-02-11 Thread François Dumont
That's crystal clear. I think I can recognize one or two words from my 
original proposal like 'The' or 'in' :-)


François


On 02/11/2013 01:24 AM, Jonathan Wakely wrote:

On 7 February 2013 21:01, François Dumont wrote:

Thanks for taking care of it. Here is another version, I think clearer.
Reading the doc I also found an unfinished sentence in an other chapter, it
is in the patch.

I made quite a few changes, partly to address the is_copy_assignable
check I added for PR 56267, the change I committed is attached.
Please let me know if you think it's wrong or unclear.

2013-02-10  François Dumont  fdum...@gcc.gnu.org
 Jonathan Wakely  jwakely@gmail.com

 * doc/xml/manual/containers.xml: Add section on unordered containers.
 * doc/xml/manual/using.xml: Fix incomplete sentence.

Tested with doc-xml-validate-docbook and doc-html-docbook, committed to trunk.




Re: [patch] fix libstdc++/56278

2013-02-13 Thread François Dumont

Committed then.

2013-02-13  François Dumont  fdum...@gcc.gnu.org

* include/bits/hashtable_policy.h (_Hash_code_base): Restore
default constructor protected.
* include/bits/hashtable.h: static assert that _Hash_code_base has
a default constructor available through inheritance.


On 02/13/2013 12:36 PM, Jonathan Wakely wrote:

On 13 February 2013 10:19, Paolo Carlini wrote:

On 02/12/2013 09:54 PM, François Dumont wrote:

 Of course this is not mandatory but I think _Hash_code_base would be
cleaner this way only exposing as public what is required by C++11. It can
also wait for 4.9.

I like it. Thanks Francois. If Jon has no objections over the next day or
so, please go ahead.

No objection, I agree it's suitable for 4.8 as it's refining my
regression fix, but I don't think it really makes any difference.
There is nothing required by C++11 for _Hash_code_base's constructors
because it's an implementation detail, and the only way a user can
tell the difference between a public and protected constructor is
trying to construct _Hash_code_base directly, which is obviously
unsupported.



Index: include/bits/hashtable_policy.h
===
--- include/bits/hashtable_policy.h	(revision 195955)
+++ include/bits/hashtable_policy.h	(working copy)
@@ -918,15 +918,13 @@
   using __ebo_extract_key = _Hashtable_ebo_helper<0, _ExtractKey>;
   using __ebo_hash = _Hashtable_ebo_helper<1, _Hash>;
 
-public:
-  // We need the default constructor for the local iterators.
-  _Hash_code_base() = default;
-
 protected:
   typedef void* 	__hash_code;
   typedef _Hash_node<_Value, false>			__node_type;
 
-protected:
+  // We need the default constructor for the local iterators.
+  _Hash_code_base() = default;
+
+  _Hash_code_base(const _ExtractKey& __ex, const _H1&, const _H2&,
 		  const _Hash& __h)
   : __ebo_extract_key(__ex), __ebo_hash(__h) { }
@@ -1004,13 +1002,13 @@
   hash_function() const
   { return _M_h1(); }
 
-  // We need the default constructor for the local iterators.
-  _Hash_code_base() = default;
-
 protected:
   typedef std::size_t __hash_code;
   typedef _Hash_node<_Value, false>			__node_type;
 
+  // We need the default constructor for the local iterators.
+  _Hash_code_base() = default;
+
+  _Hash_code_base(const _ExtractKey& __ex,
 		  const _H1& __h1, const _H2& __h2,
 		  const _Default_ranged_hash&)
Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 195955)
+++ include/bits/hashtable.h	(working copy)
@@ -266,7 +266,10 @@
   // __hash_code_base above to compute node bucket index so it has to be
   // default constructible.
   static_assert(__if_hash_not_cached<
-		  is_default_constructible<__hash_code_base>>::value,
+		is_default_constructible<
+		  // We use _Hashtable_ebo_helper to access the protected
+		  // default constructor.
+		  __detail::_Hashtable_ebo_helper<0, __hash_code_base>>>::value,
 		"Cache the hash code or make functors involved in hash code"
 		" and bucket index computation default constructible");
 


std::pair copy and move constructor

2013-02-15 Thread François Dumont

Hi

I had a problem with the result of
std::is_copy_assignable<std::pair<const int, int>>::type which used to
be true_type. So here is a patch to fix that.
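
(A minimal check of the fixed behavior, not one of the new test files:
with the defaulted assignment operators, a const member correctly makes
the pair non-copy-assignable.)

  #include <type_traits>
  #include <utility>

  static_assert(!std::is_copy_assignable<std::pair<const int, int>>::value,
                "a pair with a const member is not copy-assignable");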


2013-02-15  François Dumont  fdum...@gcc.gnu.org

* include/bits/stl_pair.h (pair): Use default implementation for
copy and move constructors.
* testsuite/20_util/pair/is_copy_assignable.cc: New.
* testsuite/20_util/pair/is_move_assignable.cc: New.

I kept some checks commented out. For is_copy_assignable.cc it looks
like DeletedMoveAssignClass also has its copy assignment operator
deleted. In is_move_assignable.cc, even when the pair is composed only of
DeletedMoveAssignClass, it looks like the pair still has a move
assignment operator.


I was surprised to see that those operators were not already using
the default implementation, so sorry if I missed the mails explaining why.


Tested under Linux x86_64.

François
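(A quick illustration of what the defaulted operators buy us; this is my
own sketch compiled against a patched library, not part of the patch
itself. The compiler now deletes the assignment operators whenever a
member is not assignable, so the traits report the right answer.)

#include <type_traits>
#include <utility>

// A const member makes the defaulted copy assignment deleted, so the trait
// is now false instead of the bogus true_type reported before the patch.
static_assert(!std::is_copy_assignable<std::pair<const int, int> >::value,
	      "pair with const member must not be copy assignable");
static_assert(std::is_copy_assignable<std::pair<int, int> >::value,
	      "plain pair stays copy assignable");

int main() { }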

Index: include/bits/stl_pair.h
===
--- include/bits/stl_pair.h	(revision 196091)
+++ include/bits/stl_pair.h	(working copy)
@@ -150,22 +150,10 @@
 	pair(piecewise_construct_t, tuple<_Args1...>, tuple<_Args2...>);
 
   pair&
-  operator=(const pair& __p)
-  {
-	first = __p.first;
-	second = __p.second;
-	return *this;
-  }
+  operator=(const pair&) = default;
 
   pair&
-  operator=(pair&& __p)
-  noexcept(__and_<is_nothrow_move_assignable<_T1>,
-	  is_nothrow_move_assignable<_T2>>::value)
-  {
-	first = std::forward<first_type>(__p.first);
-	second = std::forward<second_type>(__p.second);
-	return *this;
-  }
+  operator=(pair&&) = default;
 
   template<class _U1, class _U2>
 	pair&
Index: testsuite/20_util/pair/is_move_assignable.cc
===
--- testsuite/20_util/pair/is_move_assignable.cc	(revision 0)
+++ testsuite/20_util/pair/is_move_assignable.cc	(revision 0)
@@ -0,0 +1,54 @@
+// { dg-do compile }
+// { dg-options "-std=c++11" }
+
+// Copyright (C) 2013 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+//
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// http://www.gnu.org/licenses/.
+
+#include <utility>
+#include <testsuite_tr1.h>
+
+using namespace __gnu_test;
+
+typedef std::pair<int, int>					tt1;
+typedef std::pair<int, double>					tt2;
+typedef std::pair<const int, int>				tt3;
+typedef std::pair<int, const int>				tt4;
+typedef std::pair<NoexceptCopyAssignClass,
+		  NoexceptCopyAssignClass>			tt5;
+typedef std::pair<ExceptCopyAssignClass, ExceptCopyAssignClass>	tt6;
+typedef std::pair<ExceptCopyAssignClass, double>		tt7;
+typedef std::pair<NoexceptCopyAssignClass,
+		  ExceptCopyAssignClass>			tt8;
+typedef std::pair<int,
+		  DeletedCopyAssignClass>			tt9;
+typedef std::pair<int,
+		  DeletedMoveAssignClass>			tt10;
+//typedef std::pair<DeletedMoveAssignClass,
+//		  DeletedMoveAssignClass>			tt11;
+
+
+static_assert(std::is_move_assignable<tt1>::value, "Error");
+static_assert(std::is_move_assignable<tt2>::value, "Error");
+static_assert(std::is_move_assignable<tt3>::value, "Error");
+static_assert(std::is_move_assignable<tt4>::value, "Error");
+static_assert(std::is_move_assignable<tt5>::value, "Error");
+static_assert(std::is_move_assignable<tt6>::value, "Error");
+static_assert(std::is_move_assignable<tt7>::value, "Error");
+static_assert(std::is_move_assignable<tt8>::value, "Error");
+static_assert(std::is_move_assignable<tt9>::value, "Error");
+static_assert(std::is_move_assignable<tt10>::value, "Error");
+//static_assert(!std::is_move_assignable<tt11>::value, "Error");
Index: testsuite/20_util/pair/is_copy_assignable.cc
===
--- testsuite/20_util/pair/is_copy_assignable.cc	(revision 0)
+++ testsuite/20_util/pair/is_copy_assignable.cc	(revision 0)
@@ -0,0 +1,51 @@
+// { dg-do compile }
+// { dg-options "-std=c++11" }
+
+// Copyright (C) 2013 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+//
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS

Re: [v3] PR libstdc++/50529

2011-10-01 Thread François Dumont

On 09/29/2011 10:59 PM, Paolo Carlini wrote:

Ok to commit ?

Ok. These patches are going also to 4_6-branch.

Paolo


Attached patch applied; thanks Paolo for applying it to the 4.6 branch.

2011-10-01  François Dumont fdum...@gcc.gnu.org

* include/debug/vector (vector::erase(iterator, iterator)): Check
iterators equality using normal iterators.
* include/debug/deque (deque::erase(iterator, iterator)):
Likewise.


Regards

Index: include/debug/vector
===
--- include/debug/vector	(revision 179319)
+++ include/debug/vector	(working copy)
@@ -499,7 +499,7 @@
 	// 151. can't currently clear() empty container
 	__glibcxx_check_erase_range(__first, __last);
 
-	if (__first != __last)
+	if (__first.base() != __last.base())
 	  {
 	difference_type __offset = __first.base() - _Base::begin();
 	_Base_iterator __res = _Base::erase(__first.base(),
Index: include/debug/deque
===
--- include/debug/deque	(revision 179319)
+++ include/debug/deque	(working copy)
@@ -465,7 +465,7 @@
 	// 151. can't currently clear() empty container
 	__glibcxx_check_erase_range(__first, __last);
 
-	if (__first == __last)
+	if (__first.base() == __last.base())
 	  return __first;
 else if (__first.base() == _Base::begin()
 		 || __last.base() == _Base::end())


Re: [v3] fix libstdc++/52476

2012-04-09 Thread François Dumont

Attached patch applied to 4_7-branch.

2012-04-09  François Dumont fdum...@gcc.gnu.org

PR libstdc++/52476
* include/bits/hashtable.h (_Hashtable::_M_rehash_aux): Add.
(_Hashtable::_M_rehash): Use the latter.
* testsuite/23_containers/unordered_multimap/insert/52476.cc: New.
* testsuite/23_containers/unordered_multiset/insert/52476.cc: New.

Tested under linux x86_64.

I don't think I have the necessary rights to close the PR on Bugzilla;
I haven't been able to do so.


François

On 04/02/2012 12:12 AM, Paolo Carlini wrote:

Hi,

Attached patch applied.

2012-03-16  François Dumont fdum...@gcc.gnu.org

PR libstdc++/52476
* include/bits/hashtable.h (_Hashtable::_M_rehash_aux): Add.
(_Hashtable::_M_rehash): Use the latter.
* testsuite/23_containers/unordered_multimap/insert/52476.cc: 
New.
* testsuite/23_containers/unordered_multiset/insert/52476.cc: 
New.
Francois, at your ease, I think it's time to apply the fix to 
4_7-branch too and resolve the PR. By the way, Daniel confirmed in 
private email that mainline works just fine now.


Thanks,
Paolo.



Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 186244)
+++ include/bits/hashtable.h	(working copy)
@@ -596,6 +596,12 @@
   // reserve, if present, comes from _Rehash_base.
 
 private:
+  // Helper rehash method used when keys are unique.
+  void _M_rehash_aux(size_type __n, std::true_type);
+
+  // Helper rehash method used when keys can be non-unique.
+  void _M_rehash_aux(size_type __n, std::false_type);
+
   // Unconditionally change size of bucket array to n, restore hash policy
   // state to __state on exception.
   void _M_rehash(size_type __n, const _RehashPolicyState& __state);
@@ -1592,41 +1598,145 @@
 {
   __try
 	{
-	  _Bucket* __new_buckets = _M_allocate_buckets(__n);
-	  _Node* __p = _M_begin();
-	  _M_before_begin._M_nxt = nullptr;
-	  std::size_t __cur_bbegin_bkt;
-	  while (__p)
+	  _M_rehash_aux(__n, integral_constant<bool, __uk>());
+	}
+  __catch(...)
+	{
+	  // A failure here means that buckets allocation failed.  We only
+	  // have to restore hash policy previous state.
+	  _M_rehash_policy._M_reset(__state);
+	  __throw_exception_again;
+	}
+}
+
+  // Rehash when there is no equivalent elements.
+  template<typename _Key, typename _Value,
+	   typename _Allocator, typename _ExtractKey, typename _Equal,
+	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
+	   bool __chc, bool __cit, bool __uk>
+    void
+    _Hashtable<_Key, _Value, _Allocator, _ExtractKey, _Equal,
+	       _H1, _H2, _Hash, _RehashPolicy, __chc, __cit, __uk>::
+    _M_rehash_aux(size_type __n, std::true_type)
+{
+  _Bucket* __new_buckets = _M_allocate_buckets(__n);
+  _Node* __p = _M_begin();
+  _M_before_begin._M_nxt = nullptr;
+  std::size_t __bbegin_bkt;
+  while (__p)
+	{
+	  _Node* __next = __p->_M_next();
+	  std::size_t __bkt = _HCBase::_M_bucket_index(__p, __n);
+	  if (!__new_buckets[__bkt])
 	{
-	  _Node* __next = __p->_M_next();
-	  std::size_t __new_index = _HCBase::_M_bucket_index(__p, __n);
-	  if (!__new_buckets[__new_index])
+	  __p->_M_nxt = _M_before_begin._M_nxt;
+	  _M_before_begin._M_nxt = __p;
+	  __new_buckets[__bkt] = &_M_before_begin;
+	  if (__p->_M_nxt)
+		__new_buckets[__bbegin_bkt] = __p;
+	  __bbegin_bkt = __bkt;
+	}
+	  else
+	{
+	  __p->_M_nxt = __new_buckets[__bkt]->_M_nxt;
+	  __new_buckets[__bkt]->_M_nxt = __p;
+	}
+	  __p = __next;
+	}
+  _M_deallocate_buckets(_M_buckets, _M_bucket_count);
+  _M_bucket_count = __n;
+  _M_buckets = __new_buckets;
+}
+
+  // Rehash when there can be equivalent elements, preserve their relative
+  // order.
+  template<typename _Key, typename _Value,
+	   typename _Allocator, typename _ExtractKey, typename _Equal,
+	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
+	   bool __chc, bool __cit, bool __uk>
+    void
+    _Hashtable<_Key, _Value, _Allocator, _ExtractKey, _Equal,
+	       _H1, _H2, _Hash, _RehashPolicy, __chc, __cit, __uk>::
+    _M_rehash_aux(size_type __n, std::false_type)
+{
+  _Bucket* __new_buckets = _M_allocate_buckets(__n);
+
+  _Node* __p = _M_begin();
+  _M_before_begin._M_nxt = nullptr;
+  std::size_t __bbegin_bkt;
+  std::size_t __prev_bkt;
+  _Node* __prev_p = nullptr;
+  bool __check_bucket = false;
+
+  while (__p)
+	{
+	  bool __check_now = true;
+	  _Node* __next = __p->_M_next();
+	  std::size_t __bkt = _HCBase::_M_bucket_index(__p, __n);
+
+	  if (!__new_buckets[__bkt])
+	{
+	  __p->_M_nxt = _M_before_begin._M_nxt;
+	  _M_before_begin._M_nxt = __p;
+	  __new_buckets[__bkt] = &_M_before_begin;
+	  if (__p->_M_nxt)
+		__new_buckets[__bbegin_bkt] = __p;
+	  __bbegin_bkt = __bkt;

Re: PR 53115

2012-05-01 Thread François Dumont
unordered_multilmap test added, attached patch applied to 4.7 branch and 
trunk.


This bug was not so difficult to fix. It would even have been quite easy
to detect with a good test coverage tool showing that not all possible
paths had been tested in this method. I hope to be able to make some
progress on this subject in the future. However I will have a try with
Valgrind.


I can only add comments in Bugzilla, so I let you set this issue as resolved.

François


I will have a run with Valgrind

2012-05-01  François Dumont fdum...@gcc.gnu.org

PR libstdc++/53115
* include/bits/hashtable.h
(_Hashtable::_M_rehash_aux(size_type, false_type)): Fix buckets
after insertion of several equivalent elements.
* testsuite/23_containers/unordered_multiset/insert/53115.cc: New.
* testsuite/23_containers/unordered_multimap/insert/53115.cc: New.
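(A minimal reproducer in the spirit of the new 53115 tests; this is my own
sketch using only the standard interface: insert enough equivalent keys to
force rehashes, then check the buckets still hold everything.)

#include <unordered_set>
#include <cassert>

int main()
{
  std::unordered_multiset<int> ms;
  // Ten equivalent elements per key; the growing container rehashes
  // several times while sequences of equivalent nodes are in flight.
  for (int i = 0; i != 1000; ++i)
    ms.insert(i % 100);

  for (int k = 0; k != 100; ++k)
    assert(ms.count(k) == 10);  // fails (or crashes) with corrupted buckets
  return 0;
}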
On 04/29/2012 12:42 PM, Paolo Carlini wrote:

On 04/29/2012 12:21 PM, François Dumont wrote:

Hi

 Here is the patch for this PR. We were using buckets before
updating them after having inserted equivalent elements one after
another.


2012-04-29  François Dumont fdum...@gcc.gnu.org

PR libstdc++/53115
* include/bits/hashtable.h
(_Hashtable::_M_rehash_aux(size_type, false_type)): Fix buckets
after insertion of several equivalent elements.
* testsuite/23_containers/unordered_multiset/insert/53115.cc: New.

Tested under Linux x86_64 in the 4.7 branch, normal and debug mode.

Ok to commit ?
Ok, but please also add a similar testcase for unordered_multimap. 
Also - just in case isn't obvious enough - please run such testcases 
through valgrind.


Thanks!
Paolo.




Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 187022)
+++ include/bits/hashtable.h	(working copy)
@@ -1,6 +1,7 @@
 // hashtable.h header -*- C++ -*-
 
-// Copyright (C) 2007, 2008, 2009, 2010, 2011 Free Software Foundation, Inc.
+// Copyright (C) 2007, 2008, 2009, 2010, 2011, 2012
+// Free Software Foundation, Inc.
 //
 // This file is part of the GNU ISO C++ Library.  This library is free
 // software; you can redistribute it and/or modify it under the
@@ -1670,57 +1671,55 @@
 
   while (__p)
 	{
-	  bool __check_now = true;
 	  _Node* __next = __p->_M_next();
 	  std::size_t __bkt = _HCBase::_M_bucket_index(__p, __n);
 
-	  if (!__new_buckets[__bkt])
+	  if (__prev_p && __prev_bkt == __bkt)
 	{
-	  __p->_M_nxt = _M_before_begin._M_nxt;
-	  _M_before_begin._M_nxt = __p;
-	  __new_buckets[__bkt] = &_M_before_begin;
-	  if (__p->_M_nxt)
-		__new_buckets[__bbegin_bkt] = __p;
-	  __bbegin_bkt = __bkt;
+	  // Previous insert was already in this bucket, we insert after
+	  // the previously inserted one to preserve equivalent elements
+	  // relative order.
+	  __p->_M_nxt = __prev_p->_M_nxt;
+	  __prev_p->_M_nxt = __p;
+	  
+	  // Inserting after a node in a bucket require to check that we
+	  // haven't change the bucket last node, in this case next
+	  // bucket containing its before begin node must be updated. We
+	  // schedule a check as soon as we move out of the sequence of
+	  // equivalent nodes to limit the number of checks.
+	  __check_bucket = true;
 	}
 	  else
 	{
-	  if (__prev_p && __prev_bkt == __bkt)
+	  if (__check_bucket)
 		{
-		  // Previous insert was already in this bucket, we insert after
-		  // the previously inserted one to preserve equivalent elements
-		  // relative order.
-		  __p->_M_nxt = __prev_p->_M_nxt;
-		  __prev_p->_M_nxt = __p;
-
-		  // Inserting after a node in a bucket require to check that we
-		  // haven't change the bucket last node, in this case next
-		  // bucket containing its before begin node must be updated. We
-		  // schedule a check as soon as we move out of the sequence of
-		  // equivalent nodes to limit the number of checks.
-		  __check_bucket = true;
-		  __check_now = false;
+		  // Check if we shall update the next bucket because of insertions
+		  // into __prev_bkt bucket.
+		  if (__prev_p->_M_nxt)
+		{
+		  std::size_t __next_bkt
+			= _HCBase::_M_bucket_index(__prev_p->_M_next(), __n);
+		  if (__next_bkt != __prev_bkt)
+			__new_buckets[__next_bkt] = __prev_p;
+		}
+		  __check_bucket = false;
 		}
+	  if (!__new_buckets[__bkt])
+		{
+		  __p->_M_nxt = _M_before_begin._M_nxt;
+		  _M_before_begin._M_nxt = __p;
+		  __new_buckets[__bkt] = &_M_before_begin;
+		  if (__p->_M_nxt)
+		__new_buckets[__bbegin_bkt] = __p;
+		  __bbegin_bkt = __bkt;
+		}
 	  else
 		{
 		  __p->_M_nxt = __new_buckets[__bkt]->_M_nxt;
 		  __new_buckets[__bkt]->_M_nxt = __p;
 		}
 	}
-	  
-	  if (__check_now && __check_bucket)
-	{
-	  // Check if we shall update the next bucket because of insertions
-	  // into __prev_bkt bucket.
-	  if (__prev_p->_M_nxt)
-		{
-		  std::size_t __next_bkt

Re: PR 53115

2012-05-02 Thread François Dumont

On 05/02/2012 06:23 PM, H.J. Lu wrote:

On Tue, May 1, 2012 at 1:23 PM, François Dumont frs.dum...@gmail.com wrote:

unordered_multilmap test added, attached patch applied to 4.7 branch and
trunk.

This bug was not so difficult to fix. It would even have been quite easy to
detect with a good test coverage tool showing that not all possible paths had
been tested in this method. I hope to be able to make some progress on this
subject in the future. However I will have a try with Valgrind.

I can only add comment in bugzilla so I let you set this issue as resolved.

François


I will have a run with Valgrind

2012-05-01  François Dumont  fdum...@gcc.gnu.org

PR libstdc++/53115
* include/bits/hashtable.h
(_Hashtable::_M_rehash_aux(size_type, false_type)): Fix buckets
after insertion of several equivalent elements.
* testsuite/23_containers/unordered_multiset/insert/53115.cc: New.
* testsuite/23_containers/unordered_multimap/insert/53115.cc: New.

This caused:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=53193

It may need other fixes from trunk.


I ran the tests before updating the FSF copyright year.

Sorry



Re: [v3] fix libstdc++/53263

2012-05-11 Thread François Dumont

Attached patch applied to trunk.

2012-05-11  François Dumont fdum...@gcc.gnu.org

PR libstdc++/53263
* include/debug/safe_iterator.h (__gnu_debug::__base): Move...
* include/debug/functions.h: ... Here. Add debug function
overloads to perform checks on normal iterators when possible.
* include/debug/macros.h (__glibcxx_check_heap)
(__glibcxx_check_heap_pred): Use __gnu_debug::__base on iterator range.
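(The overload pattern the patch relies on, reduced to a toy example of my
own: validate once, then unwrap the checked iterator so the actual scan
runs on the underlying iterator.)

#include <iterator>

template<typename It>
bool check_sorted(It first, It last)
{
  if (first == last)
    return true;
  for (It next = std::next(first); next != last; ++first, ++next)
    if (*next < *first)
      return false;
  return true;
}

// Hypothetical stand-in for __gnu_debug::_Safe_iterator.
template<typename Wrapped>
struct safe_iterator
{
  Wrapped base_;
  Wrapped base() const { return base_; }
};

// More specialized overload: once the range is known valid, recurse on the
// plain iterators and skip per-step debug checks.
template<typename Wrapped>
bool check_sorted(safe_iterator<Wrapped> first, safe_iterator<Wrapped> last)
{ return check_sorted(first.base(), last.base()); }

int main()
{
  int a[] = { 1, 2, 3 };
  safe_iterator<int*> f{a}, l{a + 3};
  return check_sorted(f, l) ? 0 : 1;
}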

I checked my Bugzilla account Permissions page and it is written:

There are no permission bits set on your account.

I simply don't know whom to ask for permissions. Should I file a Bugzilla
entry for that?


François

On 05/10/2012 11:18 PM, Paolo Carlini wrote:

Hi,

On 05/09/2012 11:02 PM, François Dumont wrote:

Here is a patch for PR 53263.

Tested under linux x86_64 debug mode.

Ok for trunk and 4.7 branch ?


Thanks. Considering that this isn't a regression and also that nobody 
reported the issue for so many years, the patch seems a bit largish to 
me to go into the branch. Thus, let's apply to mainline only and 
consider the issue closed. If people insist, seriously insist ;) we 
may reconsider for 4.7.2.


Thanks again!
Paolo.

PS: are you finally able to manage Bugzilla, yes?




Index: include/debug/functions.h
===
--- include/debug/functions.h	(revision 187292)
+++ include/debug/functions.h	(working copy)
@@ -31,7 +31,8 @@
 #define _GLIBCXX_DEBUG_FUNCTIONS_H 1
 
 #include <bits/c++config.h>
-#include <bits/stl_iterator_base_types.h> // for iterator_traits, categories
+#include <bits/stl_iterator_base_types.h> // for iterator_traits, categories and
+	  // _Iter_base
 #include <bits/cpp_type_traits.h> // for __is_integer
 #include <debug/formatter.h>
 
@@ -118,11 +119,8 @@
 inline bool
 __valid_range_aux(const _InputIterator& __first,
 		  const _InputIterator& __last, std::__false_type)
-  {
-typedef typename std::iterator_traits<_InputIterator>::iterator_category
-  _Category;
-return __valid_range_aux2(__first, __last, _Category());
-  }
+  { return __valid_range_aux2(__first, __last,
+			  std::__iterator_category(__first)); }
 
   /** Don't know what these iterators are, or if they are even
*  iterators (we may get an integral type for InputIterator), so
@@ -214,6 +212,15 @@
   return true;
 }
 
+  // For performance reason, as the iterator range has been validated, check on
+  // random access safe iterators is done using the base iterator.
+  template<typename _Iterator, typename _Sequence>
+    inline bool
+    __check_sorted_aux(const _Safe_iterator<_Iterator, _Sequence>& __first,
+		   const _Safe_iterator<_Iterator, _Sequence>& __last,
+		   std::random_access_iterator_tag __tag)
+  { return __check_sorted_aux(__first.base(), __last.base(), __tag); }
+
   // Can't check if an input iterator sequence is sorted, because we can't step
   // through the sequence.
   template<typename _InputIterator, typename _Predicate>
@@ -240,19 +247,28 @@
   return true;
 }
 
+  // For performance reason, as the iterator range has been validated, check on
+  // random access safe iterators is done using the base iterator.
+  template<typename _Iterator, typename _Sequence,
+	   typename _Predicate>
+    inline bool
+    __check_sorted_aux(const _Safe_iterator<_Iterator, _Sequence>& __first,
+		   const _Safe_iterator<_Iterator, _Sequence>& __last,
+		   _Predicate __pred,
+		   std::random_access_iterator_tag __tag)
+  { return __check_sorted_aux(__first.base(), __last.base(), __pred, __tag); }
+
   // Determine if a sequence is sorted.
   template<typename _InputIterator>
 inline bool
 __check_sorted(const _InputIterator& __first, const _InputIterator& __last)
 {
-  typedef typename std::iterator_traits<_InputIterator>::iterator_category
-    _Category;
-
   // Verify that the < operator for elements in the sequence is a
   // StrictWeakOrdering by checking that it is irreflexive.
   __glibcxx_assert(__first == __last || !(*__first < *__first));
 
-  return __check_sorted_aux(__first, __last, _Category());
+  return __check_sorted_aux(__first, __last,
+std::__iterator_category(__first));
 }
 
   template<typename _InputIterator, typename _Predicate>
 inline bool
 __check_sorted(const _InputIterator& __first, const _InputIterator& __last,
 	       _Predicate __pred)
 {
-  typedef typename std::iterator_traits<_InputIterator>::iterator_category
-    _Category;
-
   // Verify that the predicate is StrictWeakOrdering by checking that it
   // is irreflexive.
   __glibcxx_assert(__first == __last || !__pred(*__first, *__first));
 
-  return __check_sorted_aux(__first, __last, __pred, _Category());
+  return __check_sorted_aux(__first, __last, __pred,
+std::__iterator_category(__first));
 }
 
   template<typename _InputIterator>
@@ -332,13 +346,11 @@
   return

Re: PR 54075 Fix hashtable::reserve

2012-07-25 Thread François Dumont
Attached patch applied to trunk. I am building 4.7 branch to also apply 
the patch to this branch.


2012-07-25  François Dumont  fdum...@gcc.gnu.org

PR libstdc++/54075
* include/bits/hashtable.h
(_Hashtable::_Hashtable(_InputIterator, _InputIterator,
size_type, ...): Remove std::max usage to guarantee that hashtable
state is consistent with hash policy state.
(_Hashtable::rehash): Likewise. Set _M_prev_resize to 0 to avoid
the hashtable to be shrinking on next insertion.
* testsuite/23_containers/unordered_set/modifiers/reserve.cc: New.
* 
testsuite/23_containers/unordered_multiset/modifiers/reserve.cc: New.

* testsuite/23_containers/unordered_map/modifiers/reserve.cc: New.
* 
testsuite/23_containers/unordered_multimap/modifiers/reserve.cc: New.


François


On 07/25/2012 04:55 PM, Jonathan Wakely wrote:

(CC gcc-patches)

On 25 July 2012 10:26, François Dumont wrote:

Hi

 Here is a patch proposal for PR 54075. I also took the occasion to fix
something that had been delayed so far, namely the usage of std::max to get
the number of buckets to use. The problem with using std::max when using the
hash policy is that the hashtable might end up with a number of buckets
inconsistent with the hash policy state.
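(In user terms, the guarantee the new reserve.cc tests pin down; this is my
own condensed sketch of them:)

#include <unordered_set>
#include <cassert>

int main()
{
  std::unordered_set<int> s;
  s.reserve(1000);
  const std::size_t bkts = s.bucket_count();
  for (int i = 0; i != 1000; ++i)
    s.insert(i);
  // As long as we stay within the reserved element count, no rehash may
  // happen, so the bucket count must be unchanged.
  assert(s.bucket_count() == bkts);
  return 0;
}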

2012-07-25  François Dumont  fdum...@gcc.gnu.org

 PR libstdc++/54075
 * include/bits/hashtable.h
 (_Hashtable::_Hashtable(_InputIterator, _InputIterator,
 size_type, ...): Remove std::max usage to guaranty that hashtable
 state is consistent with hash policy state.

s/guaranty/guarantee/


 (_Hashtable::rehash): Likewise. Set _M_prev_resize to 0 to avoid
 the hashtable to be shrink on next insertion.

s/to be shrink/shrinking/


 * testsuite/23_containers/unordered_set/modifiers/reserve.cc: New.
 * testsuite/23_containers/unordered_multiset/modifiers/reserve.cc: New.
 * testsuite/23_containers/unordered_map/modifiers/reserve.cc: New.
 * testsuite/23_containers/unordered_multimap/modifiers/reserve.cc: New.

 Tested under Linux x86_64.

OK with the changelog edits above.


 I guess it will have to be applied to the 4.7 branch too, confirm please.

Yes, I think so, it's a regression from 4.6.

Thanks for dealing with it so quickly.


Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 189626)
+++ include/bits/hashtable.h	(working copy)
@@ -803,11 +803,11 @@
 	_M_element_count(0),
 	_M_rehash_policy()
   {
-	_M_bucket_count = std::max(_M_rehash_policy._M_next_bkt(__bucket_hint),
-   _M_rehash_policy.
-   _M_bkt_for_elements(__detail::
-		   __distance_fw(__f,
- __l)));
+	_M_bucket_count =
+	  _M_rehash_policy._M_bkt_for_elements(__detail::__distance_fw(__f,
+   __l));
+	if (_M_bucket_count <= __bucket_hint)
+	  _M_bucket_count = _M_rehash_policy._M_next_bkt(__bucket_hint);
 
 	// We don't want the rehash policy to ask for the hashtable to
 	// shrink on the first insertion so we need to reset its
@@ -1609,10 +1609,20 @@
 rehash(size_type __n)
 {
   const __rehash_state __saved_state = _M_rehash_policy._M_state();
-  _M_rehash(std::max(_M_rehash_policy._M_next_bkt(__n),
-			 _M_rehash_policy._M_bkt_for_elements(_M_element_count
-			  + 1)),
-		__saved_state);
+  std::size_t __buckets
+	= _M_rehash_policy._M_bkt_for_elements(_M_element_count + 1);
+  if (__buckets <= __n)
+	__buckets = _M_rehash_policy._M_next_bkt(__n);
+
+  if (__buckets != _M_bucket_count)
+	{
+	  _M_rehash(__buckets, __saved_state);
+
+	  // We don't want the rehash policy to ask for the hashtable to shrink
+	  // on the next insertion so we need to reset its previous resize
+	  // level.
+	  _M_rehash_policy._M_prev_resize = 0;
+	}
 }
 
   templatetypename _Key, typename _Value,
Index: testsuite/23_containers/unordered_multiset/modifiers/reserve.cc
===
--- testsuite/23_containers/unordered_multiset/modifiers/reserve.cc	(revision 0)
+++ testsuite/23_containers/unordered_multiset/modifiers/reserve.cc	(revision 0)
@@ -0,0 +1,48 @@
+// { dg-options "-std=gnu++0x" }
+
+// Copyright (C) 2012 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+//
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3

Re: PR 54075 Fix hashtable::reserve

2012-07-26 Thread François Dumont

Attached patch applied on 4.7 branch.

Tested under Linux x86_64.

I will fix this small English issue in the trunk ChangeLog.

François


On 07/26/2012 11:11 AM, Jonathan Wakely wrote:

On 25 July 2012 21:29, François Dumont wrote:

 (_Hashtable::rehash): Likewise. Set _M_prev_resize to 0 to avoid
 the hashtable to be shrinking on next insertion.

Not "to be shrinking", just "shrinking", but never mind.



Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 189710)
+++ include/bits/hashtable.h	(working copy)
@@ -760,11 +760,12 @@
 	_M_element_count(0),
 	_M_rehash_policy()
   {
-	_M_bucket_count = std::max(_M_rehash_policy._M_next_bkt(__bucket_hint),
-   _M_rehash_policy.
-   _M_bkt_for_elements(__detail::
-		   __distance_fw(__f,
- __l)));
+	_M_bucket_count =
+	  _M_rehash_policy._M_bkt_for_elements(__detail::__distance_fw(__f,
+   __l));
+	if (_M_bucket_count <= __bucket_hint)
+	  _M_bucket_count = _M_rehash_policy._M_next_bkt(__bucket_hint);
+
 // We don't want the rehash policy to ask for the hashtable to shrink
 // on the first insertion so we need to reset its previous resize
 	// level.
@@ -1582,10 +1583,20 @@
 rehash(size_type __n)
 {
   const _RehashPolicyState __saved_state = _M_rehash_policy._M_state();
-  _M_rehash(std::max(_M_rehash_policy._M_next_bkt(__n),
-			 _M_rehash_policy._M_bkt_for_elements(_M_element_count
-			  + 1)),
-		__saved_state);
+  std::size_t __buckets
+	= _M_rehash_policy._M_bkt_for_elements(_M_element_count + 1);
+  if (__buckets <= __n)
+	__buckets = _M_rehash_policy._M_next_bkt(__n);
+
+  if (__buckets != _M_bucket_count)
+	{
+	  _M_rehash(__buckets, __saved_state);
+	  
+	  // We don't want the rehash policy to ask for the hashtable to shrink
+	  // on the next insertion so we need to reset its previous resize
+	  // level.
+	  _M_rehash_policy._M_prev_resize = 0;
+	}
 }
 
   templatetypename _Key, typename _Value,
Index: ChangeLog
===
--- ChangeLog	(revision 189710)
+++ ChangeLog	(working copy)
@@ -1,3 +1,17 @@
+2012-07-26  François Dumont  fdum...@gcc.gnu.org
+
+	PR libstdc++/54075
+	* include/bits/hashtable.h
+	(_Hashtable::_Hashtable(_InputIterator, _InputIterator,
+	size_type, ...): Remove std::max usage to guarantee that hashtable
+	state is consistent with hash policy state.
+	(_Hashtable::rehash): Likewise. Set _M_prev_resize to 0 to avoid
+	the hashtable shrinking on next insertion.
+	* testsuite/23_containers/unordered_set/modifiers/reserve.cc: New.
+	* testsuite/23_containers/unordered_multiset/modifiers/reserve.cc: New.
+	* testsuite/23_containers/unordered_map/modifiers/reserve.cc: New.
+	* testsuite/23_containers/unordered_multimap/modifiers/reserve.cc: New.
+
 2012-07-20  Paolo Carlini  paolo.carl...@oracle.com
 
 	* testsuite/30_threads/thread/adl.cc: Add missing dg-requires.
Index: testsuite/23_containers/unordered_multimap/modifiers/reserve.cc
===
--- testsuite/23_containers/unordered_multimap/modifiers/reserve.cc	(revision 0)
+++ testsuite/23_containers/unordered_multimap/modifiers/reserve.cc	(revision 189889)
@@ -0,0 +1,48 @@
+// { dg-options "-std=gnu++0x" }
+
+// Copyright (C) 2012 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+//
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// http://www.gnu.org/licenses/.
+
+#include <unordered_map>
+#include <testsuite_hooks.h>
+
+bool test __attribute__((unused)) = true;
+
+void test01()
+{
+  const int N = 1000;
+
+  typedef std::unordered_multimap<int, int> MMap;
+  MMap m;
+  m.reserve(N * 2);
+
+  std::size_t bkts = m.bucket_count();
+  for (int i = 0; i != N; ++i)
+{
+  m.insert(std::make_pair(i, i));
+  m.insert(std::make_pair(i, i));
+  // As long as we insert less than the reserved number of elements we
+  // shouldn't experiment any rehash.
+  VERIFY( m.bucket_count() == bkts );
+}
+}
+
+int main()
+{
+  test01();
+  return 0;
+}
Index: testsuite/23_containers/unordered_set/modifiers/reserve.cc
===
--- testsuite/23_containers/unordered_set/modifiers/reserve.cc	(revision 0)
+++ testsuite

Re: PR 54075 Restore 4.6 growth factor

2012-07-29 Thread François Dumont
Patch applied. I usually CC gcc-patches when I signal that a patch has
been applied. Should I also CC it on all my patch proposals?


François

On 07/28/2012 11:18 PM, Jonathan Wakely wrote:

Please remember to CC gcc-patches too.

On 28 July 2012 21:49, François Dumont wrote:

Hi

 Here is the patch to restore the 4.6 growth factor of 2. I preferred to
validate the restored behavior by adding a performance test. Without the
patch the result was:

unordered_set.cc	unordered_set 1000 insertions	403r	329u	73s	402825280mem	0pf

after the patch:

unordered_set.cc	unordered_set 1000 insertions	112r	86u	25s	402825104mem	0pf

It confirms the roughly 3x performance difference.
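(For anyone wanting to reproduce this outside the libstdc++ performance
harness, a rough standalone timing sketch of my own; the element count and
the exact figures are machine dependent:)

#include <chrono>
#include <cstdio>
#include <unordered_set>

int main()
{
  const int sz = 10000000;  // large enough to trigger many rehashes
  std::unordered_set<int> s;

  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i != sz; ++i)
    s.insert(i);
  auto stop = std::chrono::steady_clock::now();

  auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
  std::printf("%d insertions: %lld ms\n", sz, (long long)ms.count());
  return 0;
}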

Tested under Linux x86_64.

2012-07-28  François Dumont  fdum...@gcc.gnu.org

 PR libstdc++/54075
 * include/bits/hashtable_policy.h
 (_Prime_rehash_policy::_M_next_bkt): Add a growth factor set to 2
 to boost growth in the number of buckets.
 * testsuite/performance/23_containers/insert/unordered_set.cc: New.

Even if it is not a Standard conformity issue I think we can apply it to the
4.7 branch too.

Yes, it's a performance regression, so this is OK for trunk and 4.7, thanks.


Index: include/bits/hashtable_policy.h
===
--- include/bits/hashtable_policy.h	(revision 189893)
+++ include/bits/hashtable_policy.h	(working copy)
@@ -395,6 +395,8 @@
 
 enum { _S_n_primes = sizeof(unsigned long) != 8 ? 256 : 256 + 48 };
 
+static const std::size_t _S_growth_factor = 2;
+
 float		_M_max_load_factor;
 mutable std::size_t  _M_prev_resize;
 mutable std::size_t  _M_next_resize;
@@ -415,28 +417,27 @@
 static const unsigned char __fast_bkt[12]
   = { 2, 2, 2, 3, 5, 5, 7, 7, 11, 11, 11, 11 };
 
-if (__n <= 11)
+const std::size_t __grown_n = __n * _S_growth_factor;
+if (__grown_n <= 11)
   {
 	_M_prev_resize = 0;
 	_M_next_resize
-	  = __builtin_ceil(__fast_bkt[__n] * (long double)_M_max_load_factor);
-	return __fast_bkt[__n];
+	  = __builtin_ceil(__fast_bkt[__grown_n]
+			   * (long double)_M_max_load_factor);
+	return __fast_bkt[__grown_n];
   }
 
-const unsigned long* __p
-  = std::lower_bound(__prime_list + 5, __prime_list + _S_n_primes, __n);
+const unsigned long* __next_bkt
+  = std::lower_bound(__prime_list + 5, __prime_list + _S_n_primes,
+			 __grown_n);
+const unsigned long* __prev_bkt
+  = std::lower_bound(__prime_list + 1, __next_bkt, __n / _S_growth_factor);
 
-// Shrink will take place only if the number of elements is small enough
-// so that the prime number 2 steps before __p is large enough to still
-// conform to the max load factor:
 _M_prev_resize
-  = __builtin_floor(*(__p - 2) * (long double)_M_max_load_factor);
-
-// Let's guaranty that a minimal grow step of 11 is used
-if (*__p - __n < 11)
-  __p = std::lower_bound(__p, __prime_list + _S_n_primes, __n + 11);
-_M_next_resize = __builtin_ceil(*__p * (long double)_M_max_load_factor);
-return *__p;
+  = __builtin_floor(*(__prev_bkt - 1) * (long double)_M_max_load_factor);
+_M_next_resize
+  = __builtin_ceil(*__next_bkt * (long double)_M_max_load_factor);
+return *__next_bkt;
   }
 
   // Return the smallest prime p such that alpha p >= n, where alpha
Index: testsuite/performance/23_containers/insert/unordered_set.cc
===
--- testsuite/performance/23_containers/insert/unordered_set.cc	(revision 0)
+++ testsuite/performance/23_containers/insert/unordered_set.cc	(revision 0)
@@ -0,0 +1,42 @@
+// Copyright (C) 2012 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// http://www.gnu.org/licenses/.
+
+// { dg-options "-std=c++11" }
+
+#include <unordered_set>
+#include <testsuite_performance.h>
+
+int main()
+{
+  using namespace __gnu_test;
+
+  time_counter time;
+  resource_counter resource;
+
+  const int sz = 1000;
+
+  std::unordered_set<int> s;
+  start_counters(time, resource);
+
+  for (int i = 0; i != sz ; ++i)
+s.insert(i);
+
+  stop_counters(time, resource);
+  report_performance(__FILE__, "unordered_set 1000 insertions",
+		 time, resource);
+  return 0;
+}


Remove redundant comparison in debug mode

2012-08-01 Thread François Dumont
While verifying the number of comparisons invoked in different algorithms
and different modes, I noticed this small performance issue.
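(What the check computes, as a standalone sketch of my own: the range must
be partitioned with respect to the value. The old code re-tested the
element on which the first loop had just stopped; the fixed version steps
past it, saving one comparison with the pivot value per call.)

#include <cassert>

template<typename It, typename T>
bool check_partitioned_lower(It first, It last, const T& value)
{
  while (first != last && *first < value)
    ++first;
  if (first != last)
    {
      ++first;  // this element is already known to satisfy !(*first < value)
      while (first != last && !(*first < value))
	++first;
    }
  return first == last;
}

int main()
{
  int ok[] = { 1, 2, 3, 10, 11 };
  int bad[] = { 10, 1, 2 };
  assert(check_partitioned_lower(ok, ok + 5, 5));
  assert(!check_partitioned_lower(bad, bad + 3, 5));
  return 0;
}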


2012-08-01  François Dumont  fdum...@gcc.gnu.org

* include/debug/functions.h (__check_partition_lower_aux): Remove
redundant comparison with pivot value.
(__check_partition_upper_aux): Likewise.

Tested under Linux x86_64 debug mode.

Ok for trunk ?

François
Index: include/debug/functions.h
===
--- include/debug/functions.h	(revision 189985)
+++ include/debug/functions.h	(working copy)
@@ -354,8 +354,12 @@
 {
   while (__first != __last && *__first < __value)
 	++__first;
-  while (__first != __last && !(*__first < __value))
-	++__first;
+  if (__first != __last)
+	{
+	  ++__first;
+	  while (__first != __last && !(*__first < __value))
+	++__first;
+	}
   return __first == __last;
 }
 
@@ -368,8 +372,10 @@
 			const _Safe_iterator<_Iterator, _Sequence>& __last,
 			const _Tp& __value,
 			std::random_access_iterator_tag __tag)
-{ return __check_partitioned_lower_aux(__first.base(), __last.base(),
-	   __value, __tag); }
+{
+  return __check_partitioned_lower_aux(__first.base(), __last.base(),
+	   __value, __tag);
+}
 
   // _GLIBCXX_RESOLVE_LIB_DEFECTS
   // 270. Binary search requirements overly strict
@@ -378,8 +384,10 @@
 inline bool
 __check_partitioned_lower(_ForwardIterator __first,
 			  _ForwardIterator __last, const _Tp& __value)
-{ return __check_partitioned_lower_aux(__first, __last, __value,
-	   std::__iterator_category(__first)); }
+{
+  return __check_partitioned_lower_aux(__first, __last, __value,
+	   std::__iterator_category(__first));
+}
 
   template<typename _ForwardIterator, typename _Tp>
 inline bool
@@ -389,8 +397,12 @@
 {
   while (__first != __last && !(__value < *__first))
 	++__first;
-  while (__first != __last && __value < *__first)
-	++__first;
+  if (__first != __last)
+	{
+	  ++__first;
+	  while (__first != __last && __value < *__first)
+	++__first;
+	}
   return __first == __last;
 }
 
@@ -403,15 +415,19 @@
 			const _Safe_iterator<_Iterator, _Sequence>& __last,
 			const _Tp& __value,
 			std::random_access_iterator_tag __tag)
-{ return __check_partitioned_upper_aux(__first.base(), __last.base(),
-	   __value, __tag); }
+{
+  return __check_partitioned_upper_aux(__first.base(), __last.base(),
+	   __value, __tag);
+}
 
   template<typename _ForwardIterator, typename _Tp>
 inline bool
 __check_partitioned_upper(_ForwardIterator __first,
 			  _ForwardIterator __last, const _Tp& __value)
-{ return __check_partitioned_upper_aux(__first, __last, __value,
-	   std::__iterator_category(__first)); }
+{
+  return __check_partitioned_upper_aux(__first, __last, __value,
+	   std::__iterator_category(__first));
+}
 
   template<typename _ForwardIterator, typename _Tp, typename _Pred>
 inline bool
@@ -422,8 +438,12 @@
 {
   while (__first != __last && bool(__pred(*__first, __value)))
 	++__first;
-  while (__first != __last && !bool(__pred(*__first, __value)))
-	++__first;
+  if (__first != __last)
+	{
+	  ++__first;
+	  while (__first != __last && !bool(__pred(*__first, __value)))
+	++__first;
+	}
   return __first == __last;
 }
 
@@ -437,8 +457,10 @@
 			const _Safe_iterator<_Iterator, _Sequence>& __last,
 			const _Tp& __value, _Pred __pred,
 			std::random_access_iterator_tag __tag)
-{ return __check_partitioned_lower_aux(__first.base(), __last.base(),
-	   __value, __pred, __tag); }
+{
+  return __check_partitioned_lower_aux(__first.base(), __last.base(),
+	   __value, __pred, __tag);
+}
 
   // Determine if a sequence is partitioned w.r.t. this element.
   template<typename _ForwardIterator, typename _Tp, typename _Pred>
@@ -446,8 +468,10 @@
 __check_partitioned_lower(_ForwardIterator __first,
 			  _ForwardIterator __last, const _Tp& __value,
 			  _Pred __pred)
-{ return __check_partitioned_lower_aux(__first, __last, __value, __pred,
-	   std::__iterator_category(__first)); }
+{
+  return __check_partitioned_lower_aux(__first, __last, __value, __pred,
+	   std::__iterator_category(__first));
+}
 
   template<typename _ForwardIterator, typename _Tp, typename _Pred>
 inline bool
@@ -458,8 +482,12 @@
 {
   while (__first != __last && !bool(__pred(__value, *__first)))
 	++__first;
-  while (__first != __last && bool(__pred(__value, *__first)))
-	++__first;
+  if (__first != __last)
+	{
+	  ++__first;
+	  while (__first != __last && bool(__pred(__value, *__first)))
+	++__first;
+	}
   return __first == __last;
 }
 
@@ -473,16 +501,20 @@
 			const _Safe_iterator<_Iterator, _Sequence>& __last,
 			const _Tp& __value, _Pred __pred,
 			std::random_access_iterator_tag __tag)
-{ return __check_partitioned_upper_aux(__first.base

Re: Value type of map need not be default copyable

2012-08-08 Thread François Dumont

On 08/08/2012 09:34 AM, Marc Glisse wrote:

On Tue, 7 Aug 2012, Richard Smith wrote:


I've attached a patch for unordered_map which solves the rvalue
reference problem.  For efficiency, I've created a new
_M_emplace_bucket method rather than call emplace directly.

I've verified all libstdc++ tests pass (sorry for the previous
oversight) and am running the full GCC test suite now.  However, I'd
appreciate any feedback on whether this is a reasonable approach.  STL
hacking is way outside my comfort zone.  ;-)

If this looks good, I'll take a stab at std::map.


I think you should remove the mapped_type() argument from the call to
_M_emplace_bucket. In C++11, the mapped_type is not required to be 
copyable

at all, just to be DefaultInsertable.


Indeed. The reason I was talking about emplace is that you want an 
object to be created only at the time the node is created. That might 
mean passing piecewise_construct_t and an empty tuple to emplace 
(otherwise it is too similar to insert). Or for unordered_map where 
the node functions are exposed, you could just create the node 
directly without passing through emplace.


This is what I tried to do in the attached patch. I replaced
_M_insert_bucket with _M_insert_node and used it for the operator[]
implementation. I have also introduced a special std::pair constructor
for container usage so that we do not have to include the whole tuple
machinery just for the associative container implementations.
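(From the user side, the piecewise construction being discussed looks like
this; my own example, not from the patch. The mapped value is built in
place with its default constructor, no copy or move needed:)

#include <string>
#include <tuple>
#include <unordered_map>

struct NonCopyable
{
  NonCopyable() = default;
  NonCopyable(const NonCopyable&) = delete;
};

int main()
{
  std::unordered_map<std::string, NonCopyable> m;
  m.emplace(std::piecewise_construct,
	    std::forward_as_tuple("key"),
	    std::forward_as_tuple());  // default construct the mapped_type
  return 0;
}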


However one test is failing:
/home/fdt/dev/gcc/libstdc++-v3/testsuite/23_containers/unordered_map/insert/array_syntax_move.cc:39:18:
required from here
/home/fdt/dev/gcc-build/x86_64-unknown-linux-gnu/libstdc++-v3/include/bits/stl_pair.h:175:42:
error: use of deleted function '__gnu_test::rvalstruct::rvalstruct(const
__gnu_test::rvalstruct&)'

  : first(std::forward<_T1>(__x)), second() { }

I don't understand why it doesn't use the move constructor. I can't see 
any std::forward call missing. Anyone ?


François

Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 190209)
+++ include/bits/hashtable.h	(working copy)
@@ -584,11 +584,10 @@
   __node_base*
   _M_get_previous_node(size_type __bkt, __node_base* __n);
 
-  template<typename _Arg>
-	iterator
-	_M_insert_bucket(_Arg&&, size_type, __hash_code);
+  // Insert the node n. Assumes key doesn't exist
+  iterator
+  _M_insert_node(size_type __bkt, __hash_code __code, __node_type* __n);
 
-
   template<typename... _Args>
 	std::pair<iterator, bool>
 	_M_emplace(std::true_type, _Args&&... __args);
@@ -1307,54 +1306,49 @@
 	  }
   }
 
-  // Insert v in bucket n (assumes no element with its key already present).
+  // Insert node in bucket bkt (assumes no element with its key already
+  // present). Take ownership of the passed node, deallocate it on exception.
   template<typename _Key, typename _Value,
	   typename _Alloc, typename _ExtractKey, typename _Equal,
	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
	   typename _Traits>
-template<typename _Arg>
-  typename _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
-			  _H1, _H2, _Hash, _RehashPolicy,
-			  _Traits>::iterator
-  _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
-		 _H1, _H2, _Hash, _RehashPolicy, _Traits>::
-  _M_insert_bucket(_Arg&& __v, size_type __n, __hash_code __code)
-  {
-	const __rehash_state __saved_state = _M_rehash_policy._M_state();
-	std::pair<bool, std::size_t> __do_rehash
-	  = _M_rehash_policy._M_need_rehash(_M_bucket_count,
-	_M_element_count, 1);
+typename _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+			_H1, _H2, _Hash, _RehashPolicy,
+			_Traits>::iterator
+_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	   _H1, _H2, _Hash, _RehashPolicy, _Traits>::
+_M_insert_node(size_type __bkt, __hash_code __code, __node_type* __node)
+{
+  const __rehash_state __saved_state = _M_rehash_policy._M_state();
+  std::pair<bool, std::size_t> __do_rehash
+	= _M_rehash_policy._M_need_rehash(_M_bucket_count,
+	  _M_element_count, 1);
 
-	if (__do_rehash.first)
-	  {
-	const key_type& __k = this->_M_extract()(__v);
-	__n = __hash_code_base::_M_bucket_index(__k, __code,
+  if (__do_rehash.first)
+	{
+	  const key_type& __k = this->_M_extract()(__node->_M_v);
+	  __bkt = __hash_code_base::_M_bucket_index(__k, __code,
 		__do_rehash.second);
-	  }
+	}
 
-	__node_type* __node = nullptr;
-	__try
-	  {
-	// Allocate the new node before doing the rehash so that we
-	// don't do a rehash if the allocation throws.
-	__node = _M_allocate_node(std::forward<_Arg>(__v));
-	this->_M_store_code(__node, __code);
-	if (__do_rehash.first)
-	  _M_rehash(__do_rehash.second, __saved_state);
+  __try
+	{
+	  if (__do_rehash.first)
+	_M_rehash(__do_rehash.second, __saved_state);
 
-	_M_insert_bucket_begin(__n, __node);
-	

Re: Value type of map need not be default copyable

2012-08-08 Thread François Dumont

On 08/08/2012 03:39 PM, Paolo Carlini wrote:

On 08/08/2012 03:15 PM, François Dumont wrote:
I have also introduce a special std::pair constructor for container 
usage so that we do not have to include the whole tuple stuff just 
for associative container implementations.

To be clear: sorry, this is not an option.

Paolo.

Then I can only imagine the attached patch, which requires including
<tuple> when including <unordered_map> or <unordered_set>. The
std::pair(piecewise_construct_t, tuple, tuple) constructor is the only
one that allows building a pair using the default constructor for
the second member.
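(What that constructor gives us, in isolation; my own illustration: the
second member is default constructed directly, so even a non copyable type
works.)

#include <tuple>
#include <utility>

struct OnlyDefault
{
  OnlyDefault() = default;
  OnlyDefault(const OnlyDefault&) = delete;
};

int main()
{
  // first is built from the int, second from the empty tuple, i.e. with
  // its default constructor; no temporary OnlyDefault is ever copied.
  std::pair<int, OnlyDefault> p(std::piecewise_construct,
				std::make_tuple(42),
				std::make_tuple());
  return p.first == 42 ? 0 : 1;
}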


In fact, adding declarations for std::make_tuple and
std::forward_as_tuple could avoid including <tuple> from <unordered_set>,
since there is no operator[] on unordered_set or unordered_multiset. But I
am not sure it is worth the effort; tell me.


All unordered tests run under Linux x86_64, normal and debug modes.

2012-08-08  François Dumont  fdum...@gcc.gnu.org
Ollie Wild  a...@google.com

* include/bits/hashtable.h (_Hashtable::_M_insert_bucket):
Replace by ...
(_Hashtable::_M_insert_node): ... this, new.
(_Hashtable::_M_insert(_Args&&, true_type)): Use latter.
* include/bits/hashtable_policy.h (_Map_base::operator[]): Use
latter, emplace the value_type rather than insert.
* include/std/unordered_map: Include tuple.
* include/std/unordered_set: Likewise.
* testsuite/23_containers/unordered_map/operators/2.cc: New.

François

Index: include/std/unordered_map
===
--- include/std/unordered_map	(revision 190209)
+++ include/std/unordered_map	(working copy)
@@ -38,6 +38,7 @@
 #include <utility>
 #include <type_traits>
 #include <initializer_list>
+#include <tuple>
 #include <bits/stl_algobase.h>
 #include <bits/allocator.h>
 #include <bits/stl_function.h> // equal_to, _Identity, _Select1st
Index: include/std/unordered_set
===
--- include/std/unordered_set	(revision 190209)
+++ include/std/unordered_set	(working copy)
@@ -38,6 +38,7 @@
 #include <utility>
 #include <type_traits>
 #include <initializer_list>
+#include <tuple>
 #include <bits/stl_algobase.h>
 #include <bits/allocator.h>
 #include <bits/stl_function.h> // equal_to, _Identity, _Select1st
Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 190209)
+++ include/bits/hashtable.h	(working copy)
@@ -584,11 +584,11 @@
   __node_base*
   _M_get_previous_node(size_type __bkt, __node_base* __n);
 
-  template<typename _Arg>
-	iterator
-	_M_insert_bucket(_Arg&&, size_type, __hash_code);
+  // Insert node in bucket bkt (assumes no element with its key already
+  // present). Take ownership of the node, deallocate it on exception.
+  iterator
+  _M_insert_node(size_type __bkt, __hash_code __code, __node_type* __n);
 
-
   template<typename... _Args>
 	std::pair<iterator, bool>
 	_M_emplace(std::true_type, _Args&&... __args);
@@ -1307,54 +1307,49 @@
 	  }
   }
 
-  // Insert v in bucket n (assumes no element with its key already present).
+  // Insert node in bucket bkt (assumes no element with its key already
+  // present). Take ownership of the node, deallocate it on exception.
   template<typename _Key, typename _Value,
 	   typename _Alloc, typename _ExtractKey, typename _Equal,
 	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
 	   typename _Traits>
-template<typename _Arg>
-  typename _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
-			  _H1, _H2, _Hash, _RehashPolicy,
-			  _Traits>::iterator
-  _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
-		 _H1, _H2, _Hash, _RehashPolicy, _Traits>::
-  _M_insert_bucket(_Arg&& __v, size_type __n, __hash_code __code)
-  {
-	const __rehash_state __saved_state = _M_rehash_policy._M_state();
-	std::pair<bool, std::size_t> __do_rehash
-	  = _M_rehash_policy._M_need_rehash(_M_bucket_count,
-	_M_element_count, 1);
+typename _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+			_H1, _H2, _Hash, _RehashPolicy,
+			_Traits>::iterator
+_Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	   _H1, _H2, _Hash, _RehashPolicy, _Traits>::
+_M_insert_node(size_type __bkt, __hash_code __code, __node_type* __node)
+{
+  const __rehash_state __saved_state = _M_rehash_policy._M_state();
+  std::pair<bool, std::size_t> __do_rehash
+	= _M_rehash_policy._M_need_rehash(_M_bucket_count,
+	  _M_element_count, 1);
 
-	if (__do_rehash.first)
-	  {
-	const key_type& __k = this->_M_extract()(__v);
-	__n = __hash_code_base::_M_bucket_index(__k, __code,
+  if (__do_rehash.first)
+	{
+	  const key_type& __k = this->_M_extract()(__node->_M_v);
+	  __bkt = __hash_code_base::_M_bucket_index(__k, __code,
 		__do_rehash.second);
-	  }
+	}
 
-	__node_type* __node = nullptr;
-	__try
-	  {
-	// Allocate

Re: Value type of map need not be default copyable

2012-08-09 Thread François Dumont

On 08/09/2012 10:35 AM, Paolo Carlini wrote:

Hi,

On 08/09/2012 09:14 AM, Marc Glisse wrote:

On Wed, 8 Aug 2012, François Dumont wrote:


On 08/08/2012 03:39 PM, Paolo Carlini wrote:

On 08/08/2012 03:15 PM, François Dumont wrote:
I have also introduced a special std::pair constructor for
container usage so that we do not have to include the whole tuple
machinery just for associative container implementations.

To be clear: sorry, this is not an option.

Paolo.

   Then I can only imagine the attached patch, which requires
including <tuple> when including <unordered_map> or <unordered_set>. The
std::pair(piecewise_construct_t, tuple, tuple) constructor is the only
one that allows building a pair using the default constructor
for the second member.


I agree that the extra constructor would be convenient (I probably 
would have gone with pair(T&&, __default_construct_t), the symmetric
version, and enough extra constructors to resolve all ambiguities). 
Maybe LWG would consider doing something.
When it does, and the corresponding PR will be *ready* we'll 
reconsider the issue. After all the *months and months and months* 
spent by the LWG adding and removing members from pair and tweaking 
everything wrt the containers and issues *still* popping up (like that 
with the defaulted copy constructor vs insert constraining), and with 
the support for scoped allocators still missing from our 
implementation, we are not adding members to std::pair such easily. 
Sorry, but personally I'm not available now to further discuss this 
specific point.


I was still hoping that for something as simple as mapped_type() we 
wouldn't need the full tuple machinery, and I encourage everybody to 
have another look (while making sure anything we figure out adapts 
smoothly and consistently to std::map), then in a few days we'll take a
final decision. We'll still have chances to further improve the code 
in time for 4.8.0.



+ __p = __h->_M_allocate_node(std::piecewise_construct,
+ std::make_tuple(__k),
+ std::make_tuple());

Don't you want cref(__k)? It might save a move at some point.
Are we already doing that elsewhere? I think we should aim for 
something simple first, then carefully evaluate if the additional 
complexity is worth the cost and in case deploy the superior solution 
consistently everywhere it may apply.


Thanks!
Paolo.



Here is an updated version taking into account the good catch from Marc.
However I prefer to use an explicit instantiation of std::tuple rather than
using cref, which would have implied including <functional> in addition
to <tuple>. I have also updated the test case to use a type without copy
and move constructors.
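(The trade-off being discussed, sketched with std::string standing in for
key_type; my own example: make_tuple deduces a by-value tuple and copies
immediately, while an explicit std::tuple<const key_type&> (or std::cref)
merely stores a reference, so the single copy happens directly into the
node.)

#include <string>
#include <tuple>

void demo(const std::string& k)
{
  auto by_value = std::make_tuple(k);	   // tuple<std::string>: copies now
  std::tuple<const std::string&> by_ref(k);  // no copy until the node is built
  (void)by_value; (void)by_ref;
}

int main() { demo("key"); }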


2012-08-09  François Dumont  fdum...@gcc.gnu.org
Ollie Wild  a...@google.com

* include/bits/hashtable.h (_Hashtable::_M_insert_bucket):
Replace by ...
(_Hashtable::_M_insert_node): ... this, new.
(_Hashtable::_M_insert(_Args&&, true_type)): Use latter.
* include/bits/hashtable_policy.h (_Map_base::operator[]): Use
latter, emplace the value_type rather than insert.
* include/std/unordered_map: Include tuple.
* include/std/unordered_set: Likewise.
* testsuite/util/testsuite_counter_type.h: New.
* testsuite/23_containers/unordered_map/operators/2.cc: New.


François



Index: include/std/unordered_map
===
--- include/std/unordered_map	(revision 190209)
+++ include/std/unordered_map	(working copy)
@@ -38,6 +38,7 @@
 #include utility
 #include type_traits
 #include initializer_list
+#include tuple
 #include bits/stl_algobase.h
 #include bits/allocator.h
 #include bits/stl_function.h // equal_to, _Identity, _Select1st
Index: include/std/unordered_set
===
--- include/std/unordered_set	(revision 190209)
+++ include/std/unordered_set	(working copy)
@@ -38,6 +38,7 @@
 #include utility
 #include type_traits
 #include initializer_list
+#include tuple
 #include bits/stl_algobase.h
 #include bits/allocator.h
 #include bits/stl_function.h // equal_to, _Identity, _Select1st
Index: include/bits/hashtable_policy.h
===
--- include/bits/hashtable_policy.h	(revision 190209)
+++ include/bits/hashtable_policy.h	(working copy)
@@ -577,8 +577,14 @@
   __node_type* __p = __h->_M_find_node(__n, __k, __code);
 
   if (!__p)
-	return __h->_M_insert_bucket(std::make_pair(__k, mapped_type()),
-				     __n, __code)->second;
+	{
+	  __p = __h->_M_allocate_node(std::piecewise_construct,
+				      std::tuple<const key_type&>(__k),
+				      std::make_tuple());
+	  __h->_M_store_code(__p, __code);
+	  return __h->_M_insert_node(__n, __code, __p)->second;
+	}
+
   return (__p->_M_v).second;
 }
 
@@ -598,9 +604,14 @@
   __node_type* __p = __h->_M_find_node(__n, __k, __code);
 
   if (!__p)
-	return __h

Re: Value type of map need not be default copyable

2012-08-11 Thread François Dumont
Here is another attempt. I took the time to refactor the hashtable
implementation. I preferred to rename _M_insert_node into
_M_insert_unique_node and to use it also in the _M_emplace implementation.
I introduced _M_insert_multi_node, which is used in _M_insert and
_M_emplace when keys are not unique.


Your remark about using std::move rather than std::forward, Marc, made
sense but didn't work. I don't understand why, but the new test shows
that std::forward works. If anyone can explain why std::move doesn't
work, I am interested.


Regarding your question about how headers are included, I just followed
the current method. Normally it is done this way to make headers more
reusable, but in this case I agree that hashtable_policy.h can't be
included without <tuple> being included first. Should I put the <tuple>
include into hashtable_policy.h? Adding a declaration of std::tuple in
hashtable_policy.h could make this header less dependent on <tuple>;
should I do so?


2012-08-09  François Dumont  fdum...@gcc.gnu.org
Ollie Wild  a...@google.com

* include/bits/hashtable.h
(_Hashtable::_M_insert_multi_node(hash_code, node_type*)): New.
(_Hashtable::_M_insert(_Args&&, false_type)): Use latter.
(_Hashtable::_M_emplace(false_type, _Args&&...)): Likewise.
(_Hashtable::_M_insert_bucket): Replace by ...
(_Hashtable::_M_insert_unique_node(size_type, hash_code, 
node_type*)):

... this, new.
(_Hashtable::_M_insert(_Args&&, true_type)): Use latter.
(_Hashtable::_M_emplace(true_type, _Args&&...)): Likewise.
* include/bits/hashtable_policy.h (_Map_base::operator[]): Use
latter, emplace the value_type rather than insert.
* include/std/unordered_map: Include tuple.
* include/std/unordered_set: Likewise.
* testsuite/util/testsuite_counter_type.h: New.
* testsuite/23_containers/unordered_map/operators/2.cc: New.

Tested under linux x86_64, normal and debug mode.

Ok for trunk ?

François

On 08/10/2012 01:26 AM, Paolo Carlini wrote:

On 08/09/2012 11:22 PM, Marc Glisse wrote:
I don't know if std:: is needed, but it looks strange to have it only 
on some functions:

std::forward_as_tuple(forward<key_type>(__k)),

Looking at this line again, you seem to be using std::forward on 
something that is not a deduced parameter type. I guess it is 
equivalent to std::move in this case, it just confuses me a bit.

Wanted to point out that yesterday. Please double check std::move.

I realize now that nobody is interested in std::cref, good ;)

Thanks!
Paolo.



Index: include/std/unordered_map
===
--- include/std/unordered_map	(revision 190209)
+++ include/std/unordered_map	(working copy)
@@ -38,6 +38,7 @@
 #include utility
 #include type_traits
 #include initializer_list
+#include tuple
 #include bits/stl_algobase.h
 #include bits/allocator.h
 #include bits/stl_function.h // equal_to, _Identity, _Select1st
Index: include/std/unordered_set
===
--- include/std/unordered_set	(revision 190209)
+++ include/std/unordered_set	(working copy)
@@ -38,6 +38,7 @@
 #include <utility>
 #include <type_traits>
 #include <initializer_list>
+#include <tuple>
 #include <bits/stl_algobase.h>
 #include <bits/allocator.h>
 #include <bits/stl_function.h> // equal_to, _Identity, _Select1st
Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 190209)
+++ include/bits/hashtable.h	(working copy)
@@ -584,10 +584,17 @@
   __node_base*
   _M_get_previous_node(size_type __bkt, __node_base* __n);
 
-      template<typename _Arg>
-	iterator
-	_M_insert_bucket(_Arg&&, size_type, __hash_code);
+  // Insert node with hash code __code, in bucket bkt if no rehash (assumes
+  // no element with its key already present). Take ownership of the node,
+  // deallocate it on exception.
+  iterator
+  _M_insert_unique_node(size_type __bkt, __hash_code __code,
+			__node_type* __n);
 
+  // Insert node with hash code __code. Take ownership of the node,
+  // deallocate it on exception.
+  iterator
+  _M_insert_multi_node(__hash_code __code, __node_type* __n);
 
       template<typename... _Args>
 	std::pair<iterator, bool>
@@ -1214,42 +1221,29 @@
   {
 	// First build the node to get access to the hash code
 	__node_type* __node = _M_allocate_node(std::forward<_Args>(__args)...);
+	const key_type& __k = this->_M_extract()(__node->_M_v);
+	__hash_code __code;
 	__try
 	  {
-	    const key_type& __k = this->_M_extract()(__node->_M_v);
-	    __hash_code __code = this->_M_hash_code(__k);
-	size_type __bkt = _M_bucket_index(__k, __code);
-
-	if (__node_type* __p = _M_find_node(__bkt, __k, __code))
-	  {
-		// There is already an equivalent node, no insertion
-		_M_deallocate_node(__node);
-		return std::make_pair(iterator(__p), false);
-	  }
-
-	// We are going to insert this node
-	    this->_M_store_code

Re: Value type of map need not be default copyable

2012-08-12 Thread François Dumont

On 08/11/2012 03:47 PM, Marc Glisse wrote:

On Sat, 11 Aug 2012, François Dumont wrote:

   Your remark, Marc, on using std::move rather than std::forward made 
sense but didn't work. I don't understand why, but the new test shows 
that std::forward works. If anyone can explain why std::move doesn't 
work, I am interested.


What testcase failed? I just tried the 2.cc file you added with the 
patch, and replacing forward<key_type>(__k) with move(__k) compiled fine.




You are right; I had replaced std::forward<key_type> with 
std::move<key_type>, forcing a wrong type deduction in std::move. With a 
simple std::move() it works fine. So here is the patch again.
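
A minimal sketch of the deduction issue just described (standalone, not 
part of the patch):

#include <utility>

struct Key { };

int main()
{
  Key k;

  // forward<Key> on an lvalue selects the lvalue overload (parameter
  // Key&) and still returns Key&&, i.e. it behaves like a move.
  Key&& r1 = std::forward<Key>(k);

  // move<Key> forces _Tp = Key, so move's parameter becomes Key&&,
  // which the lvalue k cannot bind to: this line does not compile.
  // Key&& r2 = std::move<Key>(k);

  // A plain move deduces _Tp = Key&; reference collapsing makes the
  // parameter Key&, so it works fine.
  Key&& r3 = std::move(k);

  (void)r1; (void)r3;
}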


2012-08-10  François Dumont  fdum...@gcc.gnu.org
Ollie Wild  a...@google.com

* include/bits/hashtable.h
(_Hashtable<>::_M_insert_multi_node(__hash_code, __node_type*)): New.
(_Hashtable<>::_M_insert(_Args&&, false_type)): Use the latter.
(_Hashtable<>::_M_emplace(false_type, _Args&&...)): Likewise.
(_Hashtable<>::_M_insert_bucket): Replace by ...
(_Hashtable<>::_M_insert_unique_node(size_type, __hash_code,
__node_type*)): ... this, new.
(_Hashtable<>::_M_insert(_Args&&, true_type)): Use the latter.
(_Hashtable<>::_M_emplace(true_type, _Args&&...)): Likewise.
* include/bits/hashtable_policy.h (_Map_base::operator[]): Use
latter, emplace the value_type rather than insert.
* include/std/unordered_map: Include tuple.
* include/std/unordered_set: Likewise.
* testsuite/util/testsuite_counter_type.h: New.
* testsuite/23_containers/unordered_map/operators/2.cc: New.

Tested under Linux x86_64.

Ok for trunk ?

François

Index: include/std/unordered_map
===
--- include/std/unordered_map	(revision 190209)
+++ include/std/unordered_map	(working copy)
@@ -38,6 +38,7 @@
 #include <utility>
 #include <type_traits>
 #include <initializer_list>
+#include <tuple>
 #include <bits/stl_algobase.h>
 #include <bits/allocator.h>
 #include <bits/stl_function.h> // equal_to, _Identity, _Select1st
Index: include/std/unordered_set
===
--- include/std/unordered_set	(revision 190209)
+++ include/std/unordered_set	(working copy)
@@ -38,6 +38,7 @@
 #include <utility>
 #include <type_traits>
 #include <initializer_list>
+#include <tuple>
 #include <bits/stl_algobase.h>
 #include <bits/allocator.h>
 #include <bits/stl_function.h> // equal_to, _Identity, _Select1st
Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 190209)
+++ include/bits/hashtable.h	(working copy)
@@ -584,10 +584,17 @@
   __node_base*
   _M_get_previous_node(size_type __bkt, __node_base* __n);
 
-      template<typename _Arg>
-	iterator
-	_M_insert_bucket(_Arg&&, size_type, __hash_code);
+  // Insert node with hash code __code, in bucket bkt if no rehash (assumes
+  // no element with its key already present). Take ownership of the node,
+  // deallocate it on exception.
+  iterator
+  _M_insert_unique_node(size_type __bkt, __hash_code __code,
+			__node_type* __n);
 
+  // Insert node with hash code __code. Take ownership of the node,
+  // deallocate it on exception.
+  iterator
+  _M_insert_multi_node(__hash_code __code, __node_type* __n);
 
       template<typename... _Args>
 	std::pair<iterator, bool>
@@ -1214,42 +1221,29 @@
   {
 	// First build the node to get access to the hash code
 	__node_type* __node = _M_allocate_node(std::forward<_Args>(__args)...);
+	const key_type& __k = this->_M_extract()(__node->_M_v);
+	__hash_code __code;
 	__try
 	  {
-	    const key_type& __k = this->_M_extract()(__node->_M_v);
-	    __hash_code __code = this->_M_hash_code(__k);
-	size_type __bkt = _M_bucket_index(__k, __code);
-
-	if (__node_type* __p = _M_find_node(__bkt, __k, __code))
-	  {
-		// There is already an equivalent node, no insertion
-		_M_deallocate_node(__node);
-		return std::make_pair(iterator(__p), false);
-	  }
-
-	// We are going to insert this node
-	    this->_M_store_code(__node, __code);
-	    const __rehash_state& __saved_state
-	      = _M_rehash_policy._M_state();
-	    std::pair<bool, std::size_t> __do_rehash
-	  = _M_rehash_policy._M_need_rehash(_M_bucket_count,
-		_M_element_count, 1);
-
-	if (__do_rehash.first)
-	  {
-		_M_rehash(__do_rehash.second, __saved_state);
-		__bkt = _M_bucket_index(__k, __code);
-	  }
-
-	_M_insert_bucket_begin(__bkt, __node);
-	++_M_element_count;
-	return std::make_pair(iterator(__node), true);
+	    __code = this->_M_hash_code(__k);
 	  }
 	__catch(...)
 	  {
 	_M_deallocate_node(__node);
 	__throw_exception_again;
 	  }
+
+	size_type __bkt = _M_bucket_index(__k, __code);
+	if (__node_type* __p = _M_find_node(__bkt, __k, __code))
+	  {
+	// There is already an equivalent node, no insertion
+	_M_deallocate_node(__node);
+	return std::make_pair(iterator(__p

Re: Value type of map need not be default copyable

2012-08-13 Thread François Dumont

On 08/13/2012 02:10 PM, Paolo Carlini wrote:

On 08/12/2012 10:00 PM, François Dumont wrote:

Ok for trunk ?

Ok, thanks!

Paolo.

PS: you may want to remove the trailing blank line of 
testsuite_counter_type.h




Attached patch applied.

2012-08-13  François Dumont  fdum...@gcc.gnu.org
Ollie Wild  a...@google.com

* include/bits/hashtable.h
(_Hashtable<>::_M_insert_multi_node(__hash_code, __node_type*)): New.
(_Hashtable<>::_M_insert(_Args&&, false_type)): Use the latter.
(_Hashtable<>::_M_emplace(false_type, _Args&&...)): Likewise.
(_Hashtable<>::_M_insert_bucket): Replace by ...
(_Hashtable<>::_M_insert_unique_node(size_type, __hash_code,
__node_type*)): ... this, new.
(_Hashtable<>::_M_insert(_Args&&, true_type)): Use the latter.
(_Hashtable<>::_M_emplace(true_type, _Args&&...)): Likewise.
* include/bits/hashtable_policy.h (_Map_base::operator[]): Use
latter, emplace the value_type rather than insert.
* include/std/unordered_map: Include tuple.
* include/std/unordered_set: Likewise.
* testsuite/util/testsuite_counter_type.h: New.
* testsuite/23_containers/unordered_map/operators/2.cc: New.

François

Index: include/bits/hashtable_policy.h
===
--- include/bits/hashtable_policy.h	(revision 190353)
+++ include/bits/hashtable_policy.h	(working copy)
@@ -577,8 +577,13 @@
   __node_type* __p = __h->_M_find_node(__n, __k, __code);
 
   if (!__p)
-	return __h->_M_insert_bucket(std::make_pair(__k, mapped_type()),
-				     __n, __code)->second;
+	{
+	  __p = __h->_M_allocate_node(std::piecewise_construct,
+				      std::tuple<const key_type&>(__k),
+				      std::tuple<>());
+	  return __h->_M_insert_unique_node(__n, __code, __p)->second;
+	}
+
   return (__p->_M_v).second;
 }
 
@@ -598,9 +603,13 @@
   __node_type* __p = __h->_M_find_node(__n, __k, __code);
 
   if (!__p)
-	return __h->_M_insert_bucket(std::make_pair(std::move(__k),
-						    mapped_type()),
-				     __n, __code)->second;
+	{
+	  __p = __h->_M_allocate_node(std::piecewise_construct,
+				      std::forward_as_tuple(std::move(__k)),
+				      std::tuple<>());
+	  return __h->_M_insert_unique_node(__n, __code, __p)->second;
+	}
+
   return (__p->_M_v).second;
 }
 
Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 190353)
+++ include/bits/hashtable.h	(working copy)
@@ -584,10 +584,17 @@
   __node_base*
   _M_get_previous_node(size_type __bkt, __node_base* __n);
 
-      template<typename _Arg>
-	iterator
-	_M_insert_bucket(_Arg&&, size_type, __hash_code);
+  // Insert node with hash code __code, in bucket bkt if no rehash (assumes
+  // no element with its key already present). Take ownership of the node,
+  // deallocate it on exception.
+  iterator
+  _M_insert_unique_node(size_type __bkt, __hash_code __code,
+			__node_type* __n);
 
+  // Insert node with hash code __code. Take ownership of the node,
+  // deallocate it on exception.
+  iterator
+  _M_insert_multi_node(__hash_code __code, __node_type* __n);
 
       template<typename... _Args>
 	std::pair<iterator, bool>
@@ -1214,42 +1221,29 @@
   {
 	// First build the node to get access to the hash code
 	__node_type* __node = _M_allocate_node(std::forward<_Args>(__args)...);
+	const key_type& __k = this->_M_extract()(__node->_M_v);
+	__hash_code __code;
 	__try
 	  {
-	    const key_type& __k = this->_M_extract()(__node->_M_v);
-	    __hash_code __code = this->_M_hash_code(__k);
-	size_type __bkt = _M_bucket_index(__k, __code);
-
-	if (__node_type* __p = _M_find_node(__bkt, __k, __code))
-	  {
-		// There is already an equivalent node, no insertion
-		_M_deallocate_node(__node);
-		return std::make_pair(iterator(__p), false);
-	  }
-
-	// We are going to insert this node
-	    this->_M_store_code(__node, __code);
-	    const __rehash_state& __saved_state
-	      = _M_rehash_policy._M_state();
-	    std::pair<bool, std::size_t> __do_rehash
-	  = _M_rehash_policy._M_need_rehash(_M_bucket_count,
-		_M_element_count, 1);
-
-	if (__do_rehash.first)
-	  {
-		_M_rehash(__do_rehash.second, __saved_state);
-		__bkt = _M_bucket_index(__k, __code);
-	  }
-
-	_M_insert_bucket_begin(__bkt, __node);
-	++_M_element_count;
-	return std::make_pair(iterator(__node), true);
+	    __code = this->_M_hash_code(__k);
 	  }
 	__catch(...)
 	  {
 	_M_deallocate_node(__node);
 	__throw_exception_again;
 	  }
+
+	size_type __bkt = _M_bucket_index(__k, __code);
+	if (__node_type* __p = _M_find_node(__bkt, __k, __code))
+	  {
+	// There is already an equivalent node, no insertion
+	_M_deallocate_node(__node);
+	return std::make_pair(iterator(__p), false);
+	  }
+
+	// Insert the node
+	return std::make_pair(_M_insert_unique_node(__bkt, __code, __node),
+			  true);
   }
 
   templatetypename _Key, typename _Value

[v3] libstdc++/54296

2012-08-28 Thread François Dumont

Hi

Here is the patch for this issue. I introduced two distinct methods 
to erase elements by key. The one for unique keys is rather simple and now 
uses the same underlying code as erasure through an iterator. The one for 
non-unique keys first looks for the nodes matching the key and deallocates 
them in a second loop, so that the key isn't invalidated while looking for 
nodes. I considered checking whether the key instance's address was inside 
the node's address space, but the key instance might also be referenced 
through a pointer in the value type and freed when the value instance is 
destroyed. Separating the two loops is the only way to be sure that the 
key won't be broken while looking for matching nodes.


I checked that _Rb_tree is not impacted by this issue, as it uses a call 
to equal_range first and erases the range afterwards. I considered doing 
the same in the _Hashtable implementation but finally preferred not to, 
because it would imply recomputing hash codes and adding useless checks.
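
A minimal sketch of the failure mode, in the spirit of the new 54296 
tests (the values here are made up; see the testsuite files in the patch 
for the real tests):

#include <cassert>
#include <cstddef>
#include <unordered_map>

int main()
{
  std::unordered_multimap<int, int> m{ {1, 10}, {1, 11}, {2, 20} };

  // it->first is a reference into the first node that erase will
  // deallocate; a naive erase-as-you-go loop would keep reading this
  // key after its node has been destroyed.
  auto it = m.find(1);
  assert(it != m.end());
  std::size_t n = m.erase(it->first);

  assert(n == 2);
  assert(m.size() == 1);
}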


2012-08-28  François Dumont fdum...@gcc.gnu.org

PR libstdc++/54296
* include/bits/hashtable.h (_M_erase(size_type, __node_base*,
__node_type*)): New.
(erase(const_iterator)): Use latter.
(_M_erase(std::true_type, const key_type&)): New, likewise.
(_M_erase(std::false_type, const key_type&)): New. Find all nodes
matching the key before deallocating them so that the key doesn't
get invalidated.
(erase(const key_type&)): Use the new member functions.
* testsuite/23_containers/unordered_map/erase/54296.cc: New.
* testsuite/23_containers/unordered_multimap/erase/54296.cc: New.

Tested under linux x86_64.

Ok for trunk ? As it is an old issue I don't think it needs to be apply 
to any branch, tell me otherwise.


François

Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 190703)
+++ include/bits/hashtable.h	(working copy)
@@ -612,6 +612,15 @@
 	iterator
 	_M_insert(_Arg&&, std::false_type);
 
+      size_type
+      _M_erase(std::true_type, const key_type&);
+
+      size_type
+      _M_erase(std::false_type, const key_type&);
+
+  iterator
+  _M_erase(size_type __bkt, __node_base* __prev_n, __node_type* __n);
+
 public:
   // Emplace
       template<typename... _Args>
@@ -636,7 +645,8 @@
   { return erase(const_iterator(__it)); }
 
   size_type
-      erase(const key_type&);
+      erase(const key_type& __k)
+      { return _M_erase(__unique_keys(), __k); }
 
   iterator
   erase(const_iterator, const_iterator);
@@ -1430,7 +1440,21 @@
   // is why we need buckets to contain the before begin to make
   // this research fast.
   __node_base* __prev_n = _M_get_previous_node(__bkt, __n);
-  if (__n == _M_bucket_begin(__bkt))
+  return _M_erase(__bkt, __prev_n, __n);
+}
+
+  template<typename _Key, typename _Value,
+	   typename _Alloc, typename _ExtractKey, typename _Equal,
+	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
+	   typename _Traits>
+    typename _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+			_H1, _H2, _Hash, _RehashPolicy,
+			_Traits>::iterator
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
+    _M_erase(size_type __bkt, __node_base* __prev_n, __node_type* __n)
+    {
+      if (__prev_n == _M_buckets[__bkt])
 	_M_remove_bucket_begin(__bkt, __n->_M_next(),
 			       __n->_M_nxt ? _M_bucket_index(__n->_M_next()) : 0);
       else if (__n->_M_nxt)
@@ -1457,7 +1481,7 @@
 			_Traits>::size_type
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
-    erase(const key_type& __k)
+    _M_erase(std::true_type, const key_type& __k)
     {
       __hash_code __code = this->_M_hash_code(__k);
       std::size_t __bkt = _M_bucket_index(__k, __code);
@@ -1466,43 +1490,67 @@
       __node_base* __prev_n = _M_find_before_node(__bkt, __k, __code);
       if (!__prev_n)
 	return 0;
+
+      // We found a matching node, erase it.
       __node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
-      bool __is_bucket_begin = _M_buckets[__bkt] == __prev_n;
+      _M_erase(__bkt, __prev_n, __n);
+      return 1;
+    }
 
-  // We found a matching node, start deallocation loop from it
-  std::size_t __next_bkt = __bkt;
-  __node_type* __next_n = __n;
+  template<typename _Key, typename _Value,
+	   typename _Alloc, typename _ExtractKey, typename _Equal,
+	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
+	   typename _Traits>
+    typename _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+			_H1, _H2, _Hash, _RehashPolicy,
+			_Traits>::size_type
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
+    _M_erase(std::false_type, const key_type& __k)
+    {
+      __hash_code __code = this->_M_hash_code(__k);
+      std::size_t __bkt

Re: [v3] libstdc++/54296

2012-09-05 Thread François Dumont

On 09/05/2012 11:58 AM, Paolo Carlini wrote:

Hi,

On 09/04/2012 10:08 PM, François Dumont wrote:

Hi

   I managed to run the test under Valgrind and so confirmed the fix 
with the attached patch (unmodified since the last proposal).
Patch is Ok, thanks for your patience and thanks again for all your 
great work on the unordered containers!


Paolo.



Attached patch applied. No problem Paolo, it is your job as maintainer 
to challenge the patches, no big deal. And now being able to run programs 
through Valgrind or gdb is definitely more comfortable for me.


2012-09-05  François Dumont  fdum...@gcc.gnu.org

PR libstdc++/54296
* include/bits/hashtable.h (_M_erase(size_type, __node_base*,
__node_type*)): New.
(erase(const_iterator)): Use latter.
(_M_erase(std::true_type, const key_type&)): New, likewise.
(_M_erase(std::false_type, const key_type&)): New. Find all nodes
matching the key before deallocating them so that the key doesn't
get invalidated.
(erase(const key_type&)): Use the new member functions.
* testsuite/23_containers/unordered_map/erase/54296.cc: New.
* testsuite/23_containers/unordered_multimap/erase/54296.cc: New.

Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 190990)
+++ include/bits/hashtable.h	(working copy)
@@ -612,6 +612,15 @@
 	iterator
 	_M_insert(_Arg&&, std::false_type);
 
+      size_type
+      _M_erase(std::true_type, const key_type&);
+
+      size_type
+      _M_erase(std::false_type, const key_type&);
+
+  iterator
+  _M_erase(size_type __bkt, __node_base* __prev_n, __node_type* __n);
+
 public:
   // Emplace
       template<typename... _Args>
@@ -636,7 +645,8 @@
   { return erase(const_iterator(__it)); }
 
   size_type
-      erase(const key_type&);
+      erase(const key_type& __k)
+      { return _M_erase(__unique_keys(), __k); }
 
   iterator
   erase(const_iterator, const_iterator);
@@ -1430,7 +1440,21 @@
   // is why we need buckets to contain the before begin to make
   // this research fast.
   __node_base* __prev_n = _M_get_previous_node(__bkt, __n);
-  if (__n == _M_bucket_begin(__bkt))
+  return _M_erase(__bkt, __prev_n, __n);
+}
+
+  template<typename _Key, typename _Value,
+	   typename _Alloc, typename _ExtractKey, typename _Equal,
+	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
+	   typename _Traits>
+    typename _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+			_H1, _H2, _Hash, _RehashPolicy,
+			_Traits>::iterator
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
+    _M_erase(size_type __bkt, __node_base* __prev_n, __node_type* __n)
+    {
+      if (__prev_n == _M_buckets[__bkt])
 	_M_remove_bucket_begin(__bkt, __n->_M_next(),
 			       __n->_M_nxt ? _M_bucket_index(__n->_M_next()) : 0);
       else if (__n->_M_nxt)
@@ -1457,7 +1481,7 @@
 			_Traits>::size_type
     _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
 	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
-    erase(const key_type& __k)
+    _M_erase(std::true_type, const key_type& __k)
     {
       __hash_code __code = this->_M_hash_code(__k);
       std::size_t __bkt = _M_bucket_index(__k, __code);
@@ -1466,43 +1490,67 @@
       __node_base* __prev_n = _M_find_before_node(__bkt, __k, __code);
       if (!__prev_n)
 	return 0;
+
+      // We found a matching node, erase it.
       __node_type* __n = static_cast<__node_type*>(__prev_n->_M_nxt);
-      bool __is_bucket_begin = _M_buckets[__bkt] == __prev_n;
+      _M_erase(__bkt, __prev_n, __n);
+      return 1;
+    }
 
-  // We found a matching node, start deallocation loop from it
-  std::size_t __next_bkt = __bkt;
-  __node_type* __next_n = __n;
+  template<typename _Key, typename _Value,
+	   typename _Alloc, typename _ExtractKey, typename _Equal,
+	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
+	   typename _Traits>
+    typename _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+			_H1, _H2, _Hash, _RehashPolicy,
+			_Traits>::size_type
+    _Hashtable<_Key, _Value, _Alloc, _ExtractKey, _Equal,
+	       _H1, _H2, _Hash, _RehashPolicy, _Traits>::
+    _M_erase(std::false_type, const key_type& __k)
+    {
+      __hash_code __code = this->_M_hash_code(__k);
+      std::size_t __bkt = _M_bucket_index(__k, __code);
+
+  // Look for the node before the first matching node.
+  __node_base* __prev_n = _M_find_before_node(__bkt, __k, __code);
+  if (!__prev_n)
+	return 0;
+
+  // _GLIBCXX_RESOLVE_LIB_DEFECTS
+  // 526. Is it undefined if a function in the standard changes
+  // in parameters?
+  // We use one loop to find all matching nodes and another to deallocate
+  // them so that the key stays valid during the first loop. It might be
+  // invalidated indirectly when destroying nodes.
+  __node_type* __n

Re: safe unordered local iterators

2011-07-21 Thread François Dumont

Attached patch applied:

2011-07-21  François Dumont francois.cppd...@free.fr

* include/debug/safe_unordered_sequence.h,
safe_unordered_sequence.tcc: Rename respectively in...
* include/debug/safe_unordered_container.h,
safe_unordered_container.tcc: ...those. _Safe_unordered_sequence
renamed _Safe_unordered_container.
* include/debug/safe_unordered_base.h: _Safe_unordered_sequence_base
renamed _Safe_unordered_container_base.
* include/debug/unordered_map, unordered_set: Adapt to previous
modifications.
* config/abi/pre/gnu.ver: Likewise.
* src/debug.cc: Likewise.
* include/Makefile.am: Likewise.
* include/Makefile.in: Regenerate.

Regards


On 07/21/2011 12:58 AM, Jonathan Wakely wrote:

On 20 July 2011 22:40, Paolo Carlini paolo.carl...@oracle.com wrote:

On 07/20/2011 10:46 PM, François Dumont wrote:

Hello

Sorry for the inconvenience. Here is my proposition for the renaming, I

I should probably apologise for the inconvenience for suggesting
changing it!  I do think it is a bit confusing though, the Sequence
containers in the standard library have a definite ordering, so an
unordered sequence is confusing :)


haven't tested it yet but will of course before committing.

I'm Ok with it if Jonathan is...

Yes, thanks for changing it.


Thanks,
Paolo.

PS: my first language obviously isn't English, but I see you are using
"proposition" a lot where I would normally prefer "proposal"...

Yes, proposal would be correct there.



Index: src/debug.cc
===
--- src/debug.cc	(revision 176533)
+++ src/debug.cc	(working copy)
@@ -25,7 +25,7 @@
 
 #include <debug/debug.h>
 #include <debug/safe_sequence.h>
-#include <debug/safe_unordered_sequence.h>
+#include <debug/safe_unordered_container.h>
 #include <debug/safe_iterator.h>
 #include <debug/safe_local_iterator.h>
 #include <algorithm>
@@ -78,8 +78,8 @@
   }
 
   void
-  swap_useq(__gnu_debug::_Safe_unordered_sequence_base& __lhs,
-	    __gnu_debug::_Safe_unordered_sequence_base& __rhs)
+  swap_ucont(__gnu_debug::_Safe_unordered_container_base& __lhs,
+	     __gnu_debug::_Safe_unordered_container_base& __rhs)
   {
 swap_seq(__lhs, __rhs);
 swap_its(__lhs, __lhs._M_local_iterators,
@@ -174,8 +174,8 @@
 " by a dereferenceable one",
 "function requires a valid iterator range (%2.name;, %3.name;)"
 ", \"%2.name;\" shall be before and not equal to \"%3.name;\"",
-    // std::unordered_sequence::local_iterator
-    "attempt to compare local iterators from different unordered sequence"
+    // std::unordered_container::local_iterator
+    "attempt to compare local iterators from different unordered container"
     " buckets"
   };
 
@@ -374,38 +374,38 @@
   _M_get_mutex() throw ()
   { return get_safe_base_mutex(_M_sequence); }
 
-  _Safe_unordered_sequence_base*
+  _Safe_unordered_container_base*
   _Safe_local_iterator_base::
-  _M_get_sequence() const _GLIBCXX_NOEXCEPT
-  { return static_cast<_Safe_unordered_sequence_base*>(_M_sequence); }
+  _M_get_container() const _GLIBCXX_NOEXCEPT
+  { return static_cast<_Safe_unordered_container_base*>(_M_sequence); }
 
   void
   _Safe_local_iterator_base::
-  _M_attach(_Safe_sequence_base* __seq, bool __constant)
+  _M_attach(_Safe_sequence_base* __cont, bool __constant)
   {
 _M_detach();
 
-// Attach to the new sequence (if there is one)
-if (__seq)
+// Attach to the new container (if there is one)
+if (__cont)
   {
-	_M_sequence = __seq;
+	_M_sequence = __cont;
 	_M_version = _M_sequence->_M_version;
-	_M_get_sequence()->_M_attach_local(this, __constant);
+	_M_get_container()->_M_attach_local(this, __constant);
   }
   }
   
   void
   _Safe_local_iterator_base::
-  _M_attach_single(_Safe_sequence_base* __seq, bool __constant) throw ()
+  _M_attach_single(_Safe_sequence_base* __cont, bool __constant) throw ()
   {
 _M_detach_single();
 
-// Attach to the new sequence (if there is one)
-if (__seq)
+// Attach to the new container (if there is one)
+if (__cont)
   {
-	_M_sequence = __seq;
+	_M_sequence = __cont;
 	_M_version = _M_sequence->_M_version;
-	_M_get_sequence()->_M_attach_local_single(this, __constant);
+	_M_get_container()->_M_attach_local_single(this, __constant);
   }
   }
 
@@ -414,7 +414,7 @@
   _M_detach()
   {
 if (_M_sequence)
-      _M_get_sequence()->_M_detach_local(this);
+      _M_get_container()->_M_detach_local(this);
 
 _M_reset();
   }
@@ -424,13 +424,13 @@
   _M_detach_single() throw ()
   {
 if (_M_sequence)
-      _M_get_sequence()->_M_detach_local_single(this);
+      _M_get_container()->_M_detach_local_single(this);
 
 _M_reset();
   }
 
   void
-  _Safe_unordered_sequence_base::
+  _Safe_unordered_container_base::
   _M_detach_all()
   {
 __gnu_cxx::__scoped_lock sentry(_M_get_mutex());
@@ -448,17 +448,17 @@
   }
 
   void
-  _Safe_unordered_sequence_base::
-  _M_swap

Re: hash policy patch

2011-07-24 Thread François Dumont

On 07/24/2011 01:31 AM, Paolo Carlini wrote:

On 07/23/2011 10:31 PM, François Dumont wrote:

Hi

   While working on DR 41975 I noticed a small issue in the current 
rehash implementation that sometimes leads to load_factor being 
greater than max_load_factor. Here is a patch to fix that:

Ok, good.

I think we could as well have everywhere:

const unsigned long __p = *std::lower_bound...

and then change the following *__p to __p. Isn't it a tad cleaner?

Thanks,
Paolo.


Attached patch applied, I have integrated your remark Paolo.

2011-07-24  François Dumont francois.cppd...@free.fr

* include/bits/hashtable_policy.h (_Prime_rehash_policy): Use
__builtin_floor rather than __builtin_ceil to compute next resize
value.
* testsuite/23_containers/unordered_set/hash_policy/load_factor.cc:
New.

For info, I will submit a proposal for DR 41975 tomorrow or the day after.
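
A back-of-the-envelope sketch of why floor is the right rounding 
(standalone example, not the library code): the resize threshold is an 
element count and must not exceed bucket_count * max_load_factor, which 
ceil can overshoot by one element:

#include <cmath>
#include <cstddef>
#include <cstdio>

int main()
{
  const std::size_t buckets = 11;
  const float max_load_factor = 0.5f;

  // Largest element count that still satisfies
  // load_factor = size / bucket_count <= max_load_factor:
  const std::size_t ok  =
    static_cast<std::size_t>(std::floor(buckets * max_load_factor)); // 5
  const std::size_t bad =
    static_cast<std::size_t>(std::ceil(buckets * max_load_factor));  // 6

  std::printf("floor: %zu elements -> load factor %.3f\n",
              ok, static_cast<float>(ok) / buckets);   // 0.455 <= 0.5
  std::printf("ceil:  %zu elements -> load factor %.3f\n",
              bad, static_cast<float>(bad) / buckets); // 0.545 > 0.5
}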

Regards
Index: include/bits/hashtable_policy.h
===
--- include/bits/hashtable_policy.h	(revision 176581)
+++ include/bits/hashtable_policy.h	(working copy)
@@ -427,10 +427,10 @@
   _Prime_rehash_policy::
   _M_next_bkt(std::size_t __n) const
   {
-const unsigned long* __p = std::lower_bound(__prime_list, __prime_list
+const unsigned long __p = *std::lower_bound(__prime_list, __prime_list
 		+ _S_n_primes, __n);
 _M_next_resize =
-      static_cast<std::size_t>(__builtin_ceil(*__p * _M_max_load_factor));
+      static_cast<std::size_t>(__builtin_floor(__p * _M_max_load_factor));
-    return *__p;
+    return __p;
   }
 
@@ -441,10 +441,10 @@
   _M_bkt_for_elements(std::size_t __n) const
   {
 const float __min_bkts = __n / _M_max_load_factor;
-const unsigned long* __p = std::lower_bound(__prime_list, __prime_list
+const unsigned long __p = *std::lower_bound(__prime_list, __prime_list
 		+ _S_n_primes, __min_bkts);
 _M_next_resize =
-      static_cast<std::size_t>(__builtin_ceil(*__p * _M_max_load_factor));
+      static_cast<std::size_t>(__builtin_floor(__p * _M_max_load_factor));
-    return *__p;
+    return __p;
   }
 
@@ -469,17 +469,17 @@
 	if (__min_bkts > __n_bkt)
 	  {
 	__min_bkts = std::max(__min_bkts, _M_growth_factor * __n_bkt);
-	const unsigned long* __p =
-	  std::lower_bound(__prime_list, __prime_list + _S_n_primes,
-			   __min_bkts);
+	const unsigned long __p =
+	  *std::lower_bound(__prime_list, __prime_list + _S_n_primes,
+__min_bkts);
 	    _M_next_resize = static_cast<std::size_t>
-	      (__builtin_ceil(*__p * _M_max_load_factor));
+	      (__builtin_floor(__p * _M_max_load_factor));
-	    return std::make_pair(true, *__p);
+	    return std::make_pair(true, __p);
 	  }
 	else
 	  {
 	    _M_next_resize = static_cast<std::size_t>
-	  (__builtin_ceil(__n_bkt * _M_max_load_factor));
+	  (__builtin_floor(__n_bkt * _M_max_load_factor));
 	return std::make_pair(false, 0);
 	  }
   }
Index: testsuite/23_containers/unordered_set/hash_policy/load_factor.cc
===
--- testsuite/23_containers/unordered_set/hash_policy/load_factor.cc	(revision 0)
+++ testsuite/23_containers/unordered_set/hash_policy/load_factor.cc	(revision 0)
@@ -0,0 +1,58 @@
+// Copyright (C) 2011 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+//
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+//
+// { dg-options "-std=gnu++0x" }
+
+#include <unordered_set>
+#include <testsuite_hooks.h>
+
+void test01()
+{
+  bool test __attribute__((unused)) = true;
+  {
+    std::unordered_set<int> us;
+    for (int i = 0; i != 10; ++i)
+      {
+	us.insert(i);
+	VERIFY( us.load_factor() <= us.max_load_factor() );
+      }
+  }
+  {
+    std::unordered_set<int> us;
+    us.max_load_factor(3.f);
+    for (int i = 0; i != 10; ++i)
+      {
+	us.insert(i);
+	VERIFY( us.load_factor() <= us.max_load_factor() );
+      }
+  }
+  {
+    std::unordered_set<int> us;
+    us.max_load_factor(.3f);
+    for (int i = 0; i != 10; ++i)
+      {
+	us.insert(i);
+	VERIFY( us.load_factor() <= us.max_load_factor() );
+      }
+  }
+}
+
+int main()
+{
+  test01();
+  return 0;
+}


Re: Fix stable_sort to work on iterators returning rvalue

2012-05-29 Thread François Dumont

Attached patch applied then.

2012-05-29  François Dumont fdum...@gcc.gnu.org

* include/bits/stl_tempbuf.h (__uninitialized_construct_buf)
(__uninitialized_construct_buf_dispatch::__ucr): Fix to work
with iterator returning rvalue.
* testsuite/25_algorithms/stable_sort/3.cc: New.

François


On 05/28/2012 09:59 PM, Christopher Jefferson wrote:

On 28 May 2012, at 20:00, François Dumont wrote:


On 05/28/2012 12:11 PM, Christopher Jefferson wrote:

My main concern is one I have mentioned previously. I'm unsure all our code 
works with things like move_iterator, even when it correctly compiles. We 
previously took out internal uses of move_iterator, because while the code 
compiled it did not work correctly. Look at bug 
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48038 for an example.

With this change, code which uses move_iterator and an operator< which 
passes by value will cause values to be 'eaten' during the sort. Now, the 
standard in my opinion doesn't say this would be a bug, but it certainly 
seems like it would be unpleasant.

How to fix this? The simplest way might be to wrap all predicates / 
operator< in a wrapper which just takes lvalue references. Not sure if it 
is worth going to that much pain just to fix this.

At the very least, I would want to see a bunch of tests which make sure we 
don't have any other code which accidentally 'eats' users data. Make sure you 
catch all the various bits of sort, some of which get triggered rarely. I 
realise that is putting other people to a higher standard than I put myself, 
but we have learnt this is a tricky area :)

Chris

Does that mean you reject the patch?

The patch's purpose is not to make the code compatible with move_iterator 
but to make it compatible with iterator types that return pure rvalues 
through their dereference operator. As a side effect std::move_iterator is 
going to compile too, which might be bad, but is that really a reason to 
forbid other kinds of iterators?

Of course we should find a good way to handle move_iterator, and I can 
spend some time on it, but I think that it should be the subject of a dedicated 
patch.

Sorry, just to clarify (also, I have been having problems with my mail client).

I believe this patch is good, I withdraw any complaint.

I believe there is a serious issue with move_iterator in use with almost all 
standard libraries, but it is disconnected from this patch.

Chris
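
A small standalone sketch of the hazard under discussion, assuming the 
algorithm accepts move iterators at all once the patch is in; the 
comparator name is made up:

#include <algorithm>
#include <iterator>
#include <string>
#include <vector>

// A comparator taking its arguments by value: each parameter is
// copy- or move-constructed from whatever the iterator yields.
bool by_value_less(std::string a, std::string b)
{ return a < b; }

int main()
{
  std::vector<std::string> v{"banana", "apple", "cherry"};

  // Safe: dereferencing yields lvalues, so the parameters are copies.
  std::sort(v.begin(), v.end(), by_value_less);

  // Hazardous: dereferencing a move_iterator yields rvalues, so every
  // comparison may move a string out of the sequence being sorted,
  // leaving moved-from elements behind ('eating' the user's data).
  // std::sort(std::make_move_iterator(v.begin()),
  //           std::make_move_iterator(v.end()),
  //           by_value_less);
}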


Index: include/bits/stl_tempbuf.h
===
--- include/bits/stl_tempbuf.h	(revision 187979)
+++ include/bits/stl_tempbuf.h	(working copy)
@@ -1,7 +1,7 @@
 // Temporary buffer implementation -*- C++ -*-
 
 // Copyright (C) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009,
-// 2010, 2011
+// 2010, 2011, 2012
 // Free Software Foundation, Inc.
 //
 // This file is part of the GNU ISO C++ Library.  This library is free
@@ -182,25 +182,25 @@
   template<bool>
 struct __uninitialized_construct_buf_dispatch
 {
-      template<typename _ForwardIterator, typename _Tp>
+      template<typename _Pointer, typename _ForwardIterator>
 static void
-__ucr(_ForwardIterator __first, _ForwardIterator __last,
-	  _Tp __value)
+__ucr(_Pointer __first, _Pointer __last,
+	  _ForwardIterator __seed)
 {
 	  if(__first == __last)
 	return;
 
-	  _ForwardIterator __cur = __first;
+	  _Pointer __cur = __first;
 	  __try
 	{
 	  std::_Construct(std::__addressof(*__first),
-			  _GLIBCXX_MOVE(__value));
-	  _ForwardIterator __prev = __cur;
+			  _GLIBCXX_MOVE(*__seed));
+	  _Pointer __prev = __cur;
 	  ++__cur;
 	  for(; __cur != __last; ++__cur, ++__prev)
 		std::_Construct(std::__addressof(*__cur),
 _GLIBCXX_MOVE(*__prev));
-	  __value = _GLIBCXX_MOVE(*__prev);
+	  *__seed = _GLIBCXX_MOVE(*__prev);
 	}
 	  __catch(...)
 	{
@@ -213,9 +213,9 @@
   template<>
     struct __uninitialized_construct_buf_dispatch<true>
     {
-      template<typename _ForwardIterator, typename _Tp>
+      template<typename _Pointer, typename _ForwardIterator>
        static void
-        __ucr(_ForwardIterator, _ForwardIterator, _Tp&) { }
+        __ucr(_Pointer, _Pointer, _ForwardIterator) { }
 };
 
   // Constructs objects in the range [first, last).
@@ -223,23 +223,22 @@
   // their exact value is not defined. In particular they may
   // be 'moved from'.
   //
-  // While __value may altered during this algorithm, it will have
+  // While *__seed may be altered during this algorithm, it will have
   // the same value when the algorithm finishes, unless one of the
   // constructions throws.
   //
-  // Requirements: _ForwardIterator::value_type(_Tp) is valid.
-  template<typename _ForwardIterator, typename _Tp>
+  // Requirements: _Pointer::value_type(_Tp) is valid.
+  template<typename _Pointer, typename _ForwardIterator>
 inline void
-__uninitialized_construct_buf(_ForwardIterator __first

Re: Enhance performance test

2011-11-06 Thread François Dumont

Attached patch applied.

2011-11-06  François Dumont fdum...@gcc.gnu.org

* testsuite/performance/23_containers/insert_erase/41975.cc: Add
tests to check performance with or without cache of hash code and
with string type that has a costlier hash functor than int type.

François


On 11/05/2011 12:56 AM, Paolo Carlini wrote:

On 11/04/2011 10:32 PM, François Dumont wrote:

Hi

Here is a patch to enhance the performance test introduced 
recently for hashtable. It shows the 41975 performance issue more 
clearly. I also introduced a bench using unordered_set<string> so that 
we have a test involving a type with a costlier hash functor than the 
one used for int. And lastly the benches are run twice, with and 
without the hash code cached.


2011-11-04  François Dumont fdum...@gcc.gnu.org

* testsuite/performance/23_containers/insert_erase/41975.cc: Add
tests to check performance with or without cache of hash code and
with string type that has a costlier hash functor than int type.

Ok to commit ?

Looks Ok, but, as usual, watch overlong lines!

Paolo.



Index: testsuite/performance/23_containers/insert_erase/41975.cc
===
--- testsuite/performance/23_containers/insert_erase/41975.cc	(revision 181036)
+++ testsuite/performance/23_containers/insert_erase/41975.cc	(working copy)
@@ -17,56 +17,167 @@
 // with this library; see the file COPYING3.  If not see
 // http://www.gnu.org/licenses/.
 
-#include <cassert>
+#include <sstream>
 #include <unordered_set>
 #include <testsuite_performance.h>
 
-int main()
+namespace
 {
-  using namespace __gnu_test;
+  // Bench using an unordered_set<int>. Hash functor for int is quite
+  // predictable so it helps bench very specific use cases.
+  template<bool use_cache>
+    void bench()
+    {
+      using namespace __gnu_test;
+      std::ostringstream ostr;
+      ostr << "unordered_set<int> " << (use_cache ? "with" : "without")
+	   << " cache";
+      const std::string desc = ostr.str();
 
-  time_counter time;
-  resource_counter resource;
+  time_counter time;
+  resource_counter resource;
 
-  start_counters(time, resource);
+  const int nb = 20;
+  start_counters(time, resource);
 
-  std::unordered_set<int> us;
-  for (int i = 0; i != 500; ++i)
-    us.insert(i);
+      std::__unordered_set<int, std::hash<int>, std::equal_to<int>,
+			   std::allocator<int>, use_cache> us;
+  for (int i = 0; i != nb; ++i)
+	us.insert(i);
 
-  stop_counters(time, resource);
-  report_performance(__FILE__, Container generation, time, resource);
+  stop_counters(time, resource);
+      ostr.str("");
+      ostr << desc << ": first insert";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
 
-  start_counters(time, resource);
+  start_counters(time, resource);
 
-  for (int j = 100; j != 0; --j)
-{
-  auto it = us.begin();
-  while (it != us.end())
+      // Here is the worst erase use case when the hashtable implementation
+      // was something like vector<forward_list>. Erasing from the end was
+      // very costly because we need to return the iterator following the
+      // erased one; as the hashtable gets emptier at each step, there are
+      // more and more empty buckets to loop through to reach the end of the
+      // container and find out that it was in fact the last element.
+      for (int j = nb - 1; j >= 0; --j)
 	{
-	  if ((*it % j) == 0)
-	it = us.erase(it);
-	  else
-	++it;
+	  auto it = us.find(j);
+	  if (it != us.end())
+	us.erase(it);
 	}
+
+  stop_counters(time, resource);
+      ostr.str("");
+      ostr << desc << ": erase from iterator";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
+
+  start_counters(time, resource);
+
+  // This is a worst insertion use case for the current implementation as
+      // we insert an element at the beginning of the hashtable and then we
+  // insert starting at the end so that each time we need to seek up to the
+  // first bucket to find the first non-empty one.
+  us.insert(0);
+      for (int i = nb - 1; i >= 0; --i)
+	us.insert(i);
+
+  stop_counters(time, resource);
+      ostr.str("");
+      ostr << desc << ": second insert";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
+
+  start_counters(time, resource);
+
+      for (int j = nb - 1; j >= 0; --j)
+	us.erase(j);
+
+  stop_counters(time, resource);
+      ostr.str("");
+      ostr << desc << ": erase from key";
+      report_performance(__FILE__, ostr.str().c_str(), time, resource);
 }
 
-  stop_counters(time, resource);
-  report_performance(__FILE__, Container erase, time, resource);
+  // Bench using unordered_set<string> that shows how important it is to
+  // cache the hash code, as computing a string hash code is quite expensive
+  // compared to computing it for int.
+  template<bool use_cache>
+void bench_str()
+{
+  using namespace __gnu_test

Fwd: Re: hashtable cleanup + new testsuite files

2011-11-28 Thread François Dumont

Attached patch applied.

2011-11-29  François Dumont fdum...@gcc.gnu.org

 * include/bits/hashtable.h (_Hashtable::_M_rehash): Remove code
 useless now that the hashtable implementation put the hash code in
 cache if the hash functor throws.
 * testsuite/23_containers/unordered_set/erase/1.cc: Enhance test by
 checking also distance between begin and end iterators to validate
 underlying data model.
 * testsuite/23_containers/unordered_multiset/erase/1.cc: Likewise.
 * testsuite/23_containers/unordered_map/erase/1.cc: Likewise.
 * testsuite/23_containers/unordered_multimap/erase/1.cc: Likewise.
 * testsuite/23_containers/unordered_multiset/erase/2.cc: New.
 * testsuite/23_containers/unordered_multimap/erase/2.cc: New.

Regards


On 11/28/2011 10:54 PM, Paolo Carlini wrote:

 On 11/28/2011 09:29 PM, François Dumont wrote:

 2011-11-28  François Dumont fdum...@gcc.gnu.org

 * include/bits/hashtable.h (_Hashtable::_M_rehash): Remove
 code
 useless now that the hashtable implementation put the hash
 code in
 cache if the hash functor throws.
 * testsuite/23_containers/unordered_set/erase/1.cc: Enhance
 test by
 checking also distance between begin and end iterators to
 validate
 underlying data model.
 * testsuite/23_containers/unordered_multiset/erase/1.cc:
 Likewise.
 * testsuite/23_containers/unordered_map/erase/1.cc: Likewise.
 * testsuite/23_containers/unordered_multimap/erase/1.cc:
 Likewise.
 * testsuite/23_containers/unordered_multiset/erase/2.cc: New.
 * testsuite/23_containers/unordered_multimap/erase/2.cc: New.

 Tested under linux x86_64.

 Ok to commit in trunk ?

 Ok, thanks.

 Paolo.




Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 181729)
+++ include/bits/hashtable.h	(working copy)
@@ -233,11 +233,6 @@
   void
   _M_deallocate_node(_Node* __n);
 
-  // Deallocate all nodes contained in the bucket array, buckets' nodes
-  // are not linked to each other
-  void
-  _M_deallocate_nodes(_Bucket*, size_type);
-
   // Deallocate the linked list of nodes pointed to by __n
   void
   _M_deallocate_nodes(_Node* __n);
@@ -591,19 +586,6 @@
 void
     _Hashtable<_Key, _Value, _Allocator, _ExtractKey, _Equal,
 	       _H1, _H2, _Hash, _RehashPolicy, __chc, __cit, __uk>::
-_M_deallocate_nodes(_Bucket* __buckets, size_type __n)
-{
-  for (size_type __i = 0; __i != __n; ++__i)
-	_M_deallocate_nodes(__buckets[__i]);
-}
-
-  template<typename _Key, typename _Value,
-	   typename _Allocator, typename _ExtractKey, typename _Equal,
-	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
-	   bool __chc, bool __cit, bool __uk>
-    void
-    _Hashtable<_Key, _Value, _Allocator, _ExtractKey, _Equal,
-	       _H1, _H2, _Hash, _RehashPolicy, __chc, __cit, __uk>::
 _M_deallocate_nodes(_Node* __n)
 {
   while (__n)
@@ -1542,11 +1524,10 @@
 	       _H1, _H2, _Hash, _RehashPolicy, __chc, __cit, __uk>::
     _M_rehash(size_type __n, const _RehashPolicyState& __state)
 {
-  _Bucket* __new_buckets = nullptr;
-  _Node* __p = _M_buckets[_M_begin_bucket_index];
   __try
 	{
-	  __new_buckets = _M_allocate_buckets(__n);
+	  _Bucket* __new_buckets = _M_allocate_buckets(__n);
+	  _Node* __p = _M_buckets[_M_begin_bucket_index];
 	  // First loop to store each node in its new bucket
 	  while (__p)
 	{
@@ -1591,24 +1572,9 @@
 	}
   __catch(...)
 	{
-	  if (__new_buckets)
-	{
-	  // A failure here means that a hash function threw an exception.
-	  // We can't restore the previous state without calling the hash
-	  // function again, so the only sensible recovery is to delete
-	  // everything.
-	  _M_deallocate_nodes(__new_buckets, __n);
-	  _M_deallocate_buckets(__new_buckets, __n);
-	  _M_deallocate_nodes(__p);
-	  __builtin_memset(_M_buckets, 0, sizeof(_Bucket) * _M_bucket_count);
-	  _M_element_count = 0;
-	  _M_begin_bucket_index = _M_bucket_count;
-	  _M_rehash_policy._M_reset(_RehashPolicyState());
-	}
-	  else
-	// A failure here means that buckets allocation failed.  We only
-	// have to restore hash policy previous state.
-	_M_rehash_policy._M_reset(__state);
+	  // A failure here means that buckets allocation failed.  We only
+	  // have to restore hash policy previous state.
+	  _M_rehash_policy._M_reset(__state);
 	  __throw_exception_again;
 	}
 }
Index: testsuite/23_containers/unordered_map/erase/1.cc
===
--- testsuite/23_containers/unordered_map/erase/1.cc	(revision 181729)
+++ testsuite/23_containers/unordered_map/erase/1.cc	(working copy)
@@ -23,6 +23,18 @@
 #include <string>
 #include <testsuite_hooks.h>
 
+namespace
+{
+  std

Re: libstdc++.exp patch

2011-04-22 Thread François Dumont

Attached patch applied.

2011-04-21  François Dumont francois.cppd...@free.fr

* testsuite/lib/libstdc++.exp (check_v3_target_time): Discard
unused compilation result thanks to /dev/null.
* testsuite/lib/libstdc++.exp (check_v3_target_debug_mode
check_v3_target_profile_mode check_v3_target_normal_mode
check_v3_target_cstdint check_v3_target_cmath
check_v3_target_atomic_builtins check_v3_target_gthreads
check_v3_target_nanosleep check_v3_target_sched_yield
check_v3_target_string_conversions check_v3_target_swprintf
check_v3_target_binary_io): Use simple preprocessing rather than
compilation. Discard unused preprocessing result thanks to 
/dev/null.


I kept the check_v3_target_time executable target for the moment; I checked 
that it is used within tests. Do not hesitate to signal an issue, Ralf.


Regards

On 04/22/2011 05:06 PM, Paolo Carlini wrote:

Hi,


Here is the patch I submitted some months ago before 4.6.0 release.


My only concern regarding this patch is on the portability of 
/dev/null.
I would say it's Ok: in acinclude.m4 we have quite a few of /dev/null 
and nobody complained so far. I'm adding Ralf in CC, I trust his 
opinion about portability issues (in particular ;)
I also wonder if check_v3_target_time could not simply use the object 
or even assembly target rather than executable, what do you think ?
Can you figure out when / why it was added? Because that function 
is C89 and should be unconditionally available. If we have the test, 
it's likely because of some target I'm not familiar with, which may 
well declare time() in time.h and then end up not providing it in 
the library, so the failure happens at *link* time... (it would not be 
the first time we have to live with this kind of annoying situation)


Paolo.



Index: testsuite/lib/libstdc++.exp
===
--- testsuite/lib/libstdc++.exp	(revision 172870)
+++ testsuite/lib/libstdc++.exp	(working copy)
@@ -824,7 +824,6 @@
 	# Set up and compile a C++ test program that tries to use
 	# the time function
 	set src time[pid].cc
-	set exe time[pid].x
 
 	set f [open $src "w"]
 	puts $f "#include <time.h>"
@@ -835,13 +834,12 @@
 	puts $f "}"
 	close $f
 
-	set lines [v3_target_compile $src $exe executable ""]
+	set lines [v3_target_compile $src /dev/null executable ""]
 	file delete $src
 
 	if [string match "" $lines] {
 	    # No error message, compilation succeeded.
 	    verbose "check_v3_target_time: compilation succeeded" 2
-	    remote_file build delete $exe
 	    set et_time_saved 1
 	} else {
 	    verbose "check_v3_target_time: compilation failed" 2
@@ -927,25 +925,21 @@
 } else {
 	set et_debug_mode 0
 
-	# Set up and compile a C++ test program that depends
+	# Set up and preprocess a C++ test program that depends
 	# on debug mode activated.
 	set src debug_mode[pid].cc
-	set exe debug_mode[pid].exe
 
 	set f [open $src "w"]
 	puts $f "#ifndef _GLIBCXX_DEBUG"
 	puts $f "#  error No debug mode"
 	puts $f "#endif"
-	puts $f "int main()"
-	puts $f "{ return 0; }"
 	close $f
 
-	set lines [v3_target_compile $src $exe executable ""]
+	set lines [v3_target_compile $src /dev/null preprocess ""]
 	file delete $src
 
 	if [string match "" $lines] {
-	# No error message, compilation succeeded.
-	remote_file build delete $exe
+	# No error message, preprocessing succeeded.
 	set et_debug_mode 1
 	}
 }
@@ -977,25 +971,21 @@
 } else {
 	set et_profile_mode 0
 
-	# Set up and compile a C++ test program that depends
+	# Set up and preprocess a C++ test program that depends
 	# on profile mode activated.
 	set src profile_mode[pid].cc
-	set exe profile_mode[pid].exe
 
 	set f [open $src "w"]
 	puts $f "#ifndef _GLIBCXX_PROFILE"
 	puts $f "#  error No profile mode"
 	puts $f "#endif"
-	puts $f "int main()"
-	puts $f "{ return 0; }"
 	close $f
 
-	set lines [v3_target_compile $src $exe executable ""]
+	set lines [v3_target_compile $src /dev/null preprocess ""]
 	file delete $src
 
 	if [string match "" $lines] {
-	# No error message, compilation succeeded.
-	remote_file build delete $exe
+	# No error message, preprocessing succeeded.
 	set et_profile_mode 1
 	}
 }
@@ -1030,17 +1020,14 @@
 	# Set up and compile a C++ test program that depends
 	# on normal mode activated.
 	set src normal_mode[pid].cc
-	set exe normal_mode[pid].exe
 
 	set f [open $src "w"]
 	puts $f "#if defined(_GLIBCXX_DEBUG) || defined(_GLIBCXX_PROFILE) || defined(_GLIBCXX_PARALLEL)"
 	puts $f "#  error No normal mode"
 	puts $f "#endif"
-	puts $f "int main()"
-	puts $f "{ return 0; }"
 	close $f
 
-	set lines [v3_target_compile $src $exe executable ""]
+	set lines [v3_target_compile $src /dev/null preprocess ""]
 	file delete $src
 
 	if [string match "" $lines] {
@@ -1115,28 +1102,26 @@
 } else {
 	set et_cstdint 0
 
-	# Set up and compile a C++0x test program that depends
+	# Set up and preprocess a C++0x test program that depends
 	# on the C99

Re: pb_ds debug mode patch

2011-05-11 Thread François Dumont

Attached patch applied.

2011-05-11  François Dumont francois.cppd...@free.fr

* include/ext/pb_ds/detail/resize_policy/
hash_load_check_resize_trigger_imp.hpp (assert_valid): Replace
_GLIBCXX_DEBUG_ASSERT calls with PB_DS_DEBUG_VERIFY.
* include/ext/pb_ds/detail/binomial_heap_base_/erase_fn_imps.hpp,
find_fn_imps.hpp, insert_fn_imps.hpp, binomial_heap_base_.hpp,
constructors_destructor_fn_imps.hpp, split_join_fn_imps.hpp
(PB_DS_ASSERT_VALID): Rename to PB_DS_ASSERT_VALID_COND.
* include/ext/pb_ds/detail/debug_map_base.hpp,
splay_tree_/splay_tree_.hpp, ov_tree_map_/ov_tree_map_.hpp,
cc_hash_table_map_/cc_ht_map_.hpp, pat_trie_/pat_trie_.hpp,
leaf.hpp, internal_node.hpp, gp_hash_table_map_/gp_ht_map_.hpp,
bin_search_tree_/bin_search_tree_.hpp, list_update_map_/lu_map_.hpp,
rb_tree_map_/rb_tree_.hpp (PB_DS_ASSERT_VALID, PB_DS_DEBUG_VERIFY,
PB_DS_CHECK_KEY_EXISTS, PB_DS_CHECK_KEY_DOES_NOT_EXIST): Duplicate
macro definitions move...
* include/ext/pb_ds/detail/container_base_dispatch.hpp: ... here...
* include/ext/pb_ds/detail/basic_tree_policy/traits.hpp: ... and here.
* include/ext/pb_ds/detail/binary_heap_/binary_heap_.hpp,
resize_policy.hpp, pairing_heap_/pairing_heap_.hpp,
left_child_next_sibling_heap_/left_child_next_sibling_heap_.hpp,
binomial_heap_/binomial_heap_.hpp, thin_heap_/thin_heap_.hpp,
rc_binomial_heap_/rc_binomial_heap_.hpp, rc.hpp (PB_DS_ASSERT_VALID,
PB_DS_DEBUG_VERIFY): Duplicate macro definitions move...
* include/ext/pb_ds/detail/priority_queue_base_dispatch.hpp:
...here.


Regards


On 05/11/2011 06:36 AM, Benjamin Kosnik wrote:

Ok to commit ?

Yes, thanks.


Note that I just had a testsuite failure regarding pb_ds code that
looks rather serious because it doesn't seem to come from pb_ds debug
code:

Pre-existing.

-benjamin



Index: include/ext/pb_ds/detail/resize_policy/hash_load_check_resize_trigger_imp.hpp
===
--- include/ext/pb_ds/detail/resize_policy/hash_load_check_resize_trigger_imp.hpp	(revision 173552)
+++ include/ext/pb_ds/detail/resize_policy/hash_load_check_resize_trigger_imp.hpp	(working copy)
@@ -286,8 +286,8 @@
 PB_DS_CLASS_C_DEC::
 assert_valid(const char* __file, int __line) const
 {
-  _GLIBCXX_DEBUG_ASSERT(m_load_max > m_load_min);
-  _GLIBCXX_DEBUG_ASSERT(m_next_grow_size >= m_next_shrink_size);
+  PB_DS_DEBUG_VERIFY(m_load_max > m_load_min);
+  PB_DS_DEBUG_VERIFY(m_next_grow_size >= m_next_shrink_size);
 }
 # undef PB_DS_DEBUG_VERIFY
 #endif
Index: include/ext/pb_ds/detail/binomial_heap_base_/erase_fn_imps.hpp
===
--- include/ext/pb_ds/detail/binomial_heap_base_/erase_fn_imps.hpp	(revision 173552)
+++ include/ext/pb_ds/detail/binomial_heap_base_/erase_fn_imps.hpp	(working copy)
@@ -43,7 +43,7 @@
 PB_DS_CLASS_C_DEC::
 pop()
 {
-  PB_DS_ASSERT_VALID((*this),true)
+  PB_DS_ASSERT_VALID_COND((*this),true)
   _GLIBCXX_DEBUG_ASSERT(!base_type::empty());
 
   if (m_p_max == 0)
@@ -59,7 +59,7 @@
 
   m_p_max = 0;
 
-  PB_DS_ASSERT_VALID((*this),true)
+  PB_DS_ASSERT_VALID_COND((*this),true)
 }
 
 PB_DS_CLASS_T_DEC
@@ -113,7 +113,7 @@
 PB_DS_CLASS_C_DEC::
 erase(point_iterator it)
 {
-  PB_DS_ASSERT_VALID((*this),true)
+  PB_DS_ASSERT_VALID_COND((*this),true)
   _GLIBCXX_DEBUG_ASSERT(!base_type::empty());
 
   base_type::bubble_to_top(it.m_p_nd);
@@ -124,7 +124,7 @@
 
   m_p_max = 0;
 
-  PB_DS_ASSERT_VALID((*this),true)
+  PB_DS_ASSERT_VALID_COND((*this),true)
 }
 
 PB_DS_CLASS_T_DEC
@@ -133,11 +133,11 @@
 PB_DS_CLASS_C_DEC::
 erase_if(Pred pred)
 {
-  PB_DS_ASSERT_VALID((*this),true)
+  PB_DS_ASSERT_VALID_COND((*this),true)
 
   if (base_type::empty())
 {
-  PB_DS_ASSERT_VALID((*this),true)
+  PB_DS_ASSERT_VALID_COND((*this),true)
 
   return 0;
 }
@@ -185,7 +185,7 @@
 
   m_p_max = 0;
 
-  PB_DS_ASSERT_VALID((*this),true)
+  PB_DS_ASSERT_VALID_COND((*this),true)
 
   return ersd;
 }
Index: include/ext/pb_ds/detail/binomial_heap_base_/find_fn_imps.hpp
===
--- include/ext/pb_ds/detail/binomial_heap_base_/find_fn_imps.hpp	(revision 173552)
+++ include/ext/pb_ds/detail/binomial_heap_base_/find_fn_imps.hpp	(working copy)
@@ -43,7 +43,7 @@
 PB_DS_CLASS_C_DEC::
 top() const
 {
-  PB_DS_ASSERT_VALID((*this),false)
+  PB_DS_ASSERT_VALID_COND((*this),false)
   _GLIBCXX_DEBUG_ASSERT(!base_type::empty());
 
   if (m_p_max == 0)
Index: include/ext/pb_ds/detail/binomial_heap_base_/insert_fn_imps.hpp
===
--- include/ext/pb_ds/detail/binomial_heap_base_/insert_fn_imps.hpp	(revision 173552)
+++ include/ext/pb_ds/detail/binomial_heap_base_/insert_fn_imps.hpp	(working copy)
@@ -43,7 +43,7 @@
 PB_DS_CLASS_C_DEC::
 push(const_reference r_val)
 {
-  PB_DS_ASSERT_VALID((*this),true)
+  PB_DS_ASSERT_VALID_COND((*this),true

Re: hashtable exception safety patch

2011-09-14 Thread François Dumont

On 09/13/2011 10:14 PM, Paolo Carlini wrote:

Hi,


Rebasing would have forced me to delay the patch by one day, which I tried 
to avoid... without success. Here it is again.

Sorry about that; I was annoyed seeing a bit of (surely trivial) work I had 
done on the original patch nullified. Thanks for your patience.

Paolo

Attached patch applied.

2011-09-14  François Dumont fdum...@gcc.gnu.org
Paolo Carlini paolo.carl...@oracle.com

* include/bits/hashtable.h (_Hashtable::_M_rehash): Take and restore
hash policy _M_next_resize on exception.
(_Hashtable::_M_insert_bucket): Capture hash policy next resize
before using it and use the latter method to have it restored on
exception.
(_Hashtable::_M_insert(_Arg&&, std::false_type)): Likewise.
(_Hashtable::insert(_InputIterator, _InputIterator)): Likewise.
(_Hashtable::rehash): Likewise.
* testsuite/23_containers/unordered_set/insert/hash_policy.cc: New.
* testsuite/23_containers/unordered_multiset/insert/hash_policy.cc:
Likewise.

François
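
The heart of the fix is capturing the policy's resize threshold before a 
throwing operation and restoring it on exception, so a failed insert or 
rehash leaves the policy unchanged. A distilled standalone sketch with 
hypothetical names (RehashPolicy, checked_rehash), not the library code:

#include <cstddef>
#include <iostream>
#include <stdexcept>

struct RehashPolicy
{
  std::size_t next_resize = 0;
};

// Stand-in for the bucket-array reallocation, which may throw.
void rehash_may_throw(std::size_t n)
{
  if (n == 0)
    throw std::runtime_error("bucket allocation failed");
}

void checked_rehash(RehashPolicy& policy, std::size_t n)
{
  const std::size_t saved = policy.next_resize; // capture before use
  policy.next_resize = 2 * n;                   // speculative update
  try
    {
      rehash_may_throw(n);
    }
  catch (...)
    {
      policy.next_resize = saved; // restore the policy on failure
      throw;
    }
}

int main()
{
  RehashPolicy p;
  try { checked_rehash(p, 0); }
  catch (const std::exception&) { }
  std::cout << p.next_resize << '\n'; // still 0: policy unchanged
}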

Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 178792)
+++ include/bits/hashtable.h	(working copy)
@@ -458,8 +458,9 @@
   // reserve, if present, comes from _Rehash_base.
 
 private:
-  // Unconditionally change size of bucket array to n.
-  void _M_rehash(size_type __n);
+  // Unconditionally change size of bucket array to n, restore hash policy
+  // resize value to __next_resize on exception.
+  void _M_rehash(size_type __n, size_type __next_resize);
 };
 
 
@@ -743,7 +744,7 @@
   _M_rehash_policy = __pol;
   size_type __n_bkt = __pol._M_bkt_for_elements(_M_element_count);
   if (__n_bkt > _M_bucket_count)
-	_M_rehash(__n_bkt);
+	_M_rehash(__n_bkt, __pol._M_next_resize);
 }
 
   template<typename _Key, typename _Value,
@@ -910,6 +911,7 @@
   _M_insert_bucket(_Arg&& __v, size_type __n,
 		   typename _Hashtable::_Hash_code_type __code)
   {
+	const size_type __saved_next_resize = _M_rehash_policy._M_next_resize;
 	std::pair<bool, std::size_t> __do_rehash
 	  = _M_rehash_policy._M_need_rehash(_M_bucket_count,
 	_M_element_count, 1);
@@ -920,14 +922,14 @@
 	__n = this->_M_bucket_index(__k, __code, __do_rehash.second);
 	  }
 
-	// Allocate the new node before doing the rehash so that we don't
-	// do a rehash if the allocation throws.
-	_Node* __new_node = _M_allocate_node(std::forward<_Arg>(__v));
-
+	_Node* __new_node = 0;
 	__try
 	  {
+	// Allocate the new node before doing the rehash so that we
+	// don't do a rehash if the allocation throws.
+	    __new_node = _M_allocate_node(std::forward<_Arg>(__v));
 	if (__do_rehash.first)
-	  _M_rehash(__do_rehash.second);
+	  _M_rehash(__do_rehash.second, __saved_next_resize);
 
 	    __new_node->_M_next = _M_buckets[__n];
 	    this->_M_store_code(__new_node, __code);
@@ -939,7 +941,10 @@
 	  }
 	__catch(...)
 	  {
-	_M_deallocate_node(__new_node);
+	if (!__new_node)
+	  _M_rehash_policy._M_next_resize = __saved_next_resize;
+	else
+	  _M_deallocate_node(__new_node);
 	__throw_exception_again;
 	  }
   }
@@ -981,11 +986,12 @@
 		 _H1, _H2, _Hash, _RehashPolicy, __chc, __cit, __uk>::
       _M_insert(_Arg&& __v, std::false_type)
   {
+	const size_type __saved_next_resize = _M_rehash_policy._M_next_resize;
 	std::pair<bool, std::size_t> __do_rehash
 	  = _M_rehash_policy._M_need_rehash(_M_bucket_count,
 	_M_element_count, 1);
 	if (__do_rehash.first)
-	  _M_rehash(__do_rehash.second);
+	  _M_rehash(__do_rehash.second, __saved_next_resize);
 
 	const key_type& __k = this->_M_extract(__v);
 	typename _Hashtable::_Hash_code_type __code = this->_M_hash_code(__k);
@@ -1024,11 +1030,12 @@
   insert(_InputIterator __first, _InputIterator __last)
   {
 	size_type __n_elt = __detail::__distance_fw(__first, __last);
+	const size_type __saved_next_resize = _M_rehash_policy._M_next_resize;
 	std::pair<bool, std::size_t> __do_rehash
 	  = _M_rehash_policy._M_need_rehash(_M_bucket_count,
 	_M_element_count, __n_elt);
 	if (__do_rehash.first)
-	  _M_rehash(__do_rehash.second);
+	  _M_rehash(__do_rehash.second, __saved_next_resize);
 
 	for (; __first != __last; ++__first)
 	  this->insert(*__first);
@@ -1184,9 +1191,11 @@
 	   _H1, _H2, _Hash, _RehashPolicy, __chc, __cit, __uk::
 rehash(size_type __n)
 {
+  const size_type __saved_next_resize = _M_rehash_policy._M_next_resize;
   _M_rehash(std::max(_M_rehash_policy._M_next_bkt(__n),
 			 _M_rehash_policy._M_bkt_for_elements(_M_element_count
-			  + 1)));
+			  + 1)),
+		__saved_next_resize);
 }
 
   templatetypename _Key, typename _Value,
@@ -1196,11 +1205,12 @@
 void
 _Hashtable<_Key, _Value, _Allocator, _ExtractKey, _Equal,
 	   _H1, _H2, _Hash, _RehashPolicy, __chc, __cit, __uk>
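The rollback idiom this patch introduces can be sketched in isolation: snapshot the
piece of rehash-policy state that _M_need_rehash mutates, then put it back if
anything later throws. A minimal standalone illustration (the names are invented
for the sketch, not the library's own):

#include <cstddef>

struct rehash_policy
{
  std::size_t next_resize; // threshold mutated by need_rehash()
};

template<typename Op>
void update_with_rollback(rehash_policy& pol, Op op)
{
  const std::size_t saved = pol.next_resize; // snapshot before mutation
  try
    {
      op(pol);                 // may mutate next_resize and may throw
    }
  catch (...)
    {
      pol.next_resize = saved; // restore the policy state...
      throw;                   // ...and let the caller see the exception
    }
}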

Re: hash policy patch

2011-09-17 Thread François Dumont

On 09/16/2011 03:00 AM, Paolo Carlini wrote:

Ok... but:

+  us.max_load_factor(.5f);
+  VERIFY( us.max_load_factor() == .5f );

as we discussed already (didn't we?), this kind of VERIFY is in 
general very brittle (even if on the widespread base-2 systems 
probably we are lucky in this *specific* case): please just remove it, 
I don't think we'll miss much anyway.
I also wondered whether in the __rehash_policy method we shouldn't rehash as
soon as __n_bkt != _M_bucket_count rather than only when __n_bkt >
_M_bucket_count. Users might change the max load factor also to reduce
the number of buckets...
I should find the time to check C++11 about this. I'll let you know my 
opinion ASAP.

Attached patch applied.

2011-09-17  François Dumont fdum...@gcc.gnu.org

* include/bits/hashtable.h (_Hashtable::__rehash_policy(const
_RehashPolicy)): Commit the modification of the policy only if no
exception occured.
* 
testsuite/23_containers/unordered_set/max_load_factor/robustness.cc:

New.

Paolo, I know that using float equality comparison is not reliable in
general and I have removed the suspicious line, but in this case I can't
imagine a system where it could fail. When I do


const float f = 0.5f;
float foo = f;
assert( foo == f );

I can't imagine a system where the assert would fail, no? Even if the
system is not able to represent 0.5f in an accurate way, this inaccuracy
will be taken into account in the equality comparison. Unless you mean
that, from a C++ Standard point of view, users should not expect
max_load_factor() to return a value equal to the one passed through
max_load_factor(float). The Standard indeed does not make it explicit.


François

Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 178926)
+++ include/bits/hashtable.h	(working copy)
@@ -741,10 +741,10 @@
 	   _H1, _H2, _Hash, _RehashPolicy, __chc, __cit, __uk>::
 __rehash_policy(const _RehashPolicy& __pol)
 {
-  _M_rehash_policy = __pol;
   size_type __n_bkt = __pol._M_bkt_for_elements(_M_element_count);
   if (__n_bkt > _M_bucket_count)
-	_M_rehash(__n_bkt, __pol._M_next_resize);
+	_M_rehash(__n_bkt, _M_rehash_policy._M_next_resize);
+  _M_rehash_policy = __pol;
 }
 
   templatetypename _Key, typename _Value,
Index: testsuite/23_containers/unordered_set/max_load_factor/robustness.cc
===
--- testsuite/23_containers/unordered_set/max_load_factor/robustness.cc	(revision 0)
+++ testsuite/23_containers/unordered_set/max_load_factor/robustness.cc	(revision 0)
@@ -0,0 +1,77 @@
+// { dg-options "-std=gnu++0x" }
+
+// Copyright (C) 2011 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+//
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+#include <unordered_set>
+#include <limits>
+#include <ext/throw_allocator.h>
+#include <testsuite_hooks.h>
+
+void test01()
+{
+  bool test __attribute__((unused)) = true;
+
+  typedef std::numeric_limits<std::size_t> nl_size_t;
+  std::unordered_set<int, std::hash<int>, std::equal_to<int>,
+		     __gnu_cxx::throw_allocator_limit<int> > us;
+  int val = 0;
+  for (; val != 100; ++val)
+    {
+      VERIFY( us.insert(val).second );
+      VERIFY( us.load_factor() <= us.max_load_factor() );
+    }
+
+  float cur_max_load_factor = us.max_load_factor();
+  int counter = 0;
+  std::size_t thrown_exceptions = 0;
+  while (true)
+{
+  __gnu_cxx::limit_condition::set_limit(counter++);
+  bool do_break = false;
+  try
+	{
+	  us.max_load_factor(.5f);
+	  do_break = true;
+	}
+  catch (const __gnu_cxx::forced_error&)
+	{
+	  VERIFY( us.max_load_factor() == cur_max_load_factor );
+	  ++thrown_exceptions;
+	}
+      // Let's check that the unordered_set will still be correctly resized
+      // when needed.
+      __gnu_cxx::limit_condition::set_limit(nl_size_t::max());
+      for (;;)
+	{
+	  VERIFY( us.load_factor() <= us.max_load_factor() );
+	  size_t nbkts = us.bucket_count();
+	  VERIFY( us.insert(val++).second );
+	  if (us.bucket_count() != nbkts)
+	    break;
+	}
+      if (do_break)
+	break;
+    }
+  VERIFY( thrown_exceptions > 0 );
+}
+
+int main()
+{
+  test01();
+  return 0;
+}


Re: debug safe iterator patch

2012-02-06 Thread François Dumont

Attached patch applied

2012-02-06  François Dumont fdum...@gcc.gnu.org

* include/debug/safe_iterator.h
(_Safe_iterator::_M_before_dereferenceable): Avoid the expensive
creation of a _Safe_iterator instance to do the check.

François

On 02/05/2012 06:30 PM, Paolo Carlini wrote:

On 02/05/2012 06:29 PM, François Dumont wrote:

Hi

Here is a small performance patch for the debug mode. Nothing 
urgent, just tell me if I can apply it on trunk at the moment.
It impacts only debug-mode, thus it's pretty safe. If you tested it 
check-debug I guess you can commit it to mainline even now.


Thanks,
Paolo.



Index: include/debug/safe_iterator.h
===
--- include/debug/safe_iterator.h	(revision 183913)
+++ include/debug/safe_iterator.h	(working copy)
@@ -380,8 +380,12 @@
   bool
   _M_before_dereferenceable() const
   {
-	_Self __it = *this;
-	return __it._M_incrementable() && (++__it)._M_dereferenceable();
+	if (this->_M_incrementable())
+	{
+	  _Iterator __base = base();
+	  return ++__base != _M_get_sequence()->_M_base().end();
+	}
+	return false;
   }
 
   /// Is the iterator incrementable?


Re: [so_7-2] DR 13631 patch

2012-03-01 Thread François Dumont

Here is what I have finally committed to the libstdcxx_so_7-2 branch.

2012-03-01  François Dumont fdum...@gcc.gnu.org

DR libstdc++/13631
* config/locale/gnu/messages_member.h, messages_member.cc: Prefer
dgettext usage to gettext to allow usage of several catalogs at the
same time. Add an internal cache to map catalog names to 
catalog ids.

* testsuite/22_locale/messages/13631.cc: New.

I have integrated your remarks, Paolo, and also:
- Added a destructor to the Catalogs class; an application that correctly
closes its catalogs will make this destructor useless, but it is normal to
have one.
- Changed the _MapEntry from pair<catalog, string> to pair<catalog,
const char*>. This way _M_get does not make a string copy anymore (the
layout is sketched below).
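A standalone sketch of that layout, with simplified types: a vector of
(catalog, name) entries kept sorted by id, searched with a heterogeneous
comparator so neither lookup direction needs a temporary mapping object.
Illustrative only, not the committed code:

#include <algorithm>
#include <string>
#include <utility>
#include <vector>

typedef int catalog;
typedef std::pair<catalog, std::string> Mapping;

struct Comp
{
  bool operator()(const Mapping& m, catalog c) const { return m.first < c; }
  bool operator()(catalog c, const Mapping& m) const { return c < m.first; }
};

// Entries are kept sorted by catalog id, so lookup is a binary search.
const std::string* get_name(const std::vector<Mapping>& map, catalog c)
{
  std::vector<Mapping>::const_iterator it
    = std::lower_bound(map.begin(), map.end(), c, Comp());
  return (it != map.end() && it->first == c) ? &it->second : 0;
}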


Tested under linux x86_64.

François

On 02/29/2012 12:02 PM, Paolo Carlini wrote:

Hi,

Hi

I finally spent some more time enhancing this patch.

I used a sorted array to store the mapping; doing so I do not
need to export the _Rb_tree instantiation anymore. I also simplified
the test: to reproduce 13631 we don't need another valid
catalog, since any attempt to open a catalog after having opened the
'libstdc++' one shows the issue. So we do not need to add any
dg-require-XXX macro to validate a catalog's presence.


If no one see any problem with this patch I will commit it to 
libstdcxx_so_7-2 branch.
I'm Ok with the patch, besides a few minor stylistic nits below. You 
didn't post the ChangeLog entry: remember to have the PR number in it.

Index: testsuite/22_locale/messages/13631.cc
===
--- testsuite/22_locale/messages/13631.cc   (revision 0)
+++ testsuite/22_locale/messages/13631.cc   (revision 0)
@@ -0,0 +1,57 @@
+// { dg-require-namedlocale fr_FR }
+
+// Copyright (C) 2012 Free Software Foundation
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+#include <locale>
+#include <testsuite_hooks.h>
+
+int main(int argc, char **argv)
+{
+  bool test __attribute__((unused)) = true;
+  // This is defined through CXXFLAGS in scripts/testsuite_flags[.in].
+  const char* dir = LOCALEDIR;
+
+  std::locale l("fr_FR");
+
+  typedef std::messages<char> messages;
+
+  const messages& msgs_facet = std::use_facet<messages>(l);
+
+  messages::catalog msgs = msgs_facet.open("libstdc++", l, dir);
+  VERIFY( msgs >= 0 );
+
+  const char msgid[] = "please";
+  std::string translation1 = msgs_facet.get(msgs, 0, 0, msgid);
+
+  // Without a real translation this test doesn't mean anything:
+  VERIFY( translation1 != msgid );
+
+  // Opening another catalog was enough to show the problem, even a fake
+  // catalog.
+  messages::catalog fake_msgs = msgs_facet.open("fake", l);
+
+  std::string translation2 = msgs_facet.get(msgs, 0, 0, msgid);
+
+  // Close catalogs before doing the check to avoid leaks.
+  msgs_facet.close(fake_msgs);
+  msgs_facet.close(msgs);
+
+  VERIFY( translation1 == translation2 );
+
+  return 0;
+}

Normally we have the test proper in a separate function called from main.
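For context, the underlying fix is the switch from gettext(), which consults
only the single domain selected by the last textdomain() call, to dgettext(),
which names the domain per lookup, so several open catalogs can coexist. A
hedged sketch of the difference (illustrative paths, not the library's code):

#include <libintl.h>
#include <cstdio>

int main()
{
  // Hypothetical install directories for the two domains.
  bindtextdomain("libstdc++", "/usr/share/locale");
  bindtextdomain("fake", "/tmp");

  // gettext("please") would use whichever domain textdomain() selected
  // last; dgettext() pins the domain for this one lookup, so opening
  // "fake" cannot disturb translations fetched from "libstdc++".
  std::printf("%s\n", dgettext("libstdc++", "please"));
  return 0;
}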

Index: config/locale/gnu/messages_members.cc
===
--- config/locale/gnu/messages_members.cc   (revision 184372)
+++ config/locale/gnu/messages_members.cc   (working copy)
@@ -1,6 +1,7 @@
  // std::messages implementation details, GNU version -*- C++ -*-

-// Copyright (C) 2001, 2002, 2005, 2009, 2010 Free Software Foundation, Inc.
+// Copyright (C) 2001, 2002, 2005, 2009, 2010, 2012
+// Free Software Foundation, Inc.
  //
  // This file is part of the GNU ISO C++ Library.  This library is free
  // software; you can redistribute it and/or modify it under the
@@ -31,54 +32,180 @@
  #include <locale>
  #include <bits/c++locale_internal.h>

+#include <algorithm>
+#include <utility>
+#include <ext/concurrence.h>
+
+namespace
+{
+  using namespace std;
+
+  struct Comp
+  {
+typedef messages_base::catalog catalog;
+    typedef pair<catalog, string> _Mapping;
+
+    bool operator () (catalog __cat, const _Mapping* __pair) const
+    { return __cat < __pair->first; }

No space after operator


+
+    bool operator () (const _Mapping* __pair, catalog __cat) const
+    { return __pair->first < __cat; }

Likewise.


+  };
+
+  class Catalogs
+  {
+typedef messages_base::catalog catalog;
+    typedef pair<catalog, string

Re: [v3] fix libstdc++/52476

2012-03-16 Thread François Dumont

Attached patch applied.

2012-03-16  François Dumont fdum...@gcc.gnu.org

PR libstdc++/52476
* include/bits/hashtable.h (_Hashtable::_M_rehash_aux): Add.
(_Hashtable::_M_rehash): Use the latter.
* testsuite/23_containers/unordered_multimap/insert/52476.cc: New.
* testsuite/23_containers/unordered_multiset/insert/52476.cc: New.

Regards

On 03/16/2012 10:19 AM, Paolo Carlini wrote:

Hi,

Regarding the testcase: the code in the ticket shows the problem but
is not a test. The test might seem a little complicated, but I tried to make
it independent of how elements are inserted into the container, which is not
defined by the Standard. Even if we change the implementation and store
0-3,0-2,0-1,0-0 rather than 0-0,0-1,0-2,0-3, the test will still work and only
check the Standard's point, which is that the order of those elements should be
preserved on rehash.

Understood, thanks for adding a second testcase for multiset.

Tested under Linux x86_64.

Ok for mainline ?

Yes, thanks a lot. Please keep in mind that barring special issues we want the 
fix for 4.7.1 too.

Thanks
Paolo


Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 185475)
+++ include/bits/hashtable.h	(working copy)
@@ -596,6 +596,12 @@
   // reserve, if present, comes from _Rehash_base.
 
 private:
+  // Helper rehash method used when keys are unique.
+  void _M_rehash_aux(size_type __n, std::true_type);
+
+  // Helper rehash method used when keys can be non-unique.
+  void _M_rehash_aux(size_type __n, std::false_type);
+
   // Unconditionally change size of bucket array to n, restore hash policy
   // state to __state on exception.
   void _M_rehash(size_type __n, const _RehashPolicyState& __state);
@@ -1592,41 +1598,145 @@
 {
   __try
 	{
-	  _Bucket* __new_buckets = _M_allocate_buckets(__n);
-	  _Node* __p = _M_begin();
-	  _M_before_begin._M_nxt = nullptr;
-	  std::size_t __cur_bbegin_bkt;
-	  while (__p)
+	  _M_rehash_aux(__n, integral_constant<bool, __uk>());
+	}
+  __catch(...)
+	{
+	  // A failure here means that buckets allocation failed.  We only
+	  // have to restore hash policy previous state.
+	  _M_rehash_policy._M_reset(__state);
+	  __throw_exception_again;
+	}
+}
+
+  // Rehash when there are no equivalent elements.
+  template<typename _Key, typename _Value,
+	   typename _Allocator, typename _ExtractKey, typename _Equal,
+	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
+	   bool __chc, bool __cit, bool __uk>
+    void
+    _Hashtable<_Key, _Value, _Allocator, _ExtractKey, _Equal,
+	       _H1, _H2, _Hash, _RehashPolicy, __chc, __cit, __uk>::
+_M_rehash_aux(size_type __n, std::true_type)
+{
+  _Bucket* __new_buckets = _M_allocate_buckets(__n);
+  _Node* __p = _M_begin();
+  _M_before_begin._M_nxt = nullptr;
+  std::size_t __bbegin_bkt;
+  while (__p)
+	{
+	  _Node* __next = __p->_M_next();
+	  std::size_t __bkt = _HCBase::_M_bucket_index(__p, __n);
+	  if (!__new_buckets[__bkt])
 	{
-	  _Node* __next = __p->_M_next();
-	  std::size_t __new_index = _HCBase::_M_bucket_index(__p, __n);
-	  if (!__new_buckets[__new_index])
+	  __p->_M_nxt = _M_before_begin._M_nxt;
+	  _M_before_begin._M_nxt = __p;
+	  __new_buckets[__bkt] = &_M_before_begin;
+	  if (__p->_M_nxt)
+		__new_buckets[__bbegin_bkt] = __p;
+	  __bbegin_bkt = __bkt;
+	}
+	  else
+	{
+	  __p->_M_nxt = __new_buckets[__bkt]->_M_nxt;
+	  __new_buckets[__bkt]->_M_nxt = __p;
+	}
+	  __p = __next;
+	}
+  _M_deallocate_buckets(_M_buckets, _M_bucket_count);
+  _M_bucket_count = __n;
+  _M_buckets = __new_buckets;
+}
+
+  // Rehash when there can be equivalent elements, preserve their relative
+  // order.
+  template<typename _Key, typename _Value,
+	   typename _Allocator, typename _ExtractKey, typename _Equal,
+	   typename _H1, typename _H2, typename _Hash, typename _RehashPolicy,
+	   bool __chc, bool __cit, bool __uk>
+    void
+    _Hashtable<_Key, _Value, _Allocator, _ExtractKey, _Equal,
+	       _H1, _H2, _Hash, _RehashPolicy, __chc, __cit, __uk>::
+_M_rehash_aux(size_type __n, std::false_type)
+{
+  _Bucket* __new_buckets = _M_allocate_buckets(__n);
+
+  _Node* __p = _M_begin();
+  _M_before_begin._M_nxt = nullptr;
+  std::size_t __bbegin_bkt;
+  std::size_t __prev_bkt;
+  _Node* __prev_p = nullptr;
+  bool __check_bucket = false;
+
+  while (__p)
+	{
+	  bool __check_now = true;
+	  _Node* __next = __p->_M_next();
+	  std::size_t __bkt = _HCBase::_M_bucket_index(__p, __n);
+
+	  if (!__new_buckets[__bkt])
+	{
+	  __p->_M_nxt = _M_before_begin._M_nxt;
+	  _M_before_begin._M_nxt = __p;
+	  __new_buckets[__bkt] = &_M_before_begin;
+	  if (__p->_M_nxt)
+		__new_buckets[__bbegin_bkt] = __p;
+	  __bbegin_bkt = __bkt
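The behaviour under test can be observed from client code in a few lines,
written, like the committed test, so that it does not assume any particular
insertion order for equivalent keys (a sketch in the spirit of the testcase,
not the testcase itself):

#include <cassert>
#include <unordered_map>
#include <vector>

int main()
{
  typedef std::unordered_multimap<int, int> mmap;
  mmap m;
  for (int i = 0; i < 4; ++i)
    m.insert(std::make_pair(0, i));   // four equivalent keys

  // Record whatever relative order the implementation chose...
  std::vector<int> before;
  for (mmap::iterator it = m.begin(); it != m.end(); ++it)
    before.push_back(it->second);

  m.rehash(m.bucket_count() * 8);     // force a rehash

  // ...and check the rehash left that order untouched.
  std::vector<int>::size_type i = 0;
  for (mmap::iterator it = m.begin(); it != m.end(); ++it, ++i)
    assert(before[i] == it->second);
  return 0;
}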

Re: PR 51386

2011-12-07 Thread François Dumont

Attached patch applied:

2011-12-07  François Dumont fdum...@gcc.gnu.org

PR libstdc++/51386
* include/bits/hashtable_policy.h 
(_Prime_rehash_policy::_M_next_bkt):

Fix computation of _M_prev_resize so that the hashtable does not keep
being rehashed when _M_max_load_factor is lower than 1.

François

On 12/07/2011 11:21 AM, Paolo Carlini wrote:

Hi,

Ok to commit ?
Yes, thanks a lot for handling this. Please remember to add a proper 
header to the ChangeLog entry.


Thanks again,
Paolo.



Index: include/bits/hashtable_policy.h
===
--- include/bits/hashtable_policy.h	(revision 181975)
+++ include/bits/hashtable_policy.h	(working copy)
@@ -300,23 +300,30 @@
   {
 // Optimize lookups involving the first elements of __prime_list.
 // (useful to speed-up, eg, constructors)
-static const unsigned long __fast_bkt[12]
+static const unsigned char __fast_bkt[12]
   = { 2, 2, 2, 3, 5, 5, 7, 7, 11, 11, 11, 11 };
 
+    if (__n <= 11)
+  {
+	_M_prev_resize = 0;
+	_M_next_resize
+	  = __builtin_ceil(__fast_bkt[__n] * (long double)_M_max_load_factor);
+	return __fast_bkt[__n];
+  }
+
 const unsigned long* __p
-      = __n <= 11 ? __fast_bkt + __n
-		  : std::lower_bound(__prime_list + 5,
- __prime_list + _S_n_primes, __n);
+  = std::lower_bound(__prime_list + 5, __prime_list + _S_n_primes, __n);
 
-_M_prev_resize = __builtin_floor(*__p * (long double)_M_max_load_factor);
-if (__p != __fast_bkt)
-  _M_prev_resize = std::min(_M_prev_resize,
-static_caststd::size_t(*(__p - 1)));
-// Lets guaranty a minimal grow step of 11:
+// Shrink will take place only if the number of elements is small enough
+// so that the prime number 2 steps before __p is large enough to still
+// conform to the max load factor:
+_M_prev_resize
+  = __builtin_floor(*(__p - 2) * (long double)_M_max_load_factor);
+
+    // Let's guarantee that a minimal grow step of 11 is used
     if (*__p - __n < 11)
-  __p = std::lower_bound(__prime_list + 5,
-			 __prime_list + _S_n_primes, __n + 11);
-_M_next_resize = __builtin_floor(*__p * (long double)_M_max_load_factor);
+  __p = std::lower_bound(__p, __prime_list + _S_n_primes, __n + 11);
+_M_next_resize = __builtin_ceil(*__p * (long double)_M_max_load_factor);
 return *__p;
   }
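The effect of the fix can be pictured with a toy version of the policy: growth
triggers when the element count reaches next_resize = ceil(buckets *
max_load_factor), and shrink only when it falls below prev_resize, computed
from a prime two steps back in the table, so a container sitting between the
two thresholds is never rehashed again. A simplified sketch, not the real
_Prime_rehash_policy:

#include <cmath>
#include <cstddef>

struct toy_policy
{
  float max_load_factor;
  std::size_t prev_resize, next_resize;

  // Called after picking bucket count bkt, with bkt_prev2 the prime
  // two steps below it in the prime table.
  void set_thresholds(std::size_t bkt_prev2, std::size_t bkt)
  {
    prev_resize = std::size_t(std::floor(bkt_prev2 * (double)max_load_factor));
    next_resize = std::size_t(std::ceil(bkt * (double)max_load_factor));
  }

  // No rehash while the size stays inside the hysteresis window.
  bool needs_rehash(std::size_t n_elements) const
  { return n_elements < prev_resize || n_elements >= next_resize; }
};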
 


Re: unordered containers emplace

2011-12-09 Thread François Dumont

Attached patch applied.

2011-12-09  François Dumont fdum...@gcc.gnu.org

* include/bits/hashtable.h (_Hashtable::emplace,
_Hashtable::emplace_hint): Add.
* include/debug/unordered_set (unordered_set::emplace,
unordered_set::emplace_hint, unordered_multiset::emplace,
unordered_multiset::emplace_hint): Add.
* include/profile/unordered_set: Likewise.
* include/debug/unordered_map (unordered_map::emplace,
unordered_map::emplace_hint, unordered_multimap::emplace,
unordered_multimap::emplace_hint): Add.
* include/profile/unordered_map: Likewise.
* testsuite/23_containers/unordered_map/modifiers/emplace.cc: New.
* testsuite/23_containers/unordered_multimap/modifiers/emplace.cc:
New.
* testsuite/23_containers/unordered_set/modifiers/emplace.cc: New.
* testsuite/23_containers/unordered_multiset/modifiers/emplace.cc:
New.
* testsuite/util/testsuite_container_traits.h
(traits_base::has_emplace): Add and defined as std::true_type for
unordered containers.
* testsuite/util/exception/safety.h (emplace, emplace_hint): 
Add and

use them in basic_safety exception test case.
* doc/xml/manual/status_cxx2011.xml: Update unordered containers
status.

Regards

On 12/09/2011 11:17 AM, Paolo Carlini wrote:

Ok to commit ?

Sure, thanks!

Paolo.



Index: doc/xml/manual/status_cxx2011.xml
===
--- doc/xml/manual/status_cxx2011.xml	(revision 182133)
+++ doc/xml/manual/status_cxx2011.xml	(working copy)
@@ -1403,11 +1403,10 @@
       <entry>Missing emplace members</entry>
     </row>
     <row>
-      <?dbhtml bgcolor="#B0B0B0" ?>
       <entry>23.2.5</entry>
       <entry>Unordered associative containers</entry>
-      <entry>Partial</entry>
-      <entry>Missing emplace members</entry>
+      <entry>Y</entry>
+      <entry/>
     </row>
     <row>
       <entry>23.3</entry>
Index: include/debug/unordered_map
===
--- include/debug/unordered_map	(revision 182133)
+++ include/debug/unordered_map	(working copy)
@@ -204,6 +204,29 @@
   cend(size_type __b) const
   { return const_local_iterator(_Base::cend(__b), __b, this); }
 
+      template<typename... _Args>
+	std::pair<iterator, bool>
+	emplace(_Args&&... __args)
+	{
+	  size_type __bucket_count = this->bucket_count();
+	  std::pair<_Base_iterator, bool> __res
+	    = _Base::emplace(std::forward<_Args>(__args)...);
+	  _M_check_rehashed(__bucket_count);
+	  return std::make_pair(iterator(__res.first, this), __res.second);
+	}
+
+      template<typename... _Args>
+	iterator
+	emplace_hint(const_iterator __hint, _Args&&... __args)
+	{
+	  __glibcxx_check_insert(__hint);
+	  size_type __bucket_count = this->bucket_count();
+	  _Base_iterator __it = _Base::emplace_hint(__hint.base(),
+					std::forward<_Args>(__args)...);
+	  _M_check_rehashed(__bucket_count);
+	  return iterator(__it, this);
+	}
+
   std::pair<iterator, bool>
   insert(const value_type& __obj)
   {
@@ -587,6 +610,29 @@
   cend(size_type __b) const
   { return const_local_iterator(_Base::cend(__b), __b, this); }
 
+      template<typename... _Args>
+	iterator
+	emplace(_Args&&... __args)
+	{
+	  size_type __bucket_count = this->bucket_count();
+	  _Base_iterator __it
+	    = _Base::emplace(std::forward<_Args>(__args)...);
+	  _M_check_rehashed(__bucket_count);
+	  return iterator(__it, this);
+	}
+
+      template<typename... _Args>
+	iterator
+	emplace_hint(const_iterator __hint, _Args&&... __args)
+	{
+	  __glibcxx_check_insert(__hint);
+	  size_type __bucket_count = this->bucket_count();
+	  _Base_iterator __it = _Base::emplace_hint(__hint.base(),
+					std::forward<_Args>(__args)...);
+	  _M_check_rehashed(__bucket_count);
+	  return iterator(__it, this);
+	}
+
   iterator
   insert(const value_type& __obj)
   {
Index: include/debug/unordered_set
===
--- include/debug/unordered_set	(revision 182133)
+++ include/debug/unordered_set	(working copy)
@@ -204,6 +204,29 @@
   cend(size_type __b) const
   { return const_local_iterator(_Base::cend(__b), __b, this); }
 
+      template<typename... _Args>
+	std::pair<iterator, bool>
+	emplace(_Args&&... __args)
+	{
+	  size_type __bucket_count = this->bucket_count();
+	  std::pair<_Base_iterator, bool> __res
+	    = _Base::emplace(std::forward<_Args>(__args)...);
+	  _M_check_rehashed(__bucket_count);
+	  return std::make_pair(iterator(__res.first, this), __res.second);
+	}
+
+      template<typename... _Args>
+	iterator
+	emplace_hint(const_iterator __hint, _Args&&... __args)
+	{
+	  __glibcxx_check_insert(__hint);
+	  size_type __bucket_count = this->bucket_count();
+	  _Base_iterator __it = _Base::emplace_hint(__hint.base(),
+					std::forward<_Args>(__args)...);
+	  _M_check_rehashed(__bucket_count);
+	  return iterator(__it
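With these members in place, client code can construct elements directly from
constructor arguments instead of building a value_type first, for example:

#include <string>
#include <unordered_map>

int main()
{
  std::unordered_map<std::string, std::string> m;

  // Arguments are forwarded to the value_type constructor; no
  // temporary pair is built at the call site.
  m.emplace("key", "value");

  // emplace_hint additionally takes a position hint, like insert.
  m.emplace_hint(m.begin(), "other", "value");
  return 0;
}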

Re: profile mode patch

2011-12-10 Thread François Dumont

Attached patch applied.

2011-12-12  François Dumont fdum...@gcc.gnu.org

* include/profile/unordered_set: Minor formatting changes.
(unordered_set::_M_profile_destruct,
unordered_multiset::_M_profile_destruct): Fix implementation to not
rely on normal implementation details anymore.
(unordered_set::_M_profile_resize,
unordered_multiset::_M_profile_resize): Implement consistently
across all unordered containers.
(unordered_set::emplace, unordered_set::emplace_hint,
unordered_multiset::emplace, unordered_multiset::emplace_hint): Add
to signal rehash to the profiling system.
* include/profile/unordered_map: Likewise for unordered_map and
unordered_multimap.


Thanks Paolo for the help on the ChangeLog; it is sometimes quite hard
to find the correct level of detail.


François
Index: include/profile/unordered_map
===
--- include/profile/unordered_map	(revision 182173)
+++ include/profile/unordered_map	(working copy)
@@ -171,6 +171,28 @@
 _Base::clear();
   }
 
+      template<typename... _Args>
+	std::pair<iterator, bool>
+	emplace(_Args&&... __args)
+	{
+	  size_type __old_size = _Base::bucket_count();
+	  std::pair<iterator, bool> __res
+	    = _Base::emplace(std::forward<_Args>(__args)...);
+	  _M_profile_resize(__old_size);
+	  return __res;
+	}
+
+      template<typename... _Args>
+	iterator
+	emplace_hint(const_iterator __it, _Args&&... __args)
+	{
+	  size_type __old_size = _Base::bucket_count();
+	  iterator __res
+	    = _Base::emplace_hint(__it, std::forward<_Args>(__args)...);
+	  _M_profile_resize(__old_size);
+	  return __res;
+	}
+
   void
   insert(std::initializer_list<value_type> __l)
   { 
@@ -182,7 +204,7 @@
   std::pair<iterator, bool>
   insert(const value_type& __obj)
   {
-    size_type __old_size =  _Base::bucket_count();
+    size_type __old_size = _Base::bucket_count();
     std::pair<iterator, bool> __res = _Base::insert(__obj);
     _M_profile_resize(__old_size); 
     return __res;
@@ -203,7 +225,7 @@
 	std::pair<iterator, bool>
 	insert(_Pair&& __obj)
 	{
-	  size_type __old_size =  _Base::bucket_count();
+	  size_type __old_size = _Base::bucket_count();
 	  std::pair<iterator, bool> __res
 	    = _Base::insert(std::forward<_Pair>(__obj));
 	  _M_profile_resize(__old_size); 
@@ -243,7 +265,7 @@
   mapped_type&
   operator[](const _Key& __k)
   {
-size_type __old_size =  _Base::bucket_count();
+size_type __old_size = _Base::bucket_count();
 mapped_type __res = _M_base()[__k];
 _M_profile_resize(__old_size); 
 return __res;
@@ -252,7 +274,7 @@
   mapped_type&
   operator[](_Key&& __k)
   {
-size_type __old_size =  _Base::bucket_count();
+size_type __old_size = _Base::bucket_count();
 mapped_type __res = _M_base()[std::move(__k)];
 _M_profile_resize(__old_size); 
 return __res;
@@ -264,9 +286,9 @@
 
   void rehash(size_type __n)
   {
-size_type __old_size =  _Base::bucket_count();
-_Base::rehash(__n);
-_M_profile_resize(__old_size); 
+	size_type __old_size = _Base::bucket_count();
+	_Base::rehash(__n);
+	_M_profile_resize(__old_size); 
   }
 
 private:
@@ -274,33 +296,33 @@
   _M_profile_resize(size_type __old_size)
   {
 	size_type __new_size = _Base::bucket_count();
-if (__old_size != __new_size)
+	if (__old_size != __new_size)
 	  __profcxx_hashtable_resize(this, __old_size, __new_size);
   }
 
   void
   _M_profile_destruct()
   {
-    size_type __hops = 0, __lc = 0, __chain = 0;
-    for (iterator __it = _M_base().begin(); __it != _M_base().end();
-	 ++__it)
+	size_type __hops = 0, __lc = 0, __chain = 0;
+	iterator __it = this->begin();
+	while (__it != this->end())
 	  {
-	    while (__it._M_cur_node->_M_next)
-	      {
-		++__chain;
-		++__it;
-	      }
+	    size_type __bkt = this->bucket(__it->first);
+	    for (++__it; __it != this->end()
+			 && this->bucket(__it->first) == __bkt;
+		 ++__it)
+	      ++__chain;
 	    if (__chain)
 	      {
 		++__chain;
-		__lc = __lc > __chain ? __lc : __chain;  
+		__lc = __lc > __chain ? __lc : __chain;
 		__hops += __chain * (__chain - 1) / 2;
 		__chain = 0;
 	      }
 	  }
-    __profcxx_hashtable_destruct2(this, __lc,  _Base::size(), __hops); 
+	__profcxx_hashtable_destruct2(this, __lc, _Base::size(), __hops);
   }
-   };
+  };
 
   templatetypename _Key, typename _Tp, typename _Hash,
 	   typename _Pred, typename _Alloc
@@ -429,12 +451,6 @@
 _M_profile_destruct();
   }
 
-      _Base&
-      _M_base() noexcept   { return *this; }
-
-      const _Base&
-      _M_base() const noexcept { return *this; }
-
   void
   clear() noexcept
   {
@@ -444,20 +460,42 @@
 _Base::clear();
   }
 
+      template<typename
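What _M_profile_destruct computes can be restated on its own: walk the
container bucket by bucket, track the longest collision chain and the total
number of extra hops lookups may pay. A standalone sketch over the public
bucket interface (illustrative only):

#include <cstddef>

template<typename UnorderedContainer>
void chain_stats(const UnorderedContainer& c,
                 std::size_t& longest, std::size_t& hops)
{
  longest = hops = 0;
  for (std::size_t b = 0; b < c.bucket_count(); ++b)
    {
      std::size_t chain = c.bucket_size(b);
      if (chain > 1)
        {
          if (chain > longest)
            longest = chain;
          hops += chain * (chain - 1) / 2;  // pairwise hop count
        }
    }
}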

Re: RE :Re: RE :Re: hashtable local iterator

2012-01-03 Thread François Dumont

Attached patch applied.

2012-01-03  François Dumont fdum...@gcc.gnu.org

* include/bits/hashtable_policy.h (_Ebo_helper): Rename to the more
specific _Hashtable_ebo_helper. Hide this implementation detail thanks
to private inheritance.

I was about to roll the ChangeLog over, but I saw that there is already a
January entry in it, so I kept adding to the current one.


François


On 01/02/2012 10:13 PM, Paolo Carlini wrote:

On 01/02/2012 09:01 PM, François Dumont wrote:

On 01/02/2012 02:27 PM, Paolo Carlini wrote:

Hi,

Hi

Here is a proposed patch incorporating all your remarks.

2012-01-02  François Dumont fdum...@gcc.gnu.org

* include/bits/hashtable_policy.h (_Ebo_helper): Rename into the
more specific _Hashtable_ebo_helper. Hide this implementation detail
thanks to private inheritance.

Tested under x86_64 linux normal and debug mode.

Ok to commit ?
Can you please also adjust those comments referring to the 
deprecated unary_function? Otherwise the patch looks good to me.

Like this Paolo ?
Hmm, now I see there is no underscore anywhere in those comments, so
there is no risk of confusion with the deprecated unary_function
and binary_function. I'm sorry, I may have misread; therefore I think
you can just go ahead with the rest of your work and leave out the
changes to the comments in include/bits/hashtable.h.


Thanks again,
Paolo.



Index: include/bits/hashtable_policy.h
===
--- include/bits/hashtable_policy.h	(revision 182854)
+++ include/bits/hashtable_policy.h	(working copy)
@@ -1,6 +1,6 @@
 // Internal policy header for unordered_set and unordered_map -*- C++ -*-
 
-// Copyright (C) 2010, 2011 Free Software Foundation, Inc.
+// Copyright (C) 2010, 2011, 2012 Free Software Foundation, Inc.
 //
 // This file is part of the GNU ISO C++ Library.  This library is free
 // software; you can redistribute it and/or modify it under the
@@ -517,43 +517,43 @@
   // and when it worth it, type is empty.
  template<int _Nm, typename _Tp,
	   bool __use_ebo = !__is_final(_Tp) && __is_empty(_Tp)>
-    struct _Ebo_helper;
+    struct _Hashtable_ebo_helper;
 
   // Specialization using EBO.
   template<int _Nm, typename _Tp>
-    struct _Ebo_helper<_Nm, _Tp, true> : _Tp
+    struct _Hashtable_ebo_helper<_Nm, _Tp, true> : private _Tp
 {
-      _Ebo_helper() = default;
-      _Ebo_helper(const _Tp& __tp) : _Tp(__tp)
+      _Hashtable_ebo_helper() = default;
+      _Hashtable_ebo_helper(const _Tp& __tp) : _Tp(__tp)
   { }
 
   static const _Tp&
-      _S_cget(const _Ebo_helper& __eboh)
+      _S_cget(const _Hashtable_ebo_helper& __eboh)
   { return static_cast<const _Tp&>(__eboh); }
 
   static _Tp&
-      _S_get(_Ebo_helper& __eboh)
+      _S_get(_Hashtable_ebo_helper& __eboh)
   { return static_cast<_Tp&>(__eboh); }
 };
 
   // Specialization not using EBO.
  template<int _Nm, typename _Tp>
-    struct _Ebo_helper<_Nm, _Tp, false>
+    struct _Hashtable_ebo_helper<_Nm, _Tp, false>
 {
-      _Ebo_helper() = default;
-      _Ebo_helper(const _Tp& __tp) : __m_tp(__tp)
+      _Hashtable_ebo_helper() = default;
+      _Hashtable_ebo_helper(const _Tp& __tp) : _M_tp(__tp)
   { }
 
   static const _Tp&
-      _S_cget(const _Ebo_helper& __eboh)
-      { return __eboh.__m_tp; }
+      _S_cget(const _Hashtable_ebo_helper& __eboh)
+      { return __eboh._M_tp; }
 
   static _Tp&
-      _S_get(_Ebo_helper& __eboh)
-      { return __eboh.__m_tp; }
+      _S_get(_Hashtable_ebo_helper& __eboh)
+      { return __eboh._M_tp; }
 
 private:
-      _Tp __m_tp;
+      _Tp _M_tp;
 };
 
   // Class template _Hash_code_base.  Encapsulates two policy issues that
@@ -583,11 +583,13 @@
  template<typename _Key, typename _Value, typename _ExtractKey,
	   typename _H1, typename _H2, typename _Hash>
    struct _Hash_code_base<_Key, _Value, _ExtractKey, _H1, _H2, _Hash, false>
-    : _Ebo_helper<0, _ExtractKey>, _Ebo_helper<1, _Hash>
+    : private _Hashtable_ebo_helper<0, _ExtractKey>,
+      private _Hashtable_ebo_helper<1, _Hash>
 {
 private:
-      typedef _Ebo_helper<0, _ExtractKey> _EboExtractKey;
-      typedef _Ebo_helper<1, _Hash>       _EboHash;
+      typedef _Hashtable_ebo_helper<0, _ExtractKey> _EboExtractKey;
+      typedef _Hashtable_ebo_helper<1, _Hash>       _EboHash;
+
 protected:
   // We need the default constructor for the local iterators.
   _Hash_code_base() = default;
@@ -655,12 +657,14 @@
 	   typename _H1, typename _H2>
    struct _Hash_code_base<_Key, _Value, _ExtractKey, _H1, _H2,
 			   _Default_ranged_hash, false>
-    : _Ebo_helper<0, _ExtractKey>, _Ebo_helper<1, _H1>, _Ebo_helper<2, _H2>
+    : private _Hashtable_ebo_helper<0, _ExtractKey>,
+      private _Hashtable_ebo_helper<1, _H1>,
+      private _Hashtable_ebo_helper<2, _H2>
 {
 private:
-      typedef _Ebo_helper<0, _ExtractKey> _EboExtractKey;
-      typedef _Ebo_helper<1, _H1>         _EboH1;
-      typedef _Ebo_helper<2, _H2>         _EboH2
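The renamed helper exists to apply the empty-base optimization: an empty,
non-final functor stored as a private base contributes zero bytes, while the
fallback specialization stores it as a plain data member. A reduced sketch of
the idea, using the same GCC intrinsics as the patch (the _Nm index only
disambiguates repeated bases):

template<int _Nm, typename _Tp,
         bool __use_ebo = !__is_final(_Tp) && __is_empty(_Tp)>
struct ebo_helper;

// Empty type: inherit privately and pay no storage.
template<int _Nm, typename _Tp>
struct ebo_helper<_Nm, _Tp, true> : private _Tp
{
  static _Tp& get(ebo_helper& h) { return static_cast<_Tp&>(h); }
};

// Non-empty (or final) type: fall back to a data member.
template<int _Nm, typename _Tp>
struct ebo_helper<_Nm, _Tp, false>
{
  _Tp _M_tp;
  static _Tp& get(ebo_helper& h) { return h._M_tp; }
};

struct empty_hash { };

// With EBO the hasher adds nothing: typically
// sizeof(ebo_helper<0, empty_hash>) == 1, and a class deriving from it
// alongside real data members grows by zero bytes.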

Re: [PATCH] hashtable insert enhancement

2012-01-13 Thread François Dumont

Attached patch applied.

2012-01-13  François Dumont fdum...@gcc.gnu.org

* include/bits/hashtable_policy.h (_Hash_node_base): New, use it as
base class of ...
(_Hash_node<_Value, true>, _Hash_node<_Value, false>): ... those.
* include/bits/hashtable.h (_Hashtable): Replace _M_begin_bucket_index
by _M_before_begin. Review implementation so that we do not need to
look for previous non-empty bucket when inserting nodes.

François


On 01/13/2012 12:03 AM, Paolo Carlini wrote:

On 01/09/2012 09:36 PM, François Dumont wrote:
Same patch proposal as the previous one, except that I have
revisited the _M_rehash method, which was still trying to keep nodes
ordered by their bucket index.

Please go ahead.

Thanks,
Paolo.



Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 183031)
+++ include/bits/hashtable.h	(working copy)
@@ -93,13 +93,13 @@
   // and unordered_multimap.
   /**
* Here's _Hashtable data structure, each _Hashtable has:
-   * - _Bucket[]       _M_buckets
-   * - size_type       _M_bucket_count
-   * - size_type       _M_begin_bucket_index
-   * - size_type       _M_element_count
+   * - _Bucket[]       _M_buckets
+   * - _Hash_node_base _M_before_begin
+   * - size_type       _M_bucket_count
+   * - size_type       _M_element_count
    *
-   * with _Bucket being _Node* and _Node:
-   * - _Node*      _M_next
+   * with _Bucket being _Hash_node* and _Hash_node containing:
+   * - _Hash_node* _M_next
    * - Tp          _M_value
    * - size_t      _M_code if cache_hash_code is true
    *
@@ -107,36 +107,34 @@
    * - std::forward_list<_Node> containing the elements
    * - std::vector<std::forward_list<_Node>::iterator> representing the buckets
    *
-   * The first non-empty bucket with index _M_begin_bucket_index contains the
-   * first container node which is also the first bucket node whereas other
-   * non-empty buckets contain the node before the first bucket node. This is so
-   * to implement something like a std::forward_list::erase_after on container
-   * erase calls.
+   * The non-empty buckets contain the node before the first bucket node. This
+   * design allows implementing something like std::forward_list::insert_after
+   * on container insertion and std::forward_list::erase_after on container
+   * erase calls. _M_before_begin is equivalent to
+   * std::forward_list::before_begin. Empty buckets contain nullptr.
+   * Note that one of the non-empty buckets contains &_M_before_begin, which is
+   * not a dereferenceable node, so the node pointers in buckets shall never be
+   * dereferenced; only their next node may be.
    *
-   * Access to a bucket's last element requires a check on the hash code to see
-   * if the node is still in the bucket. Such a design imposes a quite efficient
-   * hash functor and is one of the reasons it is highly advised to set
+   * Walking through a bucket's nodes requires a check on the hash code to see
+   * if each node is still in the bucket. Such a design imposes a quite
+   * efficient hash functor and is one of the reasons it is highly advised to
    * set __cache_hash_code to true.
    *
    * The container iterators are simply built from nodes. This way incrementing
-   * the iterator is perfectly efficient no matter how many empty buckets there
-   * are in the container.
+   * the iterator is perfectly efficient independent of how many empty buckets
+   * there are in the container.
    *
    * On insert we compute the element's hash code and thanks to it find the
-   * bucket index. If the element is the first one in the bucket we must find
-   * the previous non-empty bucket where the previous node lies. To keep this
-   * loop minimal it is important that the number of buckets is not too high
-   * compared to the number of elements. So the hash policy must be carefully
-   * designed so that it computes a bucket count large enough to respect the
-   * user-defined load factor but also not too large, to limit the impact on
-   * the insert operation.
+   * bucket index. If the element must be inserted in an empty bucket we add it
+   * at the beginning of the singly linked list and make the bucket point to
+   * _M_before_begin. The bucket that used to point to _M_before_begin, if any,
+   * is updated to point to its new before-begin node.
    *
    * On erase, the simple iterator design imposes using the hash functor to get
    * the index of the bucket to update. For this reason, when __cache_hash_code
    * is set to false, there is a static assertion that the hash functor cannot
    * throw.
-   *
-   * _M_begin_bucket_index is used to offer constant time access to the
-   * container begin iterator.
*/
 
  template<typename _Key, typename _Value, typename _Allocator,
@@ -182,6 +180,8 @@
 	using __if_hash_code_not_cached
 	  = __or_<integral_constant<bool, __cache_hash_code>, _Cond>;
 
+  // When hash codes are not cached
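The bucket convention described in the new comment can be sketched with a bare
singly linked list: every non-empty bucket stores the node before its first
element, and the before-begin sentinel plays that role for the first bucket,
so insertion stays an O(1) forward-list splice. Illustrative only, with the
hash-dependent re-pointing step reduced to a comment:

#include <cstddef>

struct node_base { node_base* next; };

struct table_sketch
{
  node_base   before_begin;  // sentinel "node before" the first element
  node_base** buckets;       // buckets[b] == node before bucket b's head
  std::size_t bucket_count;

  // Insert n at the head of bucket b, forward_list-style.
  void insert_bucket_begin(std::size_t b, node_base* n)
  {
    if (buckets[b])
      {
        // Bucket already non-empty: splice right after its before-node.
        n->next = buckets[b]->next;
        buckets[b]->next = n;
      }
    else
      {
        // Empty bucket: n becomes the global head and the bucket points
        // to the sentinel. The bucket that previously started at
        // before_begin, if any, is re-pointed to n (the real code finds
        // it through the old head's hash code).
        n->next = before_begin.next;
        before_begin.next = n;
        buckets[b] = &before_begin;
      }
  }
};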

Re: libstdc++/51866 too, sorry

2012-01-18 Thread François Dumont

Attached patch applied.

2012-01-18  François Dumont fdum...@gcc.gnu.org
Roman Kononov ro...@binarylife.net

PR libstdc++/51866
* include/bits/hashtable.h (_Hashtable::_M_insert(_Arg&&, false_type)):
Do not keep a reference to a potentially moved instance.
* testsuite/23_containers/unordered_multiset/insert/51866.cc: New.
* testsuite/23_containers/unordered_multimap/insert/51866.cc: New.

François

On 01/18/2012 12:35 AM, Paolo Carlini wrote:

On 01/17/2012 11:31 PM, Jonathan Wakely wrote:

On 17 January 2012 21:14, François Dumont wrote:

Ok to commit ?

OK, thanks.
Great. Please also double check with the submitters of libstdc++/51845 
(ask on the audit trail?) that it's actually fixed by the same patch, 
and in case resolve it as duplicate.


Thanks again,
Paolo.




Index: include/bits/hashtable.h
===
--- include/bits/hashtable.h	(revision 183164)
+++ include/bits/hashtable.h	(working copy)
@@ -1227,7 +1227,7 @@
 	  = this-_M_hash_code(__k);
 	this-_M_store_code(__new_node, __code);
 
-	// Second,  do rehash if necessary.
+	// Second, do rehash if necessary.
 	if (__do_rehash.first)
 		_M_rehash(__do_rehash.second, __saved_state);
 
@@ -1347,21 +1347,24 @@
 	  = _M_rehash_policy._M_need_rehash(_M_bucket_count,
 	_M_element_count, 1);
 
-	const key_type& __k = this->_M_extract()(__v);
-	typename _Hashtable::_Hash_code_type __code = this->_M_hash_code(__k);
+	// First compute the hash code so that we don't do anything if it throws.
+	typename _Hashtable::_Hash_code_type __code
+	  = this->_M_hash_code(this->_M_extract()(__v));
 
 	_Node* __new_node = nullptr;
 	__try
 	  {
-	    // First allocate new node so that we don't rehash if it throws.
+	    // Second allocate new node so that we don't rehash if it throws.
 	    __new_node = _M_allocate_node(std::forward<_Arg>(__v));
 	    this->_M_store_code(__new_node, __code);
 	    if (__do_rehash.first)
 		_M_rehash(__do_rehash.second, __saved_state);
 
-	    // Second, find the node before an equivalent one.
-	    size_type __n = _M_bucket_index(__k, __code);
-	    _BaseNode* __prev = _M_find_before_node(__n, __k, __code);
+	    // Third, find the node before an equivalent one.
+	    size_type __bkt = _M_bucket_index(__new_node);
+	    _BaseNode* __prev
+	      = _M_find_before_node(__bkt, this->_M_extract()(__new_node->_M_v),
+				    __code);
 	if (__prev)
 	  {
 		// Insert after the node before the equivalent one.
@@ -1372,7 +1375,7 @@
 	  // The inserted node has no equivalent in the hashtable. We must
 	  // insert the new node at the beginning of the bucket to preserve
 	  // equivalent elements relative positions.
-	  _M_insert_bucket_begin(__n, __new_node);
+	  _M_insert_bucket_begin(__bkt, __new_node);
 	++_M_element_count;
 	return iterator(__new_node);
 	  }
Index: testsuite/23_containers/unordered_multimap/insert/51866.cc
===
--- testsuite/23_containers/unordered_multimap/insert/51866.cc	(revision 0)
+++ testsuite/23_containers/unordered_multimap/insert/51866.cc	(revision 0)
@@ -0,0 +1,87 @@
+// { dg-options "-std=gnu++0x" }
+//
+// Copyright (C) 2012 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+//
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+#include <unordered_map>
+#include <testsuite_hooks.h>
+
+struct num
+{
+  int value;
+  num(int n) : value(n) {}
+  num(num const&)            = default;
+  num& operator=(num const&) = default;
+  num(num&& o) : value(o.value)
+  { o.value = -1; }
+  num& operator=(num&& o)
+  {
+    if (this != &o)
+      {
+	value = o.value;
+	o.value = -1;
+      }
+    return *this;
+  }
+};
+
+struct num_hash
+{
+  size_t operator()(num const& a) const
+  { return a.value; }
+};
+
+struct num_equal
+{
+  static bool _S_invalid_equal_call;
+  bool operator()(num const& a, num const& b) const
+  {
+    if (a.value == -1 || b.value == -1)
+      _S_invalid_equal_call = true;
+    return a.value == b.value;
+  }
+};
+
+bool num_equal::_S_invalid_equal_call = false;
+
+// libstdc++/51866
+void test01()
+{
+  bool test __attribute__((unused)) = true;
+  
+  std::unordered_multimapnum, int, num_hash, num_equal mmap;
+  mmap.insert

Re: PR 58148 patch

2013-08-29 Thread François Dumont
Indeed; I checked the Standard about const_pointer, so here is another
attempt.


Tested under Linux x86_64.

2013-08-29  François Dumont  fdum...@gcc.gnu.org

PR libstdc++/58148
* include/debug/functions.h (__foreign_iterator_aux4): Use
sequence const_pointer as common type to compare pointers. Add a
fallback overload in case pointers cannot be cast to sequence
const_pointer.
* testsuite/23_containers/vector/modifiers/insert/58148.cc: New.

Ok to commit ?

François


On 08/28/2013 10:50 PM, Paolo Carlini wrote:

Hi,

On 08/28/2013 09:30 PM, François Dumont wrote:

- std::addressof(*(__it._M_get_sequence()->_M_base().begin(
+ (*(__it._M_get_sequence()->_M_base().begin(
I'm not convinced that you can avoid these std::addressof: it seems to
me that the value_type can still have an overloaded operator&.


Paolo.



Index: include/debug/functions.h
===
--- include/debug/functions.h	(revision 201966)
+++ include/debug/functions.h	(working copy)
@@ -36,7 +36,7 @@
 #include <bits/move.h>			// for __addressof and addressof
 #if __cplusplus >= 201103L
 # include <bits/stl_function.h>		  // for less and greater_equal
-# include <type_traits>			  // for common_type
+# include <type_traits>			  // for is_lvalue_reference and __and_
 #endif
 #include <debug/formatter.h>
 
@@ -172,17 +172,16 @@
 }
 
 #if __cplusplus >= 201103L
+  // Default implementation.
  template<typename _Iterator, typename _Sequence,
-	   typename _InputIterator,
-	   typename _PointerType1,
-	   typename _PointerType2>
+	   typename _InputIterator>
    inline bool
    __foreign_iterator_aux4(const _Safe_iterator<_Iterator, _Sequence>& __it,
 			_InputIterator __other,
-			_PointerType1, _PointerType2)
+			typename _Sequence::const_pointer,
+			typename _Sequence::const_pointer)
    {
-      typedef typename std::common_type<_PointerType1,
-					_PointerType2>::type _PointerType;
+      typedef typename _Sequence::const_pointer _PointerType;
      constexpr std::less<_PointerType> __l{};
      constexpr std::greater_equal<_PointerType> __ge{};
 
@@ -192,7 +191,16 @@
 		  std::addressof(*(__it._M_get_sequence()->_M_base().end()
 				   - 1)) + 1));
    - 1)) + 1));
 }
-			  
+
+  // Fallback when the address type cannot be implicitly cast to the
+  // sequence const_pointer.
+  template<typename _Iterator, typename _Sequence,
+	   typename _InputIterator>
+    inline bool
+    __foreign_iterator_aux4(const _Safe_iterator<_Iterator, _Sequence>&,
+			    _InputIterator, ...)
+    { return true; }
+
   template<typename _Iterator, typename _Sequence, typename _InputIterator>
    inline bool
    __foreign_iterator_aux3(const _Safe_iterator<_Iterator, _Sequence>& __it,
@@ -223,7 +231,7 @@
 			std::false_type)
 { return true; }
 #endif
-			   
+
   /** Checks that iterators do not belong to the same sequence. */
  template<typename _Iterator, typename _Sequence, typename _OtherIterator>
 inline bool
Index: testsuite/23_containers/vector/modifiers/insert/58148.cc
===
--- testsuite/23_containers/vector/modifiers/insert/58148.cc	(revision 0)
+++ testsuite/23_containers/vector/modifiers/insert/58148.cc	(revision 0)
@@ -0,0 +1,35 @@
+// Copyright (C) 2013 Free Software Foundation, Inc.
+//
+// This file is part of the GNU ISO C++ Library.  This library is free
+// software; you can redistribute it and/or modify it under the
+// terms of the GNU General Public License as published by the
+// Free Software Foundation; either version 3, or (at your option)
+// any later version.
+
+// This library is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+
+// You should have received a copy of the GNU General Public License along
+// with this library; see the file COPYING3.  If not see
+// <http://www.gnu.org/licenses/>.
+
+// { dg-options "-std=gnu++11" }
+// { dg-do compile }
+
+#include <vector>
+
+void
+test01()
+{
+  std::vector<wchar_t> v;
+  char c = 'a';
+  v.insert(v.begin(), c, c);
+}
+
+int main()
+{
+  test01();
+  return 0;
+}
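The fallback relies on classic overload ranking: the exact overload wins
whenever the argument converts to the sequence's const_pointer, and an
ellipsis overload, which ranks below every real conversion, mops up everything
else. In miniature:

#include <iostream>

struct seq { typedef const int* const_pointer; };

// Preferred: viable whenever the address converts to const_pointer.
bool check(seq::const_pointer) { return false; }

// Last resort: ellipsis ranks below any standard conversion.
bool check(...)                { return true; }

int main()
{
  int i = 0;
  wchar_t w = L'a';
  std::cout << check(&i) << '\n'; // 0: pointer converts, real check runs
  std::cout << check(&w) << '\n'; // 1: no conversion, assume "not foreign"
  return 0;
}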


Re: PR 58191 patch

2013-08-30 Thread François Dumont

Hi

I finally generalized  this method to other debug functions, it is 
more consistent and clean the implementation of the debug checks. For 
4.8 branch I will limit it to just what need to be really fixed.


2013-08-30  François Dumont  fdum...@gcc.gnu.org

PR libstdc++/58191
* include/debug/macros.h (__glibcxx_check_partitioned_lower): Add
__gnu_debug::__base calls on iterators passed to internal debug
check.
(__glibcxx_check_partitioned_lower_pred): Likewise.
(__glibcxx_check_partitioned_upper): Likewise.
(__glibcxx_check_partitioned_upper_pred): Likewise.
(__glibcxx_check_sorted): Likewise.
(__glibcxx_check_sorted_pred): Likewise.
(__glibcxx_check_sorted_set): Likewise.
(__glibcxx_check_sorted_set_pred): Likewise.
* include/debug/functions.h (__check_partitioned_lower):
Remove code to detect safe iterators.
(__check_partitioned_upper): Likewise.
(__check_sorted): Likewise.

François


On 08/27/2013 11:08 PM, Paolo Carlini wrote:

On 08/27/2013 10:57 PM, François Dumont wrote:

Hi

Here is a patch to fix the small PR 58191 regression. I don't
remember why I hadn't used __gnu_debug::__base from the start
rather than trying to reproduce its behavior within the
__check_partitioned* methods. This way we only detect random access
safe iterators to enhance performance, but do not check the iterator
category otherwise; concept checks are there for that reason.


The patch is generated from the 4.8 branch. I still need to reg-test
it, but if that succeeds, is it OK to commit? Just to trunk, or
also to the 4.8 branch?
Thanks. Let's play safe, let's apply it to mainline and let's ask 
people on the audit trail of the bug too to test it. If everything 
goes well let's backport it to the branch in a few weeks.


Thanks again,
Paolo.



Index: include/debug/functions.h
===
--- include/debug/functions.h	(revision 201966)
+++ include/debug/functions.h	(working copy)
@@ -336,15 +336,6 @@
   return true;
 }
 
-  // For performance reason, as the iterator range has been validated, check on
-  // random access safe iterators is done using the base iterator.
-  template<typename _Iterator, typename _Sequence>
-    inline bool
-    __check_sorted_aux(const _Safe_iterator<_Iterator, _Sequence>& __first,
-		       const _Safe_iterator<_Iterator, _Sequence>& __last,
-		       std::random_access_iterator_tag __tag)
-    { return __check_sorted_aux(__first.base(), __last.base(), __tag); }
-
   // Can't check if an input iterator sequence is sorted, because we can't step
   // through the sequence.
  template<typename _InputIterator, typename _Predicate>
@@ -371,17 +362,6 @@
   return true;
 }
 
-  // For performance reason, as the iterator range has been validated, check on
-  // random access safe iterators is done using the base iterator.
-  template<typename _Iterator, typename _Sequence,
-	   typename _Predicate>
-    inline bool
-    __check_sorted_aux(const _Safe_iterator<_Iterator, _Sequence>& __first,
-		       const _Safe_iterator<_Iterator, _Sequence>& __last,
-		       _Predicate __pred,
-		       std::random_access_iterator_tag __tag)
-    { return __check_sorted_aux(__first.base(), __last.base(), __pred, __tag); }
-
   // Determine if a sequence is sorted.
  template<typename _InputIterator>
 inline bool
@@ -470,11 +450,13 @@
   return __check_sorted_set_aux(__first, __last, __pred, _SameType());
}
 
+  // _GLIBCXX_RESOLVE_LIB_DEFECTS
+  // 270. Binary search requirements overly strict
+  // Determine if a sequence is partitioned w.r.t. this element.
  template<typename _ForwardIterator, typename _Tp>
    inline bool
-    __check_partitioned_lower_aux(_ForwardIterator __first,
-				  _ForwardIterator __last, const _Tp& __value,
-				  std::forward_iterator_tag)
+    __check_partitioned_lower(_ForwardIterator __first,
+			      _ForwardIterator __last, const _Tp& __value)
    {
      while (__first != __last && *__first < __value)
 	++__first;
@@ -487,38 +469,11 @@
   return __first == __last;
 }
 
-  // For performance reason, as the iterator range has been validated, check on
-  // random access safe iterators is done using the base iterator.
-  template<typename _Iterator, typename _Sequence, typename _Tp>
-    inline bool
-    __check_partitioned_lower_aux(
-			const _Safe_iterator<_Iterator, _Sequence>& __first,
-			const _Safe_iterator<_Iterator, _Sequence>& __last,
-			const _Tp& __value,
-			std::random_access_iterator_tag __tag)
-    {
-      return __check_partitioned_lower_aux(__first.base(), __last.base(),
-					   __value, __tag);
-    }
-
-  // _GLIBCXX_RESOLVE_LIB_DEFECTS
-  // 270. Binary search requirements overly strict
-  // Determine if a sequence is partitioned w.r.t. this element.
  template<typename _ForwardIterator, typename _Tp>
    inline bool
-    __check_partitioned_lower(_ForwardIterator __first,
+    __check_partitioned_upper(_ForwardIterator __first

Re: PR 58148 patch

2013-08-30 Thread François Dumont

Yes, this was cleaner; tested and committed this way.

2013-08-30  François Dumont  fdum...@gcc.gnu.org

PR libstdc++/58148
* include/debug/functions.h (__foreign_iterator_aux4): Use
sequence const_pointer as common type to compare pointers. Add a
fallback overload in case pointers cannot be cast to sequence
const_pointer.
* testsuite/23_containers/vector/modifiers/insert/58148.cc: New.

François


On 08/30/2013 11:44 AM, Paolo Carlini wrote:

Hi,

On 08/29/2013 09:37 PM, François Dumont wrote:
Indeed, I check the Standard about const_pointer, so here is another 
attempt.


Tested under Linux x86_64.

2013-08-29  François Dumont  fdum...@gcc.gnu.org

PR libstdc++/58148
* include/debug/functions.h (__foreign_iterator_aux4): Use
sequence const_pointer as common type to compare pointers. Add a
fallback overload in case pointers cannot be cast to sequence
const_pointer.
* testsuite/23_containers/vector/modifiers/insert/58148.cc: New.

Ok to commit ?
Seems pretty good to me. I have been thinking: when the non-trivial 
__foreign_iterator_aux4 is selected it actually has available as the 
last two arguments


std::addressof(*(__it._M_get_sequence()->_M_base().begin()))
std::addressof(*__other)

we could as well give the parameters names and avoid passing __other.
Also, I think we can do everything with std::less. I'm attaching below
something I quickly hacked, untested; see if you like it and, in that
case, commit something similar.


Thanks!
Paolo.

//




Re: Remove algo logic duplication Round 3

2013-09-28 Thread François Dumont

On 09/28/2013 02:45 AM, Paolo Carlini wrote:
.. by the way, in the current stl_algo* I'm still seeing many, many
functions which should be inline but are not declared as such: each function
which has a few __glibcxx_requires* at the beginning (which normally
boil down to nothing) and then forwards to a std::__* helper should be
inline.




Fixed with the attached patch, tested under Linux x86_64.

I also took your remark about the open round bracket; I didn't know that
"round bracket" was another name for parenthesis! I also fixed the one
you pointed out to me; I will be more careful next time.


2013-09-28  François Dumont  fdum...@gcc.gnu.org

* include/bits/stl_algo.h (remove_copy, remove_copy_if): Declare
inline.
(rotate_copy, stable_partition, partial_sort_copy): Likewise.
(lower_bound, upper_bound, equal_range, inplace_merge): Likewise.
(includes, next_permutation, prev_permutation): Likewise.
(replace_copy, replace_copy_if, is_sorted_until): Likewise.
(minmax_element, is_permutation, adjacent_find): Likewise.
(count, count_if, search, search_n, merge): Likewise.
(set_intersection, set_difference): Likewise.
(set_symmetric_difference, min_element, max_element): Likewise.
* include/bits/stl_algobase.h (lower_bound): Likewise.
(lexicographical_compare, mismatch): Likewise.

I consider it trivial enough to commit it.

François

Index: include/bits/stl_algo.h
===
--- include/bits/stl_algo.h	(revision 203005)
+++ include/bits/stl_algo.h	(working copy)
@@ -661,7 +661,7 @@
*  are copied is unchanged.
   */
  template<typename _InputIterator, typename _OutputIterator, typename _Tp>
-    _OutputIterator
+    inline _OutputIterator
    remove_copy(_InputIterator __first, _InputIterator __last,
 		_OutputIterator __result, const _Tp& __value)
 {
@@ -694,7 +694,7 @@
   */
  template<typename _InputIterator, typename _OutputIterator,
 	   typename _Predicate>
-_OutputIterator
+inline _OutputIterator
 remove_copy_if(_InputIterator __first, _InputIterator __last,
 		   _OutputIterator __result, _Predicate __pred)
 {
@@ -1414,9 +1414,8 @@
   __glibcxx_requires_valid_range(__first, __middle);
   __glibcxx_requires_valid_range(__middle, __last);
 
-      typedef typename iterator_traits<_ForwardIterator>::iterator_category
-	_IterType;
-      std::__rotate(__first, __middle, __last, _IterType());
+      std::__rotate(__first, __middle, __last,
+		    std::__iterator_category(__first));
 }
 
   /**
@@ -1440,7 +1439,7 @@
*  for each @p n in the range @p [0,__last-__first).
   */
  template<typename _ForwardIterator, typename _OutputIterator>
-_OutputIterator
+inline _OutputIterator
 rotate_copy(_ForwardIterator __first, _ForwardIterator __middle,
 _ForwardIterator __last, _OutputIterator __result)
 {
@@ -1647,7 +1646,7 @@
*  relative ordering after calling @p stable_partition().
   */
  template<typename _ForwardIterator, typename _Predicate>
-_ForwardIterator
+inline _ForwardIterator
 stable_partition(_ForwardIterator __first, _ForwardIterator __last,
 		 _Predicate __pred)
 {
@@ -1733,7 +1732,7 @@
*  The value returned is @p __result_first+N.
   */
  template<typename _InputIterator, typename _RandomAccessIterator>
-_RandomAccessIterator
+inline _RandomAccessIterator
 partial_sort_copy(_InputIterator __first, _InputIterator __last,
 		  _RandomAccessIterator __result_first,
 		  _RandomAccessIterator __result_last)
@@ -1782,7 +1781,7 @@
   */
  template<typename _InputIterator, typename _RandomAccessIterator,
 	   typename _Compare>
-_RandomAccessIterator
+inline _RandomAccessIterator
 partial_sort_copy(_InputIterator __first, _InputIterator __last,
 		  _RandomAccessIterator __result_first,
 		  _RandomAccessIterator __result_last,
@@ -2016,7 +2015,7 @@
*  the function used for the initial sort.
   */
  template<typename _ForwardIterator, typename _Tp, typename _Compare>
-    _ForwardIterator
+    inline _ForwardIterator
    lower_bound(_ForwardIterator __first, _ForwardIterator __last,
 		const _Tp& __val, _Compare __comp)
 {
@@ -2073,7 +2072,7 @@
*  @ingroup binary_search_algorithms
   */
  template<typename _ForwardIterator, typename _Tp>
-    _ForwardIterator
+    inline _ForwardIterator
    upper_bound(_ForwardIterator __first, _ForwardIterator __last,
 		const _Tp& __val)
 {
@@ -2105,7 +2104,7 @@
*  the function used for the initial sort.
   */
  template<typename _ForwardIterator, typename _Tp, typename _Compare>
-    _ForwardIterator
+    inline _ForwardIterator
    upper_bound(_ForwardIterator __first, _ForwardIterator __last,
 		const _Tp& __val, _Compare __comp)
 {
@@ -2179,7 +2178,7 @@
*  but does not actually call those functions.
   */
   template<typename _ForwardIterator, typename _Tp>
-pair<_ForwardIterator, _ForwardIterator>
+inline
+inline
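
The pattern being annotated above is the usual libstdc++ split between a 
thin public entry point and a tag-dispatched worker; such one-line 
wrappers are natural candidates for the inline keyword. A minimal sketch 
of the idea, with simplified names (rotate_sketch and __rotate_impl are 
stand-ins for the real std::rotate and std::__rotate, not the library 
code itself):

#include <iterator>

// Worker selected by iterator-category tag dispatch (stand-in for the
// real worker overloads; random access iterators also match, since
// their tag derives from forward_iterator_tag).
template<typename _ForwardIterator>
  void
  __rotate_impl(_ForwardIterator __first, _ForwardIterator __middle,
		_ForwardIterator __last, std::forward_iterator_tag)
  {
    // ... the actual rotation algorithm would go here ...
  }

// Public entry point: a pure forwarding wrapper, hence marked inline --
// exactly the kind of function the patch annotates.
template<typename _ForwardIterator>
  inline void
  rotate_sketch(_ForwardIterator __first, _ForwardIterator __middle,
		_ForwardIterator __last)
  {
    __rotate_impl(__first, __middle, __last,
		  typename std::iterator_traits<_ForwardIterator>
		    ::iterator_category());
  }

int main()
{
  int arr[] = { 1, 2, 3 };
  rotate_sketch(arr, arr + 1, arr + 3); // dispatches via the tag
}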

Debug functions review

2013-10-22 Thread François Dumont

Hi

Here is a patch to clean up some of the debug functions a little. I got 
rid of __check_singular_aux: playing with __check_singular overloads was 
enough. I also added the missing __check_dereferenceable overload for 
safe local iterators.
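
The mechanism is plain overload resolution: the most specific 
__check_singular overload wins, which makes the __check_singular_aux 
indirection unnecessary. A minimal standalone sketch of the idea, with 
stubbed types (an illustration, not the actual debug-mode code):

#include <iostream>

struct _Safe_iterator_base
{
  bool _M_singular() const { return true; } // stub for illustration
};

// Fallback: an arbitrary iterator cannot be proven singular.
template<typename _Iterator>
  inline bool
  __check_singular(const _Iterator&)
  { return false; }

// More specialized template: non-null pointers are non-singular.
template<typename _Tp>
  inline bool
  __check_singular(const _Tp* __ptr)
  { return __ptr == 0; }

// Non-template overload: preferred over the generic template when the
// argument's static type is _Safe_iterator_base itself (or a reference
// to the base), so such iterators report their own singular state.
inline bool
__check_singular(const _Safe_iterator_base& __x)
{ return __x._M_singular(); }

int main()
{
  int i = 0;
  _Safe_iterator_base it;
  std::cout << __check_singular(&i)      // 0: non-null pointer
	    << __check_singular((int*)0) // 1: null pointer
	    << __check_singular(it)      // 1: the stubbed safe iterator
	    << '\n';
}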


2013-10-22  François Dumont fdum...@gcc.gnu.org

* include/debug/formatter.h (__check_singular): Add const on
iterator reference.
* include/debug/functions.h (__check_singular_aux): Delete.
(__check_singular(const _Ite&)): Add const on iterator reference.
(__check_singular(const _Safe_iterator<_Ite, _Seq>&)): Delete.
(__check_dereferenceable(const _Ite&)): Add const on iterator
reference.
(__check_dereferenceable(const _Safe_local_iterator&)): New.
* include/debug/safe_iterator.h (__check_singular_aux): Delete.
(__check_singular(const _Safe_iterator_base&)): New.

Tested under Linux x86_64 debug mode.

Ok to commit ?

François

Index: include/debug/formatter.h
===
--- include/debug/formatter.h	(revision 203909)
+++ include/debug/formatter.h	(working copy)
@@ -38,7 +38,7 @@
   using std::type_info;
 
   template<typename _Iterator>
-bool __check_singular(_Iterator&);
+bool __check_singular(const _Iterator&);
 
   class _Safe_sequence_base;
 
Index: include/debug/functions.h
===
--- include/debug/functions.h	(revision 203909)
+++ include/debug/functions.h	(working copy)
@@ -45,20 +45,19 @@
   template<typename _Iterator, typename _Sequence>
 class _Safe_iterator;
 
+  template<typename _Iterator, typename _Sequence>
+class _Safe_local_iterator;
+
   template<typename _Sequence>
 struct _Insert_range_from_self_is_safe
 { enum { __value = 0 }; };
 
-  // An arbitrary iterator pointer is not singular.
-  inline bool
-  __check_singular_aux(const void*) { return false; }
-
-  // We may have an iterator that derives from _Safe_iterator_base but isn't
-  // a _Safe_iterator.
+  /** Assume that some arbitrary iterator is not singular, because we
+  can't prove that it is. */
   template<typename _Iterator>
 inline bool
-__check_singular(_Iterator& __x)
-{ return __check_singular_aux(&__x); }
+__check_singular(const _Iterator& __x)
+{ return false; }
 
   /** Non-NULL pointers are nonsingular. */
   templatetypename _Tp
@@ -66,17 +65,11 @@
 __check_singular(const _Tp* __ptr)
 { return __ptr == 0; }
 
-  /** Safe iterators know if they are singular. */
-  template<typename _Iterator, typename _Sequence>
-inline bool
-__check_singular(const _Safe_iterator<_Iterator, _Sequence>& __x)
-{ return __x._M_singular(); }
-
   /** Assume that some arbitrary iterator is dereferenceable, because we
   can't prove that it isn't. */
   template<typename _Iterator>
 inline bool
-__check_dereferenceable(_Iterator&)
+__check_dereferenceable(const _Iterator&)
 { return true; }
 
   /** Non-NULL pointers are dereferenceable. */
@@ -85,12 +78,19 @@
 __check_dereferenceable(const _Tp* __ptr)
 { return __ptr; }
 
-  /** Safe iterators know if they are singular. */
+  /** Safe iterators know if they are dereferenceable. */
   template<typename _Iterator, typename _Sequence>
 inline bool
 __check_dereferenceable(const _Safe_iterator<_Iterator, _Sequence>& __x)
 { return __x._M_dereferenceable(); }
 
+  /** Safe local iterators know if they are dereferenceable. */
+  template<typename _Iterator, typename _Sequence>
+inline bool
+__check_dereferenceable(const _Safe_local_iterator<_Iterator,
+		   _Sequence>& __x)
+{ return __x._M_dereferenceable(); }
+
   /** If the distance between two random access iterators is
*  nonnegative, assume the range is valid.
   */
Index: include/debug/safe_iterator.h
===
--- include/debug/safe_iterator.h	(revision 203909)
+++ include/debug/safe_iterator.h	(working copy)
@@ -56,13 +56,10 @@
  { return __it == __seq->_M_base().begin(); }
 };
 
-  /** Iterators that derive from _Safe_iterator_base but that aren't
-   *  _Safe_iterators can be determined singular or non-singular via
-   *  _Safe_iterator_base.
-   */
-  inline bool 
-  __check_singular_aux(const _Safe_iterator_base* __x)
-  { return __x->_M_singular(); }
+  /** _Safe_iterators can be determined singular or non-singular. */
+  inline bool
+  __check_singular(const _Safe_iterator_base& __x)
+  { return __x._M_singular(); }
 
   /** The precision to which we can calculate the distance between
*  two iterators.



Re: Debug functions review

2013-10-23 Thread François Dumont

On 10/23/2013 12:37 AM, Paolo Carlini wrote:


Hi,

François Dumont frs.dum...@gmail.com wrote:

Hi

 Here is a patch to clean up a little some debug functions. I got
rid of the __check_singular_aux, simply playing with __check_singular
overloads was enough. I also added the missing __check_dereferenceable
for safe local iterators.

This is probably straightforward but I want to be sure I understand your 
previous message + this one: do they mean that in some cases, due to that 
missing 'const', we weren't catching non-dereferenceable iterators? Thus, 
should we also add a testcase?

Paolo

You are right; I am preparing a test case. However, note that 
__check_dereferenceable is simply not used at the moment. I only 
discovered the issue because I started using it for a debug-mode 
evolution.


François
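
For the record, such a test would presumably follow the shape of the 
existing debug-mode negative tests: create a container, obtain a 
non-dereferenceable iterator, and dereference it, expecting the debug 
checks to abort the run. A hypothetical sketch, using a local iterator 
since __check_dereferenceable was just added for safe local iterators 
(the file name and exact form of the committed test may well differ):

// Hypothetical sketch; compile with: g++ -std=c++11 -D_GLIBCXX_DEBUG test.cc
#include <unordered_set>

int main()
{
  std::unordered_set<int> us{ 1, 2, 3 };
  auto lit = us.end(us.bucket(1)); // past-the-end local iterator
  int i = *lit;  // not dereferenceable: debug mode should abort here
  (void) i;
}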


