> > >
> > >       size_type
> > >       _M_check_len(size_type __n, const char* __s) const
> > >       {
> > >         const size_type __size = size();
> > >         const size_type __max_size = max_size();
> > >
> > >         if (__is_same(allocator_type, allocator<_Tp>)
> > >             && __size > __max_size / 2)
> > >
> > This check is wrong for C++17 and older standards, because max_size()
> > changed value in C++20.
> >
> > In C++17 it was PTRDIFF_MAX / sizeof(T) but in C++20 it's SIZE_MAX /
> > sizeof(T). So on 32-bit targets using C++17, it's possible a std::vector
> > could use PTRDIFF_MAX/2 bytes, and then the size <= max_size/2 assumption
> > would not hold.
>
> Can we go with this perhaps only for 64-bit targets?
> I am not sure how completely safe this idea is in the 32-bit world: I guess
> one can have an OS that lets you allocate half of the address space as one
> allocation.
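To make the quoted concern concrete, here is a minimal standalone sketch modeling the 32-bit numbers involved (all constants and names are illustrative, not libstdc++ code; element size 1 is the worst case):

```cpp
#include <cstdint>

// Model a 32-bit target (illustrative stand-ins for the real macros):
constexpr std::uint64_t kPtrdiffMax = (std::uint64_t{1} << 31) - 1; // PTRDIFF_MAX
constexpr std::uint64_t kSizeMax    = (std::uint64_t{1} << 32) - 1; // SIZE_MAX
constexpr std::uint64_t kSizeofT    = 1;                            // one-byte elements

// max_size() as specified before and after C++20:
constexpr std::uint64_t kMaxSize17 = kPtrdiffMax / kSizeofT;
constexpr std::uint64_t kMaxSize20 = kSizeMax / kSizeofT;

// The largest size a vector can actually reach is bounded by
// PTRDIFF_MAX bytes of storage:
constexpr std::uint64_t kLargestSize = kPtrdiffMax / kSizeofT;

// With the C++20 value the patch's assumption holds ...
static_assert(kLargestSize <= kMaxSize20 / 2, "C++20: size <= max_size/2");
// ... but with the C++17 value a full vector violates it.
static_assert(kLargestSize > kMaxSize17 / 2, "C++17: assumption broken");
```

The static_asserts show exactly the scenario described above: under the C++17 definition a vector of PTRDIFF_MAX one-byte elements has size equal to max_size(), not at most max_size()/2.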
Perhaps something like:

  size > std::min ((uint64_t)__max_size, ((uint64_t)1 << 62) / sizeof (_Tp))

is safe for all allocators and on 32-bit, so we won't need the __is_same test
or the 64-bit check?

Honza

> Thanks!
> Honza
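A self-contained sketch of the expression suggested above (the function name and element type `T` are hypothetical; in the real patch this expression would sit inside `_M_check_len`):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical element type standing in for _Tp:
struct T { char data[8]; };

// Sketch of the suggested guard: the grow path needs the careful
// overflow handling only once the current size exceeds both max_size()
// and 2^62 / sizeof(T). The 2^62 constant keeps size * sizeof(T) far
// below any real address space, on 32-bit as well as 64-bit targets,
// so neither the __is_same test nor a target-width test is needed.
bool needs_careful_overflow_check(std::uint64_t size, std::uint64_t max_size)
{
    const std::uint64_t bound =
        std::min(max_size, (std::uint64_t{1} << 62) / sizeof(T));
    return size > bound;
}
```

Because the bound is computed in uint64_t, the comparison cannot overflow even when max_size() is SIZE_MAX / sizeof(T) on a 32-bit target.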