On Tue, Jun 20, 2023 at 09:50:25AM +0200, Jan Hubicka wrote:
> > >
> > >       size_type
> > >       _M_check_len(size_type __n, const char* __s) const
> > >       {
> > >         const size_type __size = size();
> > >         const size_type __max_size = max_size();
> > >
> > >         if (__is_same(allocator_type, allocator<_Tp>)
> > >               && __size > __max_size / 2)
> > >
> > 
> > This check is wrong for C++17 and older standards, because max_size()
> > changed value in C++20.
> > 
> > In C++17 it was PTRDIFF_MAX / sizeof(T) but in C++20 it's SIZE_MAX /
> > sizeof(T). So on 32-bit targets using C++17, it's possible a std::vector
> > could use more than PTRDIFF_MAX/2 bytes, and then the size <= max_size/2
> > assumption would not hold.
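
(For reference, a small standalone program, not from the thread, that prints
the quantities being compared above; building it with -std=c++17 and then
-std=c++20 on the target in question shows which max_size() value the
installed libstdc++ actually uses.)

    // Build and run twice, e.g.:
    //   g++ -std=c++17 maxsize.cc && ./a.out
    //   g++ -std=c++20 maxsize.cc && ./a.out
    #include <cstddef>   // std::size_t
    #include <cstdint>   // PTRDIFF_MAX, SIZE_MAX
    #include <cstdio>
    #include <vector>

    int main()
    {
      std::vector<char> v;
      std::printf("__cplusplus              = %ld\n", __cplusplus);
      std::printf("vector<char>::max_size() = %zu\n", v.max_size());
      std::printf("PTRDIFF_MAX              = %zu\n", (std::size_t) PTRDIFF_MAX);
      std::printf("SIZE_MAX                 = %zu\n", (std::size_t) SIZE_MAX);
    }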
> 
> Can we go with this perhaps only for 64-bit targets?
> I am not sure how safe this idea really is in the 32-bit world: I guess
> one can have an OS that lets you allocate half of the address space as one
> allocation.

Is it safe even on 64-bit targets?  I mean, doesn't, say, PowerPC already allow
the full 64-bit virtual address space?  The assumption that one can't allocate
more than half of the virtual address space is true right now at
least on x86-64, aarch64 and others, but isn't that something that can
change with newer versions of CPUs, without the need to recompile
applications (just add another level or two of page tables)?
By being hardcoded in the libstdc++ headers, those assumptions end up baked
into lots of applications.

        Jakub
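
(For concreteness, a sketch, not the patch from this thread, of the kind of
target gating Jan asks about: the __SIZE_MAX__ test, the assumption that the
hint ends in __builtin_unreachable(), and the surrounding _M_check_len body
are reconstructed here for illustration.  Jakub's point still applies: once
such a hint is inlined from the header, the address-space assumption is
frozen into every application built against it.)

    // Hypothetical sketch only: a version of _M_check_len (bits/stl_vector.h)
    // that emits the "size() never exceeds max_size()/2" hint only when
    // size_t is wider than 32 bits, i.e. on targets where the patch assumes
    // a single allocation cannot span more than half of the address space.
    size_type
    _M_check_len(size_type __n, const char* __s) const
    {
      const size_type __size = size();
      const size_type __max_size = max_size();

    #if __SIZE_MAX__ > 0xffffffffu
      // Assumption under discussion: on 64-bit targets this cannot happen.
      if (__is_same(allocator_type, allocator<_Tp>)
            && __size > __max_size / 2)
        __builtin_unreachable();
    #endif

      if (__max_size - __size < __n)
        __throw_length_error(__N(__s));

      // Grow by at least doubling, clamped to max_size().
      const size_type __len = __size + (std::max)(__size, __n);
      return (__len < __size || __len > __max_size) ? __max_size : __len;
    }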
