https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999

--- Comment #14 from Daniel Micay <danielmicay at gmail dot com> ---
(In reply to Florian Weimer from comment #12)
> (In reply to Daniel Micay from comment #10)
> > (In reply to Florian Weimer from comment #7)
> > > If this is not a GCC bug and it is the responsibility of allocators not to
> > > produce huge objects, do we also have to make sure that no object crosses
> > > the boundary between 0x7fff_ffff and 0x8000_0000?  If pointers are treated
> > > as de-facto signed, this is where signed overflow would occur.
> > 
> > No, that's fine.
> 
> Is this based on your reading of the standard, the GCC sources, or both? 
> (It is unusual to see people making such definite statements about
> middle-end/back-end behavior, that's why I have to ask.)

It's not the kind of thing the standard is concerned with: it would be
perfectly valid for an implementation to forbid objects from crossing that
boundary, as long as the rule was enforced consistently throughout the
implementation. A requirement like that would just be crazy, though. As far as
I know, the use of signed offsets for pointer arithmetic in GCC is simply a
design decision with known consequences. That's definitely the case in LLVM,
where the offset is explicitly documented as signed, with undefined behavior
on overflow.
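
To illustrate the consequence (a minimal, hypothetical sketch, assuming a
32-bit target whose allocator actually hands out a block larger than
PTRDIFF_MAX; a conforming malloc is free to refuse such a request):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* On a 32-bit target, PTRDIFF_MAX is 0x7fffffff, so this
           length does not fit in a signed offset. */
        size_t len = 0x90000000u;
        char *p = malloc(len);   /* may legitimately fail/refuse */
        if (!p)
            return 1;

        /* One-past-the-end pointer: the byte offset here exceeds
           PTRDIFF_MAX, so the signed offsets GCC and LLVM use
           internally can overflow, which is undefined behavior. */
        char *q = p + len;

        /* q - p is computed as a signed ptrdiff_t; with this length
           the true distance is unrepresentable, so the compiler is
           entitled to "optimize" comparisons and differences in
           surprising ways. */
        printf("%td\n", q - p);

        free(p);
        return 0;
    }

By contrast, an object that merely straddles the 0x7fff_ffff / 0x8000_0000
address boundary is fine, because the signedness applies to the relative
offset being added, not to the absolute address.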
