[Bug c/67999] Wrong optimization of pointer comparisons
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #28 from Daniel Micay --- I got jemalloc / Bionic libc (Android) to report errors for malloc and mmap/mremap larger than PTRDIFF_MAX a while ago (along with fixing a missing case for mremap in musl), but glibc needs to be convinced to do the same. It would be a lot easier to convince them if this were officially documented. I think it's perfectly reasonable if it's clearly stated that objects larger than PTRDIFF_MAX are not supported and that the libc implementation needs to deal with it.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 Martin Sebor changed: What|Removed |Added CC||msebor at gcc dot gnu.org --- Comment #27 from Martin Sebor ---
(In reply to Alexander Cherepanov from comment #4)
> Interesting. In particular, this means that the warning "Argument 'size' of
> function malloc has a fishy (possibly negative) value" from valgrind is a
> serious thing. Is this gcc limitation documented somewhere? Is there a
> better reference than this bug?

GCC 7 and later mention it in the documentation of the -Walloc-size-larger-than=n option. It should be documented more prominently. They also diagnose the program in comment #0:

pr67999.c: In function ‘main’:
pr67999.c:7:15: warning: argument 1 value ‘3221225472’ exceeds maximum object size 2147483647 [-Walloc-size-larger-than=]
   char *buf = malloc(len);
               ^~~
In file included from pr67999.c:2:
/usr/include/stdlib.h:427:14: note: in a call to allocation function ‘malloc’ declared here
 extern void *malloc (size_t __size) __THROW __attribute_malloc__ __wur;
              ^~

> Am I right that the C standards do not allow for such a limitation (and
> hence this should not be reported to glibc as a bug) and gcc is not
> standards-compliant in this regard? Or I'm missing something?

I think malloc() should fail for such large requests because objects that big don't satisfy the basic requirements on pointer arithmetic.
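The miscompilation being diagnosed can be sketched with 32-bit modular arithmetic. This is a minimal simulation, not the original testcase: uint32_t stands in for a 32-bit pointer, and the base address 0x40000000 is hypothetical.

```c
#include <stdint.h>

/* Simulation of the comment #0 scenario on a 32-bit target.  uint32_t
   stands in for a 32-bit char pointer.  len = 3221225472 (0xC0000000)
   exceeds PTRDIFF_MAX (2147483647), so buf + len wraps modulo 2^32. */
static int end_below_start(uint32_t buf, uint32_t len) {
    uint32_t end = buf + len;  /* 0x40000000 + 0xC0000000 wraps to 0 */
    return end < buf;          /* 1 at the machine level, yet a compiler
                                  that assumes in-bounds pointer
                                  arithmetic never wraps may fold the
                                  source-level comparison to 0 */
}
```

The warning above fires precisely because any single request above 2147483647 bytes can put a 32-bit program into this territory.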
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 Florian Weimer changed: What|Removed |Added See Also||https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63303 --- Comment #26 from Florian Weimer ---
(In reply to Florian Weimer from comment #12)
> (In reply to Daniel Micay from comment #10)
> > (In reply to Florian Weimer from comment #7)
> > > If this is not a GCC bug and it is the responsibility of allocators not to
> > > produce huge objects, do we also have to make sure that no object crosses
> > > the boundary between 0x7fff_ffff and 0x8000_0000? If pointers are treated
> > > as de-facto signed, this is where signed overflow would occur.
> >
> > No, that's fine.
>
> Is this based on your reading of the standard, the GCC sources, or both?
> (It is unusual to see people making such definite statements about
> middle-end/back-end behavior, that's why I have to ask.)

As I suspected, the claim that this is fine seems to be incorrect; see bug 63303 comment 13.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #25 from Alexander Cherepanov --- On 28.10.2015 03:12, joseph at codesourcery dot com wrote:
>> What is missing in the discussion is the cost of supporting objects with
>> size > PTRDIFF_MAX in gcc. I guess the overhead in compiled code would be
>> minimal while the headache in gcc itself is noticeable. But I could be wrong.
>
> I think the biggest overhead would include that every single pointer
> subtraction, where the target type is (or might be, in the case of VLAs)
> larger than one byte, would either need to have conditional code for what
> order the pointers are in,

E.g. by using __builtin_sub_overflow.

> or would need to extend to a wider type,
> subtract in that type, divide in that type and then reduce to ptrdiff_t;
> it would no longer be possible to do (ptrdiff_t subtraction, then
> EXACT_DIV_EXPR on ptrdiff_t).

Do you expect many such cases? My wild guess would be that most cases of pointer subtraction are for char*, or known (at compile time) to be positive, or both.

> There would be other things, such as
> pointer addition / subtraction of integers needing to handle values
> outside the range of ptrdiff_t,

At first sight, these don't require special treatment and could just wrap. But their handling is probably trickier in the optimizer.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #21 from Alexander Cherepanov --- On 2015-10-21 06:21, danielmicay at gmail dot com wrote:
>> I think several issues are mixed:
>
> A conforming C implementation requires either fixing both the compiler and libc
> functions to handle > PTRDIFF_MAX objects or preventing them from being
> allocated via standard mechanisms (and *ideally* documenting the restriction).

Yes, but:

1) a practical C implementation is not isolated and has to be able to work with external objects (e.g. received from a kernel);

2) a conforming C implementation could be freestanding;

3) the situation is not symmetric. You cannot make a libc able to process huge objects until the compiler is able to do it. IOW, if the compiler supports huge objects, then you have the freedom to choose whether you want your libc to support them or not.

> Since there are many other issues with > PTRDIFF_MAX objects (p - q, read/write
> and similar uses of ssize_t, etc.) and few reasons to allow it, it really makes
> the most sense to tackle it in libc.

Other issues where? In typical user code? Then the compiler/libc shouldn't create objects with size > PTRDIFF_MAX for it. That doesn't mean they shouldn't be able to deal with such objects. E.g., I can imagine a libc where malloc doesn't create such objects by default but has a system-wide, per-user or even compile-time option to enable such a feature. Or you can limit memory with some system feature (ulimit, cgroups) independently from libc (mentioned by Florian Weimer elsewhere). Lack of compiler support more or less makes all these possibilities impossible.

What is missing in the discussion is the cost of supporting objects with size > PTRDIFF_MAX in gcc. I guess the overhead in compiled code would be minimal while the headache in gcc itself is noticeable. But I could be wrong.

>> How buggy? Are there bugs filed? Searching for PTRDIFF_MAX finds Zarro Boogs.
>
> It hasn't been treated as a systemic issue or considered as something related
> to PTRDIFF_MAX. You'd need to search for issues like ssize_t overflow to find
> them. If you really want one specific example, it looks like there's at least
> one case of `end - start` in stdlib/qsort.c among other places (char *mid = lo
> + size * ((hi - lo) / size >> 1);).

Ok, in this specific example, 'end - start' is divided by a value of size_t type and, hence, is converted to an unsigned type, giving the right thing in the end.

> I don't think fixing every usage of `end -
> start` on arbitrarily sized objects is the right way to go, so it's not
> something I'd audit for and file bugs about.

I was going to try to submit this bug but the code turned out to be working fine. Not that the code is valid C, but the situation is a bit trickier than a simple "the function doesn't work for this data". Another example?

>> For this to work a compiler has to support working with huge objects,
>> right?
>
> Well, they might just need a contiguous allocation without the need to actually
> use it all at once. It doesn't necessarily require compiler support, but it
> could easily go wrong without compiler support if the semantics of the
> implementation aren't clearly laid out (and at the moment it's definitely not
> documented).

Exactly! It's a mine field.
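Why the quoted qsort expression survives sizes above PTRDIFF_MAX can be checked in isolation. A sketch with uint32_t standing in for 32-bit char pointers (all addresses hypothetical): the division by the size_t operand forces the subtraction result to unsigned before the signedness of the difference can do any damage.

```c
#include <stdint.h>

/* Mirror of glibc's `lo + size * ((hi - lo) / size >> 1)`.  Even when
   hi - lo exceeds PTRDIFF_MAX, the division by the unsigned `size`
   operates on the unsigned difference, so the midpoint comes out
   right (and stays aligned to `size`). */
static uint32_t qsort_mid(uint32_t lo, uint32_t hi, uint32_t size) {
    return lo + size * ((hi - lo) / size >> 1);
}
```

For lo = 0x10000000, hi = 0xF0000000 and size = 4, the byte difference is 0xE0000000 (well above PTRDIFF_MAX), yet the result is the true midpoint 0x80000000.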
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #22 from joseph at codesourcery dot com --- On Tue, 27 Oct 2015, ch3root at openwall dot com wrote:
> What is missing in the discussion is the cost of supporting objects with
> size > PTRDIFF_MAX in gcc. I guess the overhead in compiled code would be
> minimal while the headache in gcc itself is noticeable. But I could be wrong.

I think the biggest overhead would include that every single pointer subtraction, where the target type is (or might be, in the case of VLAs) larger than one byte, would either need to have conditional code for what order the pointers are in, or would need to extend to a wider type, subtract in that type, divide in that type and then reduce to ptrdiff_t; it would no longer be possible to do (ptrdiff_t subtraction, then EXACT_DIV_EXPR on ptrdiff_t). There would be other things, such as pointer addition / subtraction of integers needing to handle values outside the range of ptrdiff_t, but it's pointer subtraction that I expect would have the main runtime overhead. (On strict alignment targets, for naturally-aligned power-of-two element sizes, you could do logical shifts on the pointers before doing a signed subtraction, so that case needn't be quite as inefficient as the general case.)
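The overhead Joseph describes can be illustrated with uint32_t standing in for 32-bit pointers (a sketch assuming 4-byte elements): the cheap lowering wraps once the byte difference exceeds PTRDIFF_MAX, and the correct result needs a wider type.

```c
#include <stdint.h>

/* Current-style lowering of p - q for int* on a 32-bit target:
   ptrdiff_t byte subtraction followed by exact division
   (EXACT_DIV_EXPR).  Wraps to a negative count when the byte
   difference exceeds PTRDIFF_MAX. */
static int32_t elem_diff_narrow(uint32_t p, uint32_t q) {
    int32_t bytes = (int32_t)(p - q);
    return bytes / 4;
}

/* What supporting huge objects would require: extend to a wider type,
   subtract and divide there, then reduce back to ptrdiff_t. */
static int32_t elem_diff_wide(uint32_t p, uint32_t q) {
    int64_t bytes = (int64_t)p - (int64_t)q;
    return (int32_t)(bytes / 4);
}
```

With a byte difference of 0x90000000 the narrow version reports a negative element count while the wide version reports the true 603979776 elements.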
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #23 from Rich Felker --- I think you can always do the right-shift first. Pointer subtraction is undefined unless both pointers point to elements of the same array, and the addresses of elements of an array will inherently be congruent modulo the size of an element.
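Rich's shift-first observation, under the same uint32_t-for-pointer convention (a sketch assuming 4-byte elements): because both pointers into the same int array are congruent modulo 4, shifting each one before the signed subtraction loses nothing and avoids the wider type.

```c
#include <stdint.h>

/* Shift each pointer right by log2(element size) first, then do a
   plain signed subtraction.  Valid because elements of one array are
   congruent modulo the element size, so no information is discarded
   by the per-pointer shifts. */
static int32_t elem_diff_shift(uint32_t p, uint32_t q) {
    return (int32_t)((p >> 2) - (q >> 2));
}
```

This gives the correct 603979776 for a 0x90000000-byte difference in either direction, where the narrow subtract-then-divide lowering would wrap.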
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #24 from joseph at codesourcery dot com --- I suppose that while EXACT_DIV_EXPR on the individual halves of a subtraction wouldn't be correct, it would also be correct (given any constant element size) to do the (right shift, multiply by reciprocal of odd part) combination, that works for exact division without needing a multiplication high part, on each pointer, because the subtraction would cancel out the matching errors from the two pointers not being divisible by the size of the type.
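For a non-power-of-two element size the same per-pointer idea can be sketched like this (an assumed example with element size 12 = 4 * 3, uint32_t standing in for 32-bit pointers): shift out the power-of-two factor, multiply by the modular inverse of the odd part, and let the per-pointer errors cancel in the subtraction.

```c
#include <stdint.h>

/* Exact division by 12 as (right shift by 2, multiply by the inverse
   of 3 modulo 2^32): 3 * 0xAAAAAAAB == 1 (mod 2^32). */
#define INV3 0xAAAAAAABu

static uint32_t div12ish(uint32_t x) {
    /* Wrong on its own when x is not a multiple of 12 ... */
    return (x >> 2) * INV3;
}

static uint32_t elem_diff_12(uint32_t p, uint32_t q) {
    /* ... but two pointers into the same array of 12-byte elements
       produce matching errors, which cancel in the subtraction. */
    return div12ish(p) - div12ish(q);
}
```

Note the individual pointers (100 and 160 below) are not multiples of 12; only their difference is, which is exactly the situation comment #24 describes.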
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #18 from Alexander Cherepanov --- I guess nobody doubts that the current situation in gcc+glibc (and clang+glibc) should be fixed, as valid programs are miscompiled. And it's easy to imagine the security consequences of this when buffers have sizes controlled by attackers.

The problem is not limited to comparisons of the form 'p + a < p'; all comparisons of the form 'p + a < p + b' are probably miscompiled. And subtraction of pointers is problematic too: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=45779

The issue affects not only all 32-bit platforms (i386, x32, arm, etc.) but also 16-bit ones, right? Or are all of them dead? Recently even an 18-bit one was mentioned...

Whether gcc violates C11 or not is not clear. The standard mostly speaks about compiler+library. OTOH gcc can be used as a freestanding implementation, and even in a hosted environment there could, in practice, be external objects that come from neither the compiler nor libc (AIUI). Hence IMHO this limitation should at least be documented in a user-visible place. (The same for libcs: if they cannot deal with huge objects it should be documented even if they cannot create them.)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #19 from Alexander Cherepanov --- (In reply to Daniel Micay from comment #5)
> Objects larger than PTRDIFF_MAX are forbidden with musl (malloc, mmap and
> friends report ENOMEM and it's explicitly undefined to create them in
> another way and use them with libc),

Is it documented?

> Since glibc's implementations of standard library functions don't handle
> objects larger than PTRDIFF_MAX, this can definitely be considered a
> buggy area in glibc.

How buggy? Are there bugs filed? Searching for PTRDIFF_MAX finds Zarro Boogs.

> FWIW, Clang also treats `ptr + size` with `size > PTRDIFF_MAX` as undefined
> for standard C pointer arithmetic because the underlying getelementptr
> instruction in LLVM is inherently a signed arithmetic operation. Clang marks

This terminology is quite misleading. It's neither signed nor unsigned. The pointer operand is unsigned while the offsets are signed.

> standard C pointer arithmetic operations as "inbounds", which among other
> things makes use of the value returned from wrapping into undefined
> behavior.

Ok, I've read the doc you linked to in another comment (thanks for that!). "inbounds" means that the base pointer and all sums of it with offsets should be in bounds of an actual object. And additions are done "with infinitely precise signed arithmetic". This is the same restriction as in C11, which is satisfied in the provided example. (If one large number is a problem, replace "buf + len" by "(int *)buf + len / sizeof(int)".) So it should work in Clang?

(In reply to Daniel Micay from comment #13)
> They'd still be able to make a mmap system call via syscall(...) to avoid
> the check, so it seems like it's mostly an ABI compatibility issue.

For this to work a compiler has to support working with huge objects, right?

I think several issues are mixed:
- support in a compiler for working with huge objects;
- support in a libc for creation of huge objects (via malloc etc.);
- support in a libc for processing of huge objects.

All of them could be tackled separately. Not all combinations are sensible, though. :-)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #20 from Daniel Micay ---
> I think several issues are mixed:

A conforming C implementation requires either fixing both the compiler and libc functions to handle > PTRDIFF_MAX objects or preventing them from being allocated via standard mechanisms (and *ideally* documenting the restriction). Since there are many other issues with > PTRDIFF_MAX objects (p - q, read/write and similar uses of ssize_t, etc.) and few reasons to allow it, it really makes the most sense to tackle it in libc.

> Is it documented?

I don't think musl has documentation like that in general.

> This terminology is quite misleading. It's neither signed nor unsigned.
> The pointer operand is unsigned while the offsets are signed.

Agreed.

> How buggy? Are there bugs filed? Searching for PTRDIFF_MAX finds Zarro Boogs.

It hasn't been treated as a systemic issue or considered as something related to PTRDIFF_MAX. You'd need to search for issues like ssize_t overflow to find them. If you really want one specific example, it looks like there's at least one case of `end - start` in stdlib/qsort.c among other places (char *mid = lo + size * ((hi - lo) / size >> 1);). I don't think fixing every usage of `end - start` on arbitrarily sized objects is the right way to go, so it's not something I'd audit for and file bugs about. I did do some testing a while ago by passing > PTRDIFF_MAX size objects to the standard libc functions taking pointer and size parameters, so I'm aware that it's problematic.

> This is the same restriction as in C11, which is satisfied in the provided
> example. (If one large number is a problem, replace "buf + len" by "(int *)buf
> + len / sizeof(int)".) So it should work in Clang?

The length is cast to a signed integer of the same size, and that negative signed offset is given as an argument to the inbounds GEP instruction, which is undefined since it wraps.

> For this to work a compiler has to support working with huge objects,
> right?

Well, they might just need a contiguous allocation without the need to actually use it all at once. It doesn't necessarily require compiler support, but it could easily go wrong without compiler support if the semantics of the implementation aren't clearly laid out (and at the moment it's definitely not documented).
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #11 from Florian Weimer --- (In reply to Daniel Micay from comment #9)
> I don't think there's much of a use case for allocating a single >2G
> allocation in a 3G or 4G address space.

The main OpenJDK heap (well, it was Java back then) has to be one contiguous memory mapping, and there was significant demand to get past 2 GiB. For users who are tied to 32-bit VMs due to JNI and other considerations, this demand probably still exists.

Oracle database apparently tried to use large shared-memory mappings as well. If I read the old documentation correctly, it actually had to be in one piece, too. (The documentation talks about changing the SHMMAX parameter to a large value, not just SHMALL.)

PostgreSQL definitely needs a single large shared-memory mapping, but its buffering behavior is significantly different, so I think there was less demand to create these huge mappings.

> It has a high chance of failure
> simply due to virtual memory fragmentation, especially since the kernel's
> mmap allocation algorithm is so naive (keeps going downwards and ignores
> holes until it runs out, rather than using first-best-fit).

The mappings are created early during process life-time, and if I recall correctly, this requirement limited ASLR for 32-bit processes quite significantly.

> Was the demand for a larger address space or was it really for the ability
> to allocate all that memory in one go?

In the Java case, it was for a contiguous memory mapping larger than 2 GiB. I'm less sure about the Oracle use case.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #14 from Daniel Micay --- (In reply to Florian Weimer from comment #12)
> (In reply to Daniel Micay from comment #10)
> > (In reply to Florian Weimer from comment #7)
> > > If this is not a GCC bug and it is the responsibility of allocators not to
> > > produce huge objects, do we also have to make sure that no object crosses
> > > the boundary between 0x7fff_ffff and 0x8000_0000? If pointers are treated
> > > as de-facto signed, this is where signed overflow would occur.
> >
> > No, that's fine.
>
> Is this based on your reading of the standard, the GCC sources, or both?
> (It is unusual to see people making such definite statements about
> middle-end/back-end behavior, that's why I have to ask.)

It's not the kind of thing the standard is concerned with: it'd be perfectly valid for an implementation to forbid that, as long as it was enforced throughout the implementation. It would just be crazy to have a requirement like that. As far as I know, the use of signed offsets for pointer arithmetic in GCC is just a design decision with known consequences. That's definitely the case in LLVM, since it's very explicitly documented as being a signed offset with undefined overflow.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #15 from Daniel Micay --- i.e. AFAIK the offsets are intended to be treated as signed but treating pointers as signed would be a serious bug rather than a design choice
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 Florian Weimer changed: What|Removed |Added CC||fw at gcc dot gnu.org --- Comment #7 from Florian Weimer --- If this is not a GCC bug and it is the responsibility of allocators not to produce huge objects, do we also have to make sure that no object crosses the boundary between 0x7fff_ffff and 0x8000_0000? If pointers are treated as de-facto signed, this is where signed overflow would occur.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #12 from Florian Weimer --- (In reply to Daniel Micay from comment #10)
> (In reply to Florian Weimer from comment #7)
> > If this is not a GCC bug and it is the responsibility of allocators not to
> > produce huge objects, do we also have to make sure that no object crosses
> > the boundary between 0x7fff_ffff and 0x8000_0000? If pointers are treated
> > as de-facto signed, this is where signed overflow would occur.
>
> No, that's fine.

Is this based on your reading of the standard, the GCC sources, or both? (It is unusual to see people making such definite statements about middle-end/back-end behavior, that's why I have to ask.)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #8 from Florian Weimer --- (In reply to Alexander Cherepanov from comment #4)
> Am I right that the C standards do not allow for such a limitation (and
> hence this should not be reported to glibc as a bug) and gcc is not
> standards-compliant in this regard? Or I'm missing something?

The standard explicitly acknowledges the possibility of arrays that have more than PTRDIFF_MAX elements (it says that the difference of two pointers within the same array is not necessarily representable in ptrdiff_t).

I'm hesitant to put artificial limits into glibc because in the past, there was significant demand for huge mappings in 32-bit programs (to the degree that Red Hat even shipped special kernels for this purpose).
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 Jakub Jelinek changed: What|Removed |Added CC||jakub at gcc dot gnu.org --- Comment #17 from Jakub Jelinek --- (In reply to Richard Biener from comment #16) > GCC assumes objects will not wrap around zero only (well, it assumes objects > cannot live at address zero but it also assumes that the > pointer-to-one-element-after isn't zero or wraps around zero). Well, we still have a bug mentioned in https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63303#c3 for pointer subtraction, though perhaps it is hard to construct a testcase where it would make a difference except for the sanitizers.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #9 from Daniel Micay --- (In reply to Florian Weimer from comment #8)
> (In reply to Alexander Cherepanov from comment #4)
> > Am I right that the C standards do not allow for such a limitation (and
> > hence this should not be reported to glibc as a bug) and gcc is not
> > standards-compliant in this regard? Or I'm missing something?
>
> The standard explicitly acknowledges the possibility of arrays that have
> more than PTRDIFF_MAX elements (it says that the difference of two pointers
> within the same array is not necessarily representable in ptrdiff_t).
>
> I'm hesitant to put artificial limits into glibc because in the past,
> there was significant demand for huge mappings in 32-bit programs (to the
> degree that Red Hat even shipped special kernels for this purpose).

I don't think there's much of a use case for allocating a single >2G allocation in a 3G or 4G address space. It has a high chance of failure simply due to virtual memory fragmentation, especially since the kernel's mmap allocation algorithm is so naive (keeps going downwards and ignores holes until it runs out, rather than using first-best-fit). Was the demand for a larger address space or was it really for the ability to allocate all that memory in one go?
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #10 from Daniel Micay --- (In reply to Florian Weimer from comment #7)
> If this is not a GCC bug and it is the responsibility of allocators not to
> produce huge objects, do we also have to make sure that no object crosses
> the boundary between 0x7fff_ffff and 0x8000_0000? If pointers are treated
> as de-facto signed, this is where signed overflow would occur.

No, that's fine. It's the offsets that are treated as ptrdiff_t. Clang/LLVM handle it the same way. There's a very important assumption for optimizations that pointer arithmetic cannot wrap (per the standard) and all offsets are treated as signed integers. AFAIK, `ptr + size` is equivalent to `ptr + (ptrdiff_t)size` in both Clang and GCC. There's documentation on how this is handled in LLVM IR here, specifically the inbounds marker which is added to all standard C pointer arithmetic:

http://llvm.org/docs/LangRef.html#getelementptr-instruction

I expect GCC works very similarly, but I'm not familiar with the GCC internals. It's not really a compiler bug because the standard allows object size limits, but the compiler and standard C library both need to be aware of those limits and enforce them if they exist. So it's a bug in GCC + glibc or Clang + glibc, not either of them alone. I think dealing with it in libc is the only full solution, though, due to issues like `p - q` and the usage of ssize_t for sizes in functions like read/write.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #13 from Daniel Micay --- They'd still be able to make a mmap system call via syscall(...) to avoid the check, so it seems like it's mostly an ABI compatibility issue. Of course, they'd have to be very careful to avoid all of the caveats of a mapping that large too. It could be dealt with as it usually is by making new symbols with the checks to avoid changing anything for old binaries. And yeah, the vanilla kernel ASLR is incredibly weak. It only uses up to 1MiB of virtual memory (8-bit ASLR) rather than 256MiB (16-bit) like PaX.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #16 from Richard Biener --- GCC assumes objects will not wrap around zero only (well, it assumes objects cannot live at address zero but it also assumes that the pointer-to-one-element-after isn't zero or wraps around zero).
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 Daniel Micay changed: What|Removed |Added CC||danielmicay at gmail dot com --- Comment #5 from Daniel Micay --- An implementation can have object size limits, but I think it's incorrect if those limits are not enforced for all standard ways of allocating objects. Objects larger than PTRDIFF_MAX are forbidden with musl (malloc, mmap and friends report ENOMEM and it's explicitly undefined to create them in another way and use them with libc), and it would make a lot of sense for glibc to do the same thing. I recently landed the same feature in Android's Bionic libc for mmap.

Since glibc's implementations of standard library functions don't handle objects larger than PTRDIFF_MAX, this can definitely be considered a buggy area in glibc. The typical issue is `end - start`, but compilers considering addition to be undefined in this case isn't surprising either.

FWIW, Clang also treats `ptr + size` with `size > PTRDIFF_MAX` as undefined for standard C pointer arithmetic because the underlying getelementptr instruction in LLVM is inherently a signed arithmetic operation. Clang marks standard C pointer arithmetic operations as "inbounds", which among other things makes use of the value returned from wrapping into undefined behavior. Last time I checked, the non-standard GNU C `void *` arithmetic doesn't get tagged as "inbounds" by Clang, so wrapping is well-defined for that.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 Rich Felker changed: What|Removed |Added CC||bugdal at aerifal dot cx --- Comment #6 from Rich Felker --- IMO there's nothing wrong with what GCC's doing, but library implementations that allow allocations > PTRDIFF_MAX are buggy. musl has always gotten this right and Bionic has fixed it recently; see https://android-review.googlesource.com/#/c/170800/ Somebody should probably file a bug with glibc if there's not one already, but clearly they're aware of this issue (Alexander Cherepanov pointed this out to me): https://sourceware.org/ml/libc-alpha/2011-12/msg00066.html The key part is: "I don't think there's anything that can sensibly be done in the compiler about this issue; I think the only way to avoid security problems there is for malloc and other allocation functions to refuse to allocate objects using half or more of the address space..."
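The allocator-side fix the linked message describes can be sketched as a wrapper (checked_malloc is a hypothetical name for illustration, not a real libc interface; musl's actual check lives inside malloc itself):

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Refuse any request larger than PTRDIFF_MAX, as musl does, so no
   standard allocation can produce an object whose internal pointer
   differences overflow ptrdiff_t. */
static void *checked_malloc(size_t n) {
    if (n > PTRDIFF_MAX) {
        errno = ENOMEM;
        return NULL;
    }
    return malloc(n);
}
```

mmap, calloc, realloc, aligned_alloc and friends would need the same guard, or oversized objects can still sneak past the limit.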
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #2 from Andrew Pinski --- ssize_t is a signed integer and in the case of x86, it is 32 bits, which means exactly what Marc wrote.
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #1 from Marc Glisse --- I don't think gcc supports using more than half the address space in a single allocation. At least I've seen reports of bugs in the past, and I seem to remember people not being very concerned...
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #3 from Andreas Schwab --- Did you mean ptrdiff_t?
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=67999 --- Comment #4 from Alexander Cherepanov --- Interesting. In particular, this means that the warning "Argument 'size' of function malloc has a fishy (possibly negative) value" from valgrind is a serious thing. Is this gcc limitation documented somewhere? Is there a better reference than this bug? Am I right that the C standards do not allow for such a limitation (and hence this should not be reported to glibc as a bug) and gcc is not standards-compliant in this regard? Or I'm missing something?