Hi Jonathan,

Yep, unfortunately #pragma GCC poison is far too restrictive: it
doesn't check whether a use is a call to that particular banned
function, it bans any and all use of that identifier in the code
altogether. Not only does this mean you can't use overloads of a
banned function, you can't use the identifier whatsoever, not even
to name your own function. This causes trouble; for example, this
codebase contains the following snippet:

void* outerp = os::malloc(total_sz, mtInternal);

Along with the declarations of os::malloc:

// General allocation (must be MT-safe)
static void* malloc  (size_t size, MemTag mem_tag, const NativeCallStack& stack);
static void* malloc  (size_t size, MemTag mem_tag);

This is not the standard library malloc, of course, but had I put
#pragma GCC poison malloc into a general header, these lines would
have been flagged immediately and compilation would have failed with
an error. It's also prohibitively difficult to unpoison an identifier
poisoned this way, making the pragma unsuitable for this codebase,
unfortunately.
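
To make the failure mode concrete, here is a minimal sketch (the
names and signatures are illustrative, not taken from the real
codebase); it deliberately fails to compile:

#include <cstddef>

#pragma GCC poison malloc

namespace os {
    // error: attempt to use poisoned "malloc", even though this
    // merely declares our own function and never touches ::malloc
    void* malloc(std::size_t size);
}

void* allocate(std::size_t size) {
    return os::malloc(size); // error: same diagnostic again
}

The only workaround I'm aware of is the documented one: a macro
defined before the poison may still expand to the poisoned
identifier. That works, but it's far too clumsy to serve as a
general "permitted" switch:

#include <cstdlib>

// Must be defined *before* the poison takes effect
#define RAW_MALLOC(sz) malloc(sz)

#pragma GCC poison malloc

void* permitted_allocation(std::size_t size) {
    return RAW_MALLOC(size); // OK: expansion of a pre-poison macro
}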

best regards,
Julian

On Mon, Apr 14, 2025 at 6:32 PM Jonathan Wakely <jwakely....@gmail.com> wrote:
>
> On Mon, 14 Apr 2025 at 10:11, Julian Waters via Gcc <gcc@gcc.gnu.org> wrote:
> >
> > Hi all,
> >
> > A codebase I'm working with has decided that poisoning certain
> > standard library functions is necessary, as it explicitly avoids
> > those functions unless absolutely necessary, for its own reasons
> > (this was not my decision to make). As such, we've been looking
> > into ways to implement that.
> >
> > The problem is that there's really no way at all to forbid standard
> > library functions. The crucial requirement is that the mechanism
> > must have a disable switch for when a callsite genuinely must call
> > a forbidden function, which complicates the whole process.
> >
> > It would be nice to have compiler support to help with this. Something like:
> >
> > #pragma GCC forbidden void *malloc(size_t) // Forbids malloc from being called anywhere
>
> Have you looked at
> https://gcc.gnu.org/onlinedocs/cpp/Pragmas.html#index-_0023pragma-GCC-poison
> ?
>
>
> > #pragma GCC permitted push
> > #pragma GCC permitted void *malloc(size_t)
> > void *ptr = malloc(1); // This callsite must use malloc, bypassing the usual restriction
> > #pragma GCC permitted pop
> >
> > However, I'm guessing this would be a bit complex to implement.
> > There are other approaches I have tried as well, such as:
> >
> > #include <cstdlib>
> >
> > namespace permitted {
> >     using ::malloc;
> > }
> >
> > inline namespace forbidden {
> >     [[deprecated("use os::malloc")]] void *malloc(size_t) throw() = delete;
> > }
> >
> > int main() {
> >     void *ptr = malloc(1); // Should raise an error; must call permitted::malloc(1) to avoid it
> >     free(ptr);
> > }
> >
> > The problem with the above is that the error isn't from the function
> > being deleted or marked as deprecated; it's an ambiguity error that
> > is confusing and doesn't tell the developer why the error is actually
> > happening (because the function has been forbidden). Additionally,
> > there is no way to disable this system in third-party headers
> > included from the testing framework, making it unusable. Given this,
> > could I request some sort of compiler-specific "shadow" attribute
> > that can be applied to a namespace? With this attribute, the symbols
> > in the namespace would take precedence over the symbols of its
> > parent namespace (including the global namespace, if its parent is
> > the global namespace) whenever they are injected into the parent,
> > either because the namespace is inline or unnamed, or because of a
> > using namespace x; directive. This contrasts with the standard
> > behaviour of raising an ambiguity error when trying to resolve which
> > symbol to use; instead, the symbol from the namespace marked shadow
> > would always be given precedence.
> >
> > #include <cstdlib>
> >
> > namespace permitted {
> >     using ::malloc;
> > }
> >
> > inline namespace [[gnu::shadow]] forbidden {
> >     [[deprecated("use os::malloc")]] void *malloc(size_t) throw() = delete;
> > }
> >
> > int main() {
> >     void *ptr = malloc(1); // Correct error shown, forbidden::malloc chosen over standard library
> >     free(ptr);
> > }
> >
> > Ideally, shadow could have another effect on using declarations.
> > Normally, trying to use a using declaration to choose between two
> > conflicting symbols will just fail:
> >
> > #include <cstdlib>
> >
> > namespace permitted {
> >     using ::malloc;
> > }
> >
> > inline namespace forbidden {
> >     [[deprecated("use os::malloc")]] void *malloc(size_t) throw() = delete;
> > }
> >
> > using forbidden::malloc;
> >
> > int main() {
> >     void *ptr = malloc(1);
> >     free(ptr);
> > }
> >
> > <source>:11:18: error: 'void* forbidden::malloc(size_t)' conflicts with a previous declaration
> >    11 | using forbidden::malloc;
> >       |                  ^~~~~~
> >
> > Perhaps, if one of those symbols comes from a namespace marked as
> > shadow, the using declaration could behave differently, choosing
> > which declaration is used rather than just bringing another
> > declaration into scope:
> >
> > #include <cstdlib>
> >
> > namespace permitted {
> >     using ::malloc;
> > }
> >
> > inline namespace [[gnu::shadow]] forbidden {
> >     [[deprecated("use os::malloc")]] void *malloc(size_t) throw() = delete;
> > }
> >
> > using forbidden::malloc; // From now on, any call to malloc goes to forbidden::malloc
> > using ::malloc; // From now on, any call to malloc goes to ::malloc
> > using permitted::malloc; // From now on, any call to malloc goes to permitted::malloc
> >
> > Or alternatively, is there a better way to do this instead of
> > adding custom support to the compiler? The gcc static analyzer
> > doesn't seem to support C++ yet, nor does it seem to have a
> > configurable check for whether you are calling a given function;
> > neither does VC, for that matter. Maybe this could be an analyzer
> > request instead of a compiler feature request. Either way, I'm open
> > to suggestions.
> >
> > best regards,
> > Julian
