Re: [RFC] GCC Security policy

2024-02-13 Thread Siddhesh Poyarekar

On 2024-02-12 10:00, Richard Biener wrote:

GCC driver support would then be limited to identifying the inputs and the outputs.


We already have -MM to generate a list of non-system dependencies, so 
gcc should be able to pass on inputs to the tool, which could then map 
those files (and the system headers directory) into the sandbox before 
invocation.  The output file could perhaps be required to be new, 
i.e. fail if the target file already exists.
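
For illustration, here is a minimal sketch (hypothetical, not an 
existing tool) of the enumeration half of such a wrapper: it asks the 
driver for a translation unit's non-system dependencies via -MM and 
turns the make-rule output into the list of paths that a sandbox runner 
would map read-only before invoking the real compiler:

/* Hypothetical sketch: enumerate the non-system inputs of a translation
   unit with "gcc -MM" so a sandbox runner can map exactly those files
   read-only.  Error handling abbreviated.  */
#include <stdio.h>
#include <string.h>

int
main (int argc, char **argv)
{
  if (argc < 2)
    {
      fprintf (stderr, "usage: %s file.cc\n", argv[0]);
      return 1;
    }

  /* -MM emits a make rule whose prerequisites are the source file and
     its non-system headers.  */
  char cmd[1024];
  snprintf (cmd, sizeof cmd, "gcc -MM %s", argv[1]);
  FILE *p = popen (cmd, "r");
  if (p == NULL)
    {
      perror ("popen");
      return 1;
    }

  /* Split the rule on whitespace and "\" continuations; every token
     except the "target:" is a path to map into the sandbox.  */
  char buf[4096];
  while (fgets (buf, sizeof buf, p))
    for (char *tok = strtok (buf, " \t\n\\"); tok != NULL;
         tok = strtok (NULL, " \t\n\\"))
      if (tok[strlen (tok) - 1] != ':')
        printf ("map read-only: %s\n", tok);

  return pclose (p) == 0 ? 0 : 1;
}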



I'm not sure a generic utility can achieve this unless the outputs need to be
retrieved from somewhere else (not the "usual" place used when invoking
un-sandboxed).

Even the driver doesn't necessarily know all files read/written.

So I suppose better defining of the actual goal is in order.


gcc -sandboxed -O2 -c t.ii -fdump-tree-all


what should this do?  IMO invoked tools (gas, cc1plus) should be restricted
to accessing only the input files.  Ideally the dumps would appear where they
appear when not sandboxed but clearly overwriting existing files would be
problematic, writing new files not so much, but only to the standard (or
specified) auxiliary output file paths.


Couldn't we get away with not having to handle dump files?  They don't 
seem to be sensitive targets.


Thanks,
Sid


Re: [RFC] GCC Security policy

2024-02-12 Thread Richard Biener
On Mon, Feb 12, 2024 at 2:35 PM Siddhesh Poyarekar  wrote:
>
> On 2024-02-12 08:16, Martin Jambor wrote:
> >> This probably ties in somewhat with an idea David Malcolm had riffed on
> >> with me earlier, of caching files for diagnostics.  If we could unify
> >> file accesses somehow, we could make this happen, i.e. open/read files
> >> as root and then do all execution as non-root.
> >>
> >> Sandboxing will have similar requirements, i.e. map in input files and
> >> an output file handle upfront and then unshare() into a sandbox to do
> >> the actual compilation.  This will make sure that at least the
> >> processing of inputs does not affect the system on which the compilation
> >> is being run.
> >
> > Right.  As we often just download some (sometimes large) pre-processed
> > source from Bugzilla and then happily run GCC on it on our computers,
> > this feature might be actually useful for us (still, we'd probably need
> > a more concrete description of what we want, would e.g. using "-wrapper
> > gdb,--args" work in such a sandbox?).  I agree that for some even
> > semi-complex builds, a more general sandboxing solution is probably
> > better.
>
> Joseph seems to be leaning towards nudging people to a general
> sandboxing solution too.  The question then is whether this takes the
> shape of a utility in, e.g. contrib that builds such a sandbox or simply
> a wiki page.

GCC driver support would then be limited to identifying the inputs and the outputs.
I'm not sure a generic utility can achieve this unless the outputs need to be
retrieved from somewhere else (not the "usual" place used when invoking
un-sandboxed).

Even the driver doesn't necessarily know all files read/written.

So I suppose better defining of the actual goal is in order.

> gcc -sandboxed -O2 -c t.ii -fdump-tree-all

what should this do?  IMO invoked tools (gas, cc1plus) should be restricted
to accessing only the input files.  Ideally the dumps would appear where they
appear when not sandboxed but clearly overwriting existing files would be
problematic, writing new files not so much, but only to the standard (or
specified) auxiliary output file paths.

Richard.

> Thanks,
> Sid


Re: [RFC] GCC Security policy

2024-02-12 Thread Siddhesh Poyarekar

On 2024-02-12 08:16, Martin Jambor wrote:

This probably ties in somewhat with an idea David Malcolm had riffed on
with me earlier, of caching files for diagnostics.  If we could unify
file accesses somehow, we could make this happen, i.e. open/read files
as root and then do all execution as non-root.

Sandboxing will have similar requirements, i.e. map in input files and
an output file handle upfront and then unshare() into a sandbox to do
the actual compilation.  This will make sure that at least the
processing of inputs does not affect the system on which the compilation
is being run.


Right.  As we often just download some (sometimes large) pre-processed
source from Bugzilla and then happily run GCC on it on our computers,
this feature might be actually useful for us (still, we'd probably need
a more concrete description of what we want, would e.g. using "-wrapper
gdb,--args" work in such a sandbox?).  I agree that for some even
semi-complex builds, a more general sandboxing solution is probably
better.


Joseph seems to be leaning towards nudging people to a general 
sandboxing solution too.  The question then is whether this takes the 
shape of a utility in, e.g. contrib that builds such a sandbox or simply 
a wiki page.


Thanks,
Sid


Re: [RFC] GCC Security policy

2024-02-12 Thread Siddhesh Poyarekar

On 2024-02-09 15:06, Joseph Myers wrote:

Ideally dependencies would be properly set up so that everything is built
in the original build, and ideally there would be no need to relink at
install time (I'm not sure of the exact circumstances in which it might be
needed, or on what OSes to e.g. encode the right library paths in final
installed executables).  In practice I think it's common for some building
to take place at install time.

There is a more general principle here of composability: it's not helpful
for being able to write scripts or makefiles combining invocations of
different utilities and have them behave predictably if some of those
utilities start making judgements about whether it's a good idea to run
them in a particular environment rather than just doing their job
independent of irrelevant aspects of the environment.  The semantics of
invoking "gcc" have nothing to do with whether it's run as root; it should
never need to look up what user it's running as at all.  (And it's
probably also a bad idea for lots of separate utilities to gain their own
ways to run in a restricted environment, for similar reasons; rather than
teaching "gcc" a way to create a restricted environment itself, ensure
there are easy-to-use more general utilities for running arbitrary
programs on untrusted input in a contained environment.)


I see your point.  The way you put it, there's no GCC project here at 
all then.


Sid


Re: [RFC] GCC Security policy

2024-02-12 Thread Martin Jambor
Hi,

On Fri, Feb 09 2024, Siddhesh Poyarekar wrote:
> On 2024-02-09 10:38, Martin Jambor wrote:
>> If anyone is interested in scoping this and then mentoring this as a
>> Google Summer of Code project this year then now is the right time to
>> speak up!
>
> I can help with mentoring and reviews, although I'll need someone to 
> assist with actual approvals.

I'm sure that we could manage that.  The project does not look like it
would be a huge one.

>
> There are two distinct sets of ideas to explore, one is privilege 
> management and the other sandboxing.
>
> For privilege management we could add a --allow-root driver flag that 
> allows gcc to run as root.  Without the flag one could either outright 
> refuse to run or drop privileges and run.  Dropping privileges will be a 
> bit tricky to implement because it would need a user to drop privileges 
> to and then there would be the question of how to manage file access to 
> read the compiler input and write out the compiler output.  If there's 
> no such user, gcc could refuse to run as root by default.  I wonder 
> though if from a security posture perspective it makes sense to simply 
> discourage running as root all the time and not bother trying to make it 
> work with dropped privileges and all that.  Of course it would mean that 
> this would be less of a "project"; it'll be a simple enough patch to 
> refuse to run until --allow-root is specified.

Yeah, this would not be enough for a GSoC project, not even for their
new small project category.

Additionally, I think that many, if not all, Linux distributions that
build binary packages do so in a VM/container/chroot, simply as root,
because the whole environment exists just for the build.  So this would
complicate life for an important set of our users.

>
> This probably ties in somewhat with an idea David Malcolm had riffed on 
> with me earlier, of caching files for diagnostics.  If we could unify 
> file accesses somehow, we could make this happen, i.e. open/read files 
> as root and then do all execution as non-root.
>
> Sandboxing will have similar requirements, i.e. map in input files and 
> an output file handle upfront and then unshare() into a sandbox to do 
> the actual compilation.  This will make sure that at least the 
> processing of inputs does not affect the system on which the compilation 
> is being run.

Right.  As we often just download some (sometimes large) pre-processed
source from Bugzilla and then happily run GCC on it on our computers,
this feature might be actually useful for us (still, we'd probably need
a more concrete description of what we want, would e.g. using "-wrapper
gdb,--args" work in such a sandbox?).  I agree that for some even
semi-complex builds, a more general sandboxing solution is probably
better.

Martin


Re: [RFC] GCC Security policy

2024-02-09 Thread Joseph Myers
On Fri, 9 Feb 2024, Siddhesh Poyarekar wrote:

> > I think disallowing running as root would be a big problem in practice -
> > the typical problem case is when people build software as non-root and run
> > "make install" as root, and for some reason "make install" wants to
> > (re)build or (re)link something.
> 
> Isn't that a problematic practice though?  Or maybe have those invocations be
> separated out as CC_ROOT?

Ideally dependencies would be properly set up so that everything is built 
in the original build, and ideally there would be no need to relink at 
install time (I'm not sure of the exact circumstances in which it might be 
needed, or on what OSes to e.g. encode the right library paths in final 
installed executables).  In practice I think it's common for some building 
to take place at install time.

There is a more general principle here of composability: it's not helpful 
for being able to write scripts or makefiles combining invocations of 
different utilities and have them behave predictably if some of those 
utilities start making judgements about whether it's a good idea to run 
them in a particular environment rather than just doing their job 
independent of irrelevant aspects of the environment.  The semantics of 
invoking "gcc" have nothing to do with whether it's run as root; it should 
never need to look up what user it's running as at all.  (And it's 
probably also a bad idea for lots of separate utilities to gain their own 
ways to run in a restricted environment, for similar reasons; rather than 
teaching "gcc" a way to create a restricted environment itself, ensure 
there are easy-to-use more general utilities for running arbitrary 
programs on untrusted input in a contained environment.)

-- 
Joseph S. Myers
josmyers@redhat.com



Re: [RFC] GCC Security policy

2024-02-09 Thread Siddhesh Poyarekar

On 2024-02-09 12:14, Joseph Myers wrote:

On Fri, 9 Feb 2024, Siddhesh Poyarekar wrote:


For privilege management we could add a --allow-root driver flag that allows
gcc to run as root.  Without the flag one could either outright refuse to run
or drop privileges and run.  Dropping privileges will be a bit tricky to
implement because it would need a user to drop privileges to and then there
would be the question of how to manage file access to read the compiler input
and write out the compiler output.  If there's no such user, gcc could refuse
to run as root by default.  I wonder though if from a security posture
perspective it makes sense to simply discourage running as root all the time
and not bother trying to make it work with dropped privileges and all that.
Of course it would mean that this would be less of a "project"; it'll be a
simple enough patch to refuse to run until --allow-root is specified.


I think disallowing running as root would be a big problem in practice -
the typical problem case is when people build software as non-root and run
"make install" as root, and for some reason "make install" wants to
(re)build or (re)link something.


Isn't that a problematic practice though?  Or maybe have those 
invocations be separated out as CC_ROOT?


Thanks,
Sid


Re: [RFC] GCC Security policy

2024-02-09 Thread Joseph Myers
On Fri, 9 Feb 2024, Siddhesh Poyarekar wrote:

> For privilege management we could add a --allow-root driver flag that allows
> gcc to run as root.  Without the flag one could either outright refuse to run
> or drop privileges and run.  Dropping privileges will be a bit tricky to
> implement because it would need a user to drop privileges to and then there
> would be the question of how to manage file access to read the compiler input
> and write out the compiler output.  If there's no such user, gcc could refuse
> to run as root by default.  I wonder though if from a security posture
> perspective it makes sense to simply discourage running as root all the time
> and not bother trying to make it work with dropped privileges and all that.
> Of course it would mean that this would be less of a "project"; it'll be a
> simple enough patch to refuse to run until --allow-root is specified.

I think disallowing running as root would be a big problem in practice - 
the typical problem case is when people build software as non-root and run 
"make install" as root, and for some reason "make install" wants to 
(re)build or (re)link something.

-- 
Joseph S. Myers
josmyers@redhat.com



Re: [RFC] GCC Security policy

2024-02-09 Thread Siddhesh Poyarekar

On 2024-02-09 10:38, Martin Jambor wrote:

If anyone is interested in scoping this and then mentoring this as a
Google Summer of Code project this year then now is the right time to
speak up!


I can help with mentoring and reviews, although I'll need someone to 
assist with actual approvals.


There are two distinct sets of ideas to explore, one is privilege 
management and the other sandboxing.


For privilege management we could add a --allow-root driver flag that 
allows gcc to run as root.  Without the flag one could either outright 
refuse to run or drop privileges and run.  Dropping privileges will be a 
bit tricky to implement because it would need a user to drop privileges 
to and then there would be the question of how to manage file access to 
read the compiler input and write out the compiler output.  If there's 
no such user, gcc could refuse to run as root by default.  I wonder 
though if from a security posture perspective it makes sense to simply 
discourage running as root all the time and not bother trying to make it 
work with dropped privileges and all that.  Of course it would mean that 
this would be less of a "project"; it'll be a simple enough patch to 
refuse to run until --allow-root is specified.
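
To make that concrete, here is a rough sketch of what the driver-side 
check might look like; both the --allow-root flag and the "gcc-user" 
fallback account are assumptions from this discussion, not existing GCC 
behaviour:

/* Hypothetical sketch: refuse to run as root unless --allow-root is
   given, optionally dropping to an unprivileged account first.  */
#include <grp.h>
#include <pwd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void
check_root (int argc, char **argv)
{
  if (getuid () != 0 && geteuid () != 0)
    return;  /* Not root, nothing to do.  */

  for (int i = 1; i < argc; i++)
    if (strcmp (argv[i], "--allow-root") == 0)
      return;  /* User explicitly asked to run as root.  */

  /* Try dropping privileges: supplementary groups first, then gid,
     then uid.  Note the open question above: after this point the
     compiler may no longer be able to read its input or write its
     output.  */
  struct passwd *pw = getpwnam ("gcc-user");
  if (pw != NULL
      && setgroups (0, NULL) == 0
      && setgid (pw->pw_gid) == 0
      && setuid (pw->pw_uid) == 0)
    return;

  fprintf (stderr,
           "gcc: refusing to run as root; use --allow-root to override\n");
  exit (1);
}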


This probably ties in somewhat with an idea David Malcolm had riffed on 
with me earlier, of caching files for diagnostics.  If we could unify 
file accesses somehow, we could make this happen, i.e. open/read files 
as root and then do all execution as non-root.


Sandboxing will have similar requirements, i.e. map in input files and 
an output file handle upfront and then unshare() into a sandbox to do 
the actual compilation.  This will make sure that at least the 
processing of inputs does not affect the system on which the compilation 
is being run.
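
On Linux that could look roughly like the following sketch (assumed 
file names, no mount-table lockdown, abbreviated error handling):

/* Hypothetical sketch: open input and output up front, then unshare()
   into fresh user, mount and network namespaces before running the
   compilation proper.  A real sandbox would also restrict the mount
   table; this only shows the shape of the approach.  */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int
main (void)
{
  /* 1. Acquire file handles while still outside the sandbox.  O_EXCL
     enforces the "output must be a new file" rule discussed elsewhere
     in this thread.  */
  int in_fd = open ("t.ii", O_RDONLY);
  int out_fd = open ("t.o", O_WRONLY | O_CREAT | O_EXCL, 0644);
  if (in_fd < 0 || out_fd < 0)
    {
      perror ("open");
      return 1;
    }

  /* 2. Detach from the host; CLONE_NEWUSER lets an unprivileged user
     create the mount and network namespaces.  */
  if (unshare (CLONE_NEWUSER | CLONE_NEWNS | CLONE_NEWNET) != 0)
    {
      perror ("unshare");
      return 1;
    }

  /* 3. Compile via the pre-opened descriptors, referenced through
     /proc/self/fd so nothing new is opened by name.  */
  char in[64], out[64];
  snprintf (in, sizeof in, "/proc/self/fd/%d", in_fd);
  snprintf (out, sizeof out, "/proc/self/fd/%d", out_fd);
  execlp ("gcc", "gcc", "-O2", "-c", "-x", "c++-cpp-output", in,
          "-o", out, (char *) NULL);
  perror ("execlp");
  return 1;
}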


Sid


Re: [RFC] GCC Security policy

2024-02-09 Thread Martin Jambor
Hi,

On Tue, Aug 08 2023, Richard Biener via Gcc-patches wrote:
> On Tue, Aug 8, 2023 at 2:33 PM Siddhesh Poyarekar  wrote:
>>
>> On 2023-08-08 04:16, Richard Biener wrote:
>> > On Mon, Aug 7, 2023 at 7:30 PM David Edelsohn via Gcc-patches
>> >  wrote:
>> >>
>> >> FOSS Best Practices recommends that projects have an official Security
>> >> policy stated in a SECURITY.md or SECURITY.txt file at the root of the
>> >> repository.  GLIBC and Binutils have added such documents.
>> >>
>> >> Appended is a prototype for a Security policy file for GCC based on the
>> >> Binutils document because GCC seems to have more affinity with Binutils as
>> >> a tool. Do the runtime libraries distributed with GCC, especially libgcc,
>> >> require additional security policies?
>> >>
>> >> [ ] Is it appropriate to use the Binutils SECURITY.txt as the starting
>> >> point or should GCC use GLIBC SECURITY.md as the starting point for the 
>> >> GCC
>> >> Security policy?
>> >>
>> >> [ ] Does GCC, or some components of GCC, require additional care because 
>> >> of
>> >> runtime libraries like libgcc and libstdc++, and because of gcov and
>> >> profile-directed feedback?
>> >
>> > I do think that the runtime libraries should at least be explicitly 
>> > mentioned
>> > because they fall into the "generated output" category and bugs in the
>> > runtime are usually more severe as affecting a wider class of inputs.
>>
>> Ack, I'd expect libstdc++ and libgcc to be aligned with glibc's
>> policies.  libiberty and others on the other hand, would probably be
>> more suitably aligned with binutils libbfd, where we assume trusted input.
>>
>> >> Thoughts?
>> >>
>> >> Thanks, David
>> >>
>> >> GCC Security Process
>> >> ====================
>> >>
>> >> What is a GCC security bug?
>> >> ===========================
>> >>
>> >>  A security bug is one that threatens the security of a system or
>> >>  network, or might compromise the security of data stored on it.
>> >>  In the context of GCC there are two ways in which such
>> >>  bugs might occur.  In the first, the programs themselves might be
>> >>  tricked into a direct compromise of security.  In the second, the
>> >>  tools might introduce a vulnerability in the generated output that
>> >>  was not already present in the files used as input.
>> >>
>> >>  Other than that, all other bugs will be treated as non-security
>> >>  issues.  This does not mean that they will be ignored, just that
>> >>  they will not be given the priority that is given to security bugs.
>> >>
>> >>  This stance applies to the creation tools in the GCC (e.g.,
>> >>  gcc, g++, gfortran, gccgo, gccrs, gnat, cpp, gcov, etc.) and the
>> >>  libraries that they use.
>> >>
>> >> Notes:
>> >> ==
>> >>
>> >>  None of the programs in GCC need elevated privileges to operate and
>> >>  it is recommended that users do not use them from accounts where such
>> >>  privileges are automatically available.
>> >
>> > I'll note that we could ourselves mitigate some of that by handling 
>> > privileged
>> > invocation of the driver specially, dropping privs on exec of the sibling 
>> > tools
>> > and possibly using temporary files or pipes to do the parts of the I/O that
>> > need to be privileged.
>>
>> It's not a bad idea, but it ends up legitimizing running the
>> compiler as root, pushing the responsibility of privilege management to
>> the driver.  How about rejecting invocation as root altogether by
>> default, bypassed with a --run-as-root flag instead?
>>
>> I've also been thinking about a --sandbox flag that isolates the build
>> process (for gcc as well as binutils) into a separate namespace so that
>> it's usable in a restricted mode on untrusted sources without exposing
>> the rest of the system to it.
>
> There's probably external tools to do this, not sure if we should replicate
> things in the driver for this.
>
> But sure, I think the driver is the proper point to address any of such
> issues - iff we want to address them at all.  Maybe a nice little
> google summer-of-code project ;)
>

If anyone is interested in scoping this and then mentoring this as a
Google Summer of Code project this year then now is the right time to
speak up!

Thanks,

Martin


Re: [RFC] GCC Security policy

2023-09-20 Thread Arnaud Charlet
This is a great initiative I think.

See reference to AdaCore's security email below (among Debian, Red Hat,
SUSE)

On Mon, Aug 7, 2023 at 7:30 PM David Edelsohn via Gcc-patches <
gcc-patches@gcc.gnu.org> wrote:

> FOSS Best Practices recommends that projects have an official Security
> policy stated in a SECURITY.md or SECURITY.txt file at the root of the
> repository.  GLIBC and Binutils have added such documents.
>
> Appended is a prototype for a Security policy file for GCC based on the
> Binutils document because GCC seems to have more affinity with Binutils as
> a tool. Do the runtime libraries distributed with GCC, especially libgcc,
> require additional security policies?
>
> [ ] Is it appropriate to use the Binutils SECURITY.txt as the starting
> point or should GCC use GLIBC SECURITY.md as the starting point for the GCC
> Security policy?
>
> [ ] Does GCC, or some components of GCC, require additional care because of
> runtime libraries like libgcc and libstdc++, and because of gcov and
> profile-directed feedback?
>
> Thoughts?
>
> Thanks, David
>
> GCC Security Process
> ====================
>
> What is a GCC security bug?
> ===========================
>
> A security bug is one that threatens the security of a system or
> network, or might compromise the security of data stored on it.
> In the context of GCC there are two ways in which such
> bugs might occur.  In the first, the programs themselves might be
> tricked into a direct compromise of security.  In the second, the
> tools might introduce a vulnerability in the generated output that
> was not already present in the files used as input.
>
> Other than that, all other bugs will be treated as non-security
> issues.  This does not mean that they will be ignored, just that
> they will not be given the priority that is given to security bugs.
>
> This stance applies to the creation tools in the GCC (e.g.,
> gcc, g++, gfortran, gccgo, gccrs, gnat, cpp, gcov, etc.) and the
> libraries that they use.
>
> Notes:
> ==
>
> None of the programs in GCC need elevated privileges to operate and
> it is recommended that users do not use them from accounts where such
> privileges are automatically available.
>
> Reporting private security bugs
> ===============================
>
>*All bugs reported in the GCC Bugzilla are public.*
>
>In order to report a private security bug that is not immediately
>public, please contact one of the downstream distributions with
>security teams.  The following teams have volunteered to handle
>such bugs:
>
>   Debian:  security@debian.org
>   Red Hat: secalert@redhat.com
>   SUSE:    security@suse.de


Can you also please add:

AdaCore:  product-security@adacore.com


>
>Please report the bug to just one of these teams.  It will be shared
>with other teams as necessary.
>
>The team contacted will take care of details such as vulnerability
>rating and CVE assignment (http://cve.mitre.org/about/).  It is likely
>that the team will ask to file a public bug because the issue is
>sufficiently minor and does not warrant an embargo.  An embargo is not
>a requirement for being credited with the discovery of a security
>vulnerability.
>
> Reporting public security bugs
> ==============================
>
>It is expected that critical security bugs will be rare, and that most
>security bugs can be reported in GCC, thus making
>them public immediately.  The system can be found here:
>
>   https://gcc.gnu.org/bugzilla/
>


Re: [RFC] GCC Security policy

2023-09-06 Thread Siddhesh Poyarekar

Hello folks,

Here's v3 of the top part of the security policy.  Hopefully this 
addresses all concerns raised so far.


Thanks,
Sid


What is a GCC security bug?
===========================

A security bug is one that threatens the security of a system or
network, or might compromise the security of data stored on it.
In the context of GCC there are multiple ways in which this might
happen and they're detailed below.

Compiler drivers, programs, libgccjit and support libraries
------------------------------------------------------------

The compiler driver processes source code, invokes other programs
such as the assembler and linker and generates the output result,
which may be assembly code or machine code.  Compiling untrusted
sources can result in arbitrary code execution and unconstrained
resource consumption in the compiler. As a result, compilation of
such code should be done inside a sandboxed environment to ensure
that it does not compromise the development environment.

The libgccjit library can, despite the name, be used both for
ahead-of-time compilation and for just-in-time compilation.  In both
cases it can be used to translate input representations (such as
source code) in the application context; in the latter case the
generated code is also run in the application context.

Limitations that apply to the compiler driver apply here too in
terms of sanitizing inputs and it is recommended that both the
compilation *and* execution context of the code are appropriately
sandboxed to contain the effects of any bugs in libgccjit, the
application code using it, or its generated code to the sandboxed
environment.
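
To make the "application context" point concrete, here is a minimal 
sketch using the documented libgccjit C API (error checking omitted): 
both the compilation and the call of the generated code happen inside 
the host process, which is why sandboxing both contexts is recommended.

/* Build with -lgccjit.  Compiles and runs int square (int) in-process.  */
#include <libgccjit.h>
#include <stdio.h>

int
main (void)
{
  gcc_jit_context *ctxt = gcc_jit_context_acquire ();
  gcc_jit_type *int_type = gcc_jit_context_get_type (ctxt, GCC_JIT_TYPE_INT);

  /* Equivalent of: int square (int i) { return i * i; }  */
  gcc_jit_param *i = gcc_jit_context_new_param (ctxt, NULL, int_type, "i");
  gcc_jit_function *fn
    = gcc_jit_context_new_function (ctxt, NULL, GCC_JIT_FUNCTION_EXPORTED,
                                    int_type, "square", 1, &i, 0);
  gcc_jit_block *block = gcc_jit_function_new_block (fn, NULL);
  gcc_jit_block_end_with_return
    (block, NULL,
     gcc_jit_context_new_binary_op (ctxt, NULL, GCC_JIT_BINARY_OP_MULT,
                                    int_type, gcc_jit_param_as_rvalue (i),
                                    gcc_jit_param_as_rvalue (i)));

  /* Compilation happens here, in the application's address space...  */
  gcc_jit_result *result = gcc_jit_context_compile (ctxt);

  /* ...and so does execution of the generated code.  */
  int (*square) (int)
    = (int (*) (int)) gcc_jit_result_get_code (result, "square");
  printf ("%d\n", square (5));

  gcc_jit_result_release (result);
  gcc_jit_context_release (ctxt);
  return 0;
}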

Support libraries such as libiberty, libcc1, libvtv and libcpp have
been developed separately to share code with other tools such as
binutils and gdb.  These libraries again have similar challenges to
compiler drivers.  While they are expected to be robust against
arbitrary input, they should only be used with trusted inputs.

Libraries such as zlib that are bundled with GCC to build it will be
treated the same as the compiler drivers and programs as far as
security coverage is concerned.  However, if you find an issue in
these libraries independent of their use in GCC, you should reach
out to their upstream projects to report it.

As a result, the only case for a potential security issue in the
compiler is when it generates vulnerable application code for
trusted input source code that is conforming to the relevant
programming standard or extensions documented as supported by GCC
and the algorithm expressed in the source code does not have the
vulnerability.  The output application code could be considered
vulnerable if it produces an actual vulnerability in the target
application, specifically in the following cases:

- The application dereferences an invalid memory location despite
  the application sources being valid.
- The application reads from or writes to a valid but incorrect
  memory location, resulting in an information integrity issue or an
  information leak.
- The application ends up running in an infinite loop or with
  severe degradation in performance despite the input sources having
  no such issue, resulting in a Denial of Service.  Note that
correct but non-performant code is not a security issue candidate;
this only applies to incorrect code that may result in performance
  degradation severe enough to amount to a denial of service.
- The application crashes due to the generated incorrect code,
  resulting in a Denial of Service.

Language runtime libraries
--------------------------

GCC also builds and distributes libraries that are intended to be
used widely to implement runtime support for various programming
languages.  These include the following:

* libada
* libatomic
* libbacktrace
* libcc1
* libcody
* libcpp
* libdecnumber
* libffi
* libgcc
* libgfortran
* libgm2
* libgo
* libgomp
* libiberty
* libitm
* libobjc
* libphobos
* libquadmath
* libsanitizer
* libssp
* libstdc++

These libraries are intended to be used in arbitrary contexts and as
a result, bugs in these libraries may be evaluated for security
impact.  However, some of these libraries, e.g. libgo, libphobos,
etc., are not maintained in the GCC project, so the GCC project
may not be the correct point of contact for them.  You are
encouraged to look at README files within those library directories
to locate the canonical security contact point for those projects
and include them in the report.  Once the issue is fixed in the
upstream project, the fix will be synced into GCC in a future
release.

Most security vulnerabilities in these runtime 

Re: [RFC] GCC Security policy

2023-08-16 Thread Alexander Monakov


On Wed, 16 Aug 2023, Siddhesh Poyarekar wrote:

> > Yeah, indicating scenarios that fall outside of intended guarantees should
> > be helpful. I feel the exact text quoted above will be hard to decipher
> > without knowing the discussion that led to it. Some sort of supplementary
> > section with examples might help there.
> 
> Ah, so I had started out by listing examples but dropped them before emailing.
> How about:
> 
> Similarly, GCC may transform code in a way that the correctness of
> the expressed algorithm is preserved but supplementary properties
> that are observable only outside the program or through a
> vulnerability in the program, may not be preserved.  Examples
> of such supplementary properties could be the state of memory after
> it is no longer in use, performance and timing characteristics of a
> program, state of the CPU cache, etc. Such issues are not security
> vulnerabilities in GCC and in such cases, the vulnerability that
> caused exposure of the supplementary properties must be fixed.

I would say that as follows:

Similarly, GCC may transform code in a way that the correctness of
the expressed algorithm is preserved, but supplementary properties
that are not specifically expressible in a high-level language
are not preserved. Examples of such supplementary properties
include absence of sensitive data in the program's address space
after an attempt to wipe it, or data-independent timing of code.
When the source code attempts to express such properties, failure
to preserve them in resulting machine code is not a security issue
in GCC.

Alexander


Re: [RFC] GCC Security policy

2023-08-16 Thread Siddhesh Poyarekar

On 2023-08-16 11:06, Alexander Monakov wrote:

No I understood the distinction you're trying to make, I just wanted to point
out that the effect isn't all that different.  The intent of the wording is
not to prescribe a solution, but to describe what the compiler cannot do and
hence, users must find a way to do this.  I think we have a consensus on this
part of the wording though because we're not really responsible for the
prescription here and I'm happy with just asking users to sandbox.


Nice!


I suppose it's kinda like saying "don't try this at home".  You know many will
and some will break their leg while others will come out of it feeling
invincible.  Our job is to let them know that they will likely break their leg
:)


Continuing this analogy, I was protesting against doing our job by telling
users "when trying this at home, make sure to wear vibranium shielding"
while knowing for sure that nobody can, in fact, obtain said shielding,
making our statement not helpful and rather tautological.


:)


How about this in the last section titled "Security features implemented in
GCC", since that's where we also deal with security hardening.

 Similarly, GCC may transform code in a way that the correctness of
 the expressed algorithm is preserved but supplementary properties
 that are observable only outside the program or through a
 vulnerability in the program, may not be preserved.  This is not a
 security issue in GCC and in such cases, the vulnerability that
 caused exposure of the supplementary properties must be fixed.


Yeah, indicating scenarios that fall outside of intended guarantees should
be helpful. I feel the exact text quoted above will be hard to decipher
without knowing the discussion that led to it. Some sort of supplementary
section with examples might help there.


Ah, so I had started out by listing examples but dropped them before 
emailing.  How about:


Similarly, GCC may transform code in a way that the correctness of
the expressed algorithm is preserved but supplementary properties
that are observable only outside the program or through a
vulnerability in the program, may not be preserved.  Examples
of such supplementary properties could be the state of memory after
it is no longer in use, performance and timing characteristics of a
program, state of the CPU cache, etc. Such issues are not security
vulnerabilities in GCC and in such cases, the vulnerability that
caused exposure of the supplementary properties must be fixed.


In any case, I hope further discussion, clarification and wordsmithing
goes productively for you both here on the list and during the Cauldron.


Thanks!

Sid


Re: [RFC] GCC Security policy

2023-08-16 Thread Alexander Monakov


On Wed, 16 Aug 2023, Siddhesh Poyarekar wrote:

> No I understood the distinction you're trying to make, I just wanted to point
> out that the effect isn't all that different.  The intent of the wording is
> not to prescribe a solution, but to describe what the compiler cannot do and
> hence, users must find a way to do this.  I think we have a consensus on this
> part of the wording though because we're not really responsible for the
> prescription here and I'm happy with just asking users to sandbox.

Nice!

> I suppose it's kinda like saying "don't try this at home".  You know many will
> and some will break their leg while others will come out of it feeling
> invincible.  Our job is to let them know that they will likely break their leg
> :)

Continuing this analogy, I was protesting against doing our job by telling
users "when trying this at home, make sure to wear vibranium shielding"
while knowing for sure that nobody can, in fact, obtain said shielding,
making our statement not helpful and rather tautological.

> How about this in the last section titled "Security features implemented in
> GCC", since that's where we also deal with security hardening.
> 
> Similarly, GCC may transform code in a way that the correctness of
> the expressed algorithm is preserved but supplementary properties
> that are observable only outside the program or through a
> vulnerability in the program, may not be preserved.  This is not a
> security issue in GCC and in such cases, the vulnerability that
> caused exposure of the supplementary properties must be fixed.

Yeah, indicating scenarios that fall outside of intended guarantees should
be helpful. I feel the exact text quoted above will be hard to decipher
without knowing the discussion that led to it. Some sort of supplementary
section with examples might help there.

In any case, I hope further discussion, clarification and wordsmithing
goes productively for you both here on the list and during the Cauldron.

Thanks.
Alexander


Re: [RFC] GCC Security policy

2023-08-16 Thread Paul Koning via Gcc-patches



> On Aug 16, 2023, at 3:53 AM, Alexander Monakov  wrote:
> 
>> ...
>> Is "timing-safety" a security property?  Not the way I understand that
>> term.  It sounds like another way to say that the code meets real time
>> constraints or requirements.
> 
> I meant in the sense of not admitting timing attacks:
> https://en.wikipedia.org/wiki/Timing_attack
> 
>> No, compilers don't help with that (at least C doesn't -- Ada might be
>> better here but I don't know enough).  For sufficiently strict
>> requirements you'd have to examine both the generated machine code and
>> understand, in gruesome detail, what the timing behaviors of the executing
>> hardware are.  Good luck if it's a modern billion-transistor machine.
> 
> Yes. On the other hand, the reality in the FOSS ecosystem is that
> cryptographic libraries heavily lean on the ability to express
> a constant-time algorithm in C and get machine code that is actually
> constant-time. There's a bit of a conflict here between what we
> can promise and what people might expect of GCC, and it seems
> relevant when discussing what goes into the Security Policy.

I agree.  What should be said is that such techniques are erroneous.  The kind 
of code you're talking about inserts steps not strictly needed for the 
calculation to make it constant time (or more nearly so).  But clearly that has 
to rely on an assumption that the optimizer isn't smart enough to spot those 
unnecessary operations and delete them.  Never mind the fact that it relies on 
a notion that C statements have timing properties in the first place, something 
the standard doesn't define.
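
For illustration (a generic example, not code from the original 
message), here is a comparison written the way cryptographic code 
commonly writes it:

/* "Constant-time" comparison: the accumulator avoids an early exit in
   the C source, but since only acc == 0 is ever observed, an optimizer
   is entitled to restore a data-dependent early exit in the machine
   code, reintroducing the timing channel.  */
#include <stddef.h>

int
ct_equal (const unsigned char *a, const unsigned char *b, size_t n)
{
  unsigned char acc = 0;
  for (size_t i = 0; i < n; i++)
    acc |= a[i] ^ b[i];
  return acc == 0;
}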

So I would argue that a serious attempt to cure timing attacks has to be coded 
in assembly language.  Even then, of course, optimizations in modern machine 
pipelines may give you trouble, but at least in that case you're writing 
explicitly for a specific ISA and are in a position to take into account its 
timing properties, to the extent they are known and defined.

paul




Re: [RFC] GCC Security policy

2023-08-16 Thread Siddhesh Poyarekar

On 2023-08-15 19:07, Alexander Monakov wrote:


On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:


Thanks, this is nicer (see notes below). My main concern is that we
shouldn't pretend there's some method of verifying that arbitrary source
code is "safe" to pass to an unsandboxed compiler, nor should we push
the responsibility of doing that on users.


But responsibility would be pushed to users, wouldn't it?


Making users responsible for verifying that sources are "safe" is not okay
(we cannot teach them how to do that since there's no general method).
Making users responsible for sandboxing the compiler is fine (there's
a range of sandboxing solutions, from which they can choose according
to their requirements and threat model). Sorry about the ambiguity.


No I understood the distinction you're trying to make, I just wanted to 
point out that the effect isn't all that different.  The intent of the 
wording is not to prescribe a solution, but to describe what the 
compiler cannot do and hence, users must find a way to do this.  I think 
we have a consensus on this part of the wording though because we're not 
really responsible for the prescription here and I'm happy with just 
asking users to sandbox.


I suppose it's kinda like saying "don't try this at home".  You know 
many will and some will break their leg while others will come out of it 
feeling invincible.  Our job is to let them know that they will likely 
break their leg :)



inside a sandboxed environment to ensure that it does not compromise the
development environment.  Note that this still does not guarantee safety of
the produced output programs and that such programs should still either be
analyzed thoroughly for safety or run only inside a sandbox or an isolated
system to avoid compromising the execution environment.


The last statement seems to be a new addition. It is too broad and again
makes a reference to analysis that appears quite theoretical. It might be
better to drop this (and instead talk in more specific terms about any
guarantees that produced binary code matches security properties intended
by the sources; I believe Richard Sandiford raised this previously).


OK, so I actually cover this at the end of the section; Richard's point AFAICT
was about hardening, which I added another note for to make it explicit that
missed hardening does not constitute a CVE-worthy threat:


Thanks for the reminder. To illustrate what I was talking about, let me give
two examples:

1) safety w.r.t timing attacks: even if the source code is written in
a manner that looks timing-safe, it might be transformed in a way that
mounting a timing attack on the resulting machine code is possible;

2) safety w.r.t information leaks: even if the source code attempts
to discard sensitive data (such as passwords and keys) immediately
after use, (partial) copies of that data may be left on stack and
in registers, to be leaked later via a different vulnerability.

For both 1) and 2), GCC is not engineered to respect such properties
during optimization and code generation, so it's not appropriate for such
tasks (a possible solution is to isolate such sensitive functions to
separate files, compile to assembly, inspect the assembly to check that it
still has the required properties, and use the inspected asm in subsequent
builds instead of the original high-level source).
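
A concrete, well-known instance of example 2 (an illustration, not text 
from the draft): dead-store elimination may legally remove a final wipe 
of a secret, which is why interfaces like glibc's explicit_bzero() exist.

/* The final memset() is a dead store -- no conforming read of `key`
   follows -- so the compiler may drop it, leaving the secret in memory.
   derive_and_use_key() is a stand-in for real work.  */
#include <string.h>

extern void derive_and_use_key (unsigned char *key, size_t len);

void
handle_secret (void)
{
  unsigned char key[32];
  derive_and_use_key (key, sizeof key);
  memset (key, 0, sizeof key);       /* may be elided */
  /* explicit_bzero (key, sizeof key);  -- specified not to be elided */
}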


How about this in the last section titled "Security features implemented 
in GCC", since that's where we also deal with security hardening.


Similarly, GCC may transform code in a way that the correctness of
the expressed algorithm is preserved but supplementary properties
that are observable only outside the program or through a
vulnerability in the program, may not be preserved.  This is not a
security issue in GCC and in such cases, the vulnerability that
caused exposure of the supplementary properties must be fixed.

Thanks,
Sid


Re: [RFC] GCC Security policy

2023-08-16 Thread Alexander Monakov
> > Unfortunately the lines that follow:
> > 
> >>   either sanitized by an external program to allow only trusted,
> >>   safe compilation and execution in the context of the application,
> > 
> > again make a reference to a purely theoretical "external program" that
> > is not going to exist in reality, and I made a fuss about that in another
> > subthread (sorry Siddhesh). We shouldn't speak as if this solution is
> > actually available to users.
> > 
> > I know this is not the main point of your email, but we came up with
> > a better wording for the compiler driver, and it would be good to align
> > this text with that.
> 
> How about:
> 
> The libgccjit library can, despite the name, be used both for
> > ahead-of-time compilation and for just-in-time compilation.  In both
> cases it can be used to translate input representations (such as
> source code) in the application context; in the latter case the
> generated code is also run in the application context.
> 
> > Limitations that apply to the compiler driver apply here too in
> terms of sanitizing inputs and it is recommended that both the

I'd prefer 'trusting inputs' instead of 'sanitizing inputs' above.

> compilation *and* execution context of the code are appropriately
> sandboxed to contain the effects of any bugs in libgccjit, the
> application code using it, or its generated code to the sandboxed
> environment.

*thumbs up*

Thanks.
Alexander


Re: [RFC] GCC Security policy

2023-08-16 Thread Siddhesh Poyarekar

On 2023-08-16 04:25, Alexander Monakov wrote:


On Tue, 15 Aug 2023, David Malcolm via Gcc-patches wrote:


I'd prefer to reword this, as libgccjit was a poor choice of name for
the library (sorry!), to make it clearer it can be used for both ahead-
of-time and just-in-time compilation, and that as used for compilation,
the host considerations apply, not just those of the generated target
code.

How about:

  The libgccjit library can, despite the name, be used both for
  ahead-of-time compilation and for just-in-time compilation.  In both
  cases it can be used to translate input representations (such as
  source code) in the application context; in the latter case the
  generated code is also run in the application context.
  Limitations that apply to the compiler driver apply here too in
  terms of sanitizing inputs, so it is recommended that inputs are


Thanks David!



Unfortunately the lines that follow:


  either sanitized by an external program to allow only trusted,
  safe compilation and execution in the context of the application,


again make a reference to a purely theoretical "external program" that
is not going to exist in reality, and I made a fuss about that in another
subthread (sorry Siddhesh). We shouldn't speak as if this solution is
actually available to users.

I know this is not the main point of your email, but we came up with
a better wording for the compiler driver, and it would be good to align
this text with that.


How about:

The libgccjit library can, despite the name, be used both for
ahead-of-time compilation and for just-in-time compilation.  In both
cases it can be used to translate input representations (such as
source code) in the application context; in the latter case the
generated code is also run in the application context.

Limitations that apply to the compiler driver apply here too in
terms of sanitizing inputs and it is recommended that both the
compilation *and* execution context of the code are appropriately
sandboxed to contain the effects of any bugs in libgccjit, the
application code using it, or its generated code to the sandboxed
environment.


Re: [RFC] GCC Security policy

2023-08-16 Thread Toon Moene

On 8/16/23 01:07, Alexander Monakov wrote:


On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:


Thanks, this is nicer (see notes below). My main concern is that we
shouldn't pretend there's some method of verifying that arbitrary source
code is "safe" to pass to an unsandboxed compiler, nor should we push
the responsibility of doing that on users.


But responsibility would be pushed to users, wouldn't it?


Making users responsible for verifying that sources are "safe" is not okay
(we cannot teach them how to do that since there's no general method).


While there is no "general method" for this, there exists a whole 
Working Group under ISO whose responsibility is to identify and list 
vulnerabilities in programming languages - Working Group 23.


Its web page is: https://www.open-std.org/jtc1/sc22/wg23/

Kind regards,

--
Toon Moene - e-mail: toon@moene.org - phone: +31 346 214290
Saturnushof 14, 3738 XG  Maartensdijk, The Netherlands



Re: [RFC] GCC Security policy

2023-08-16 Thread Alexander Monakov


On Tue, 15 Aug 2023, David Malcolm via Gcc-patches wrote:

> I'd prefer to reword this, as libgccjit was a poor choice of name for
> the library (sorry!), to make it clearer it can be used for both ahead-
> of-time and just-in-time compilation, and that as used for compilation,
> the host considerations apply, not just those of the generated target
> code.
> 
> How about:
> 
>  The libgccjit library can, despite the name, be used both for
>  ahead-of-time compilation and for just-in-time compilation.  In both
>  cases it can be used to translate input representations (such as
>  source code) in the application context; in the latter case the
>  generated code is also run in the application context.
>  Limitations that apply to the compiler driver apply here too in
>  terms of sanitizing inputs, so it is recommended that inputs are

Unfortunately the lines that follow:

>  either sanitized by an external program to allow only trusted,
>  safe compilation and execution in the context of the application,

again make a reference to a purely theoretical "external program" that
is not going to exist in reality, and I made a fuss about that in another
subthread (sorry Siddhesh). We shouldn't speak as if this solution is
actually available to users.

I know this is not the main point of your email, but we came up with
a better wording for the compiler driver, and it would be good to align
this text with that.

Thanks.
Alexander


Re: [RFC] GCC Security policy

2023-08-16 Thread Alexander Monakov


On Tue, 15 Aug 2023, Paul Koning wrote:

> Now I'm confused.  I thought the whole point of what GCC is trying to do, and
> wants to document, is that it DOES preserve security properties.  If the
> source code is standards-compliant and contains algorithms free of security
> holes, then the compiler is supposed to deliver output code that is likewise
> free of holes -- in other words, the transformation performed by GCC does not
> introduce holes in a hole-free input.

Yes, we seem to broadly agree here. The text given by Siddhesh enumerates
scenarios where an incorrect transform could be considered a security bug.
My examples explore situations outside of those scenarios, picking two
popular security properties that cannot always be attained by writing
C source that vaguely appears to conform, and expecting the compiler
to translate it into machine code that actually conforms.

> > Granted, it is a bit of a stretch since the notion of timing-safety is
> > not really well-defined for C source code, but I didn't come up with
> > better examples.
> 
> Is "timing-safety" a security property?  Not the way I understand that
> term.  It sounds like another way to say that the code meets real time
> constraints or requirements.

I meant in the sense of not admitting timing attacks:
https://en.wikipedia.org/wiki/Timing_attack

> No, compilers don't help with that (at least C doesn't -- Ada might be
> better here but I don't know enough).  For sufficiently strict
> requirements you'd have to examine both the generated machine code and
> understand, in gruesome detail, what the timing behaviors of the executing
> hardware are.  Good luck if it's a modern billion-transistor machine.

Yes. On the other hand, the reality in the FOSS ecosystem is that
cryptographic libraries heavily lean on the ability to express
a constant-time algorithm in C and get machine code that is actually
constant-time. There's a bit of a conflict here between what we
can promise and what people might expect of GCC, and it seems
relevant when discussing what goes into the Security Policy.

Thanks.
Alexander


Re: [RFC] GCC Security policy

2023-08-15 Thread Paul Koning via Gcc-patches



> On Aug 15, 2023, at 8:37 PM, Alexander Monakov  wrote:
> 
>> ...
>> At some point the system tools need to respect the programmer or operator.
>> There is a difference between writing "Hello, World" and writing
>> performance critical or safety critical code.  That is the responsibility
>> of the programmer and the development team to choose the right software
>> engineers and right tools.  And to have the development environment and
>> checks in place to ensure that the results are meeting the requirements.
>> 
>> It is not the role of GCC or its security policy to tell people how to do
>> their job or hobby.  This isn't a safety tag required to be attached to a
>> new mattress.
> 
> Yes (though I'm afraid the analogy with the mattress is a bit lost on me).
> Those examples were meant to illustrate the point I tried to make earlier,
> not as additions proposed for the Security Policy. Specific examples
> where we can tell people in advance that compiler output needs to be
> verified, because the compiler is not engineered to preserve those
> security-relevant properties from the source code (and we would not
> accept such accidents as security bugs).

Now I'm confused.  I thought the whole point of what GCC is trying to do, and 
wants to document, is that it DOES preserve security properties.  If the source 
code is standards-compliant and contains algorithms free of security holes, 
then the compiler is supposed to deliver output code that is likewise free of 
holes -- in other words, the transformation performed by GCC does not introduce 
holes in a hole-free input.

> Granted, it is a bit of a stretch since the notion of timing-safety is
> not really well-defined for C source code, but I didn't come up with
> better examples.

Is "timing-safety" a security property?  Not the way I understand that term.  
It sounds like another way to say that the code meets real time constraints or 
requirements.  No, compilers don't help with that (at least C doesn't -- Ada 
might be better here but I don't know enough).  For sufficiently strict 
requirements you'd have to examine both the generated machine code and 
understand, in gruesome detail, what the timing behaviors of the executing 
hardware are.  Good luck if it's a modern billion-transistor machine.

Again, I don't see that as a security property.  If it's considered desirable 
to say something about this, fine, but the words Siddhesh crafted don't fit for 
that kind of property.

paul



Re: [RFC] GCC Security policy

2023-08-15 Thread Alexander Monakov


On Tue, 15 Aug 2023, David Edelsohn wrote:

> > Making users responsible for verifying that sources are "safe" is not okay
> > (we cannot teach them how to do that since there's no general method).
> > Making users responsible for sandboxing the compiler is fine (there's
> > a range of sandboxing solutions, from which they can choose according
> > to their requirements and threat model). Sorry about the ambiguity.
> >
> 
> Alex.
> 
> The compiler should faithfully implement the algorithms described by the
> programmer.  The compiler is responsible if it generates incorrect code for
> a well-defined, language-conforming program.  The compiler cannot be
> responsible for security issues inherent in the user code, whether that
> causes the compiler to function in a manner that adversely
> affects the system or generates code that behaves in a manner that
> adversely affects the system.
> 
> If "safe" is the wrong word. What word would you suggest?

I think "safe" is the right word here. We also used "trusted" in a similar
sense. I believe we were on the same page about that.

> > For both 1) and 2), GCC is not engineered to respect such properties
> > during optimization and code generation, so it's not appropriate for such
> > tasks (a possible solution is to isolate such sensitive functions to
> > separate files, compile to assembly, inspect the assembly to check that it
> > still has the required properties, and use the inspected asm in subsequent
> > builds instead of the original high-level source).
> >
> 
> At some point the system tools need to respect the programmer or operator.
> There is a difference between writing "Hello, World" and writing
> performance critical or safety critical code.  That is the responsibility
> of the programmer and the development team to choose the right software
> engineers and right tools.  And to have the development environment and
> checks in place to ensure that the results are meeting the requirements.
> 
> It is not the role of GCC or its security policy to tell people how to do
> their job or hobby.  This isn't a safety tag required to be attached to a
> new mattress.

Yes (though I'm afraid the analogy with the mattress is a bit lost on me).
Those examples were meant to illustrate the point I tried to make earlier,
not as additions proposed for the Security Policy. Specific examples
where we can tell people in advance that compiler output needs to be
verified, because the compiler is not engineered to preserve those
security-relevant properties from the source code (and we would not
accept such accidents as security bugs).

Granted, it is a bit of a stretch since the notion of timing-safety is
not really well-defined for C source code, but I didn't come up with
better examples.

Alexander


Re: [RFC] GCC Security policy

2023-08-15 Thread David Edelsohn via Gcc-patches
On Tue, Aug 15, 2023 at 7:07 PM Alexander Monakov 
wrote:

>
> On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:
>
> > > Thanks, this is nicer (see notes below). My main concern is that we
> > > shouldn't pretend there's some method of verifying that arbitrary source
> > > code is "safe" to pass to an unsandboxed compiler, nor should we push
> > > the responsibility of doing that on users.
> >
> > But responsibility would be pushed to users, wouldn't it?
>
> Making users responsible for verifying that sources are "safe" is not okay
> (we cannot teach them how to do that since there's no general method).
> Making users responsible for sandboxing the compiler is fine (there's
> a range of sandboxing solutions, from which they can choose according
> to their requirements and threat model). Sorry about the ambiguity.
>

Alex.

The compiler should faithfully implement the algorithms described by the
programmer.  The compiler is responsible if it generates incorrect code for
a well-defined, language-conforming program.  The compiler cannot be
responsible for security issues inherent in the user code, whether that
causes the compiler to function in a manner that adversely
affects the system or generates code that behaves in a manner that
adversely affects the system.

If "safe" is the wrong word. What word would you suggest?


> > So:
> >
> > The compiler driver processes source code, invokes other programs such as
> > the assembler and linker and generates the output result, which may be
> > assembly code or machine code.  Compiling untrusted sources can result in
> > arbitrary code execution and unconstrained resource consumption in the
> > compiler.  As a result, compilation of such code should be done inside a
> > sandboxed environment to ensure that it does not compromise the
> > development environment.
>
> I'm happy with this, thanks for bearing with me.
>
> > >> inside a sandboxed environment to ensure that it does not compromise the
> > >> development environment.  Note that this still does not guarantee safety
> > >> of the produced output programs and that such programs should still
> > >> either be analyzed thoroughly for safety or run only inside a sandbox or
> > >> an isolated system to avoid compromising the execution environment.
> > >
> > > The last statement seems to be a new addition. It is too broad and again
> > > makes a reference to analysis that appears quite theoretical. It might be
> > > better to drop this (and instead talk in more specific terms about any
> > > guarantees that produced binary code matches security properties intended
> > > by the sources; I believe Richard Sandiford raised this previously).
> >
> > OK, so I actually cover this at the end of the section; Richard's point
> AFAICT
> > was about hardening, which I added another note for to make it explicit
> that
> > missed hardening does not constitute a CVE-worthy threat:
>
> Thanks for the reminder. To illustrate what I was talking about, let me
> give
> two examples:
>
> 1) safety w.r.t timing attacks: even if the source code is written in
> a manner that looks timing-safe, it might be transformed in a way that
> mounting a timing attack on the resulting machine code is possible;
>
> 2) safety w.r.t information leaks: even if the source code attempts
> to discard sensitive data (such as passwords and keys) immediately
> after use, (partial) copies of that data may be left on stack and
> in registers, to be leaked later via a different vulnerability.
>
> For both 1) and 2), GCC is not engineered to respect such properties
> during optimization and code generation, so it's not appropriate for such
> tasks (a possible solution is to isolate such sensitive functions to
> separate files, compile to assembly, inspect the assembly to check that it
> still has the required properties, and use the inspected asm in subsequent
> builds instead of the original high-level source).
>

At some point the system tools need to respect the programmer or operator.
There is a difference between writing "Hello, World" and writing
performance-critical or safety-critical code.  It is the responsibility
of the programmer and the development team to choose the right software
engineers and the right tools, and to have the development environment and
checks in place to ensure that the results meet the requirements.

It is not the role of GCC or its security policy to tell people how to do
their job or hobby.  This isn't a safety tag required to be attached to a
new mattress.

Thanks, David


>
> Cheers.
> Alexander
>


Re: [RFC] GCC Security policy

2023-08-15 Thread David Malcolm via Gcc-patches
On Mon, 2023-08-14 at 09:26 -0400, Siddhesh Poyarekar wrote:
> Hi,
> 
> Here's the updated draft of the top part of the security policy with all 
> of the recommendations incorporated.
> 
> Thanks,
> Sid
> 
> 
> What is a GCC security bug?
> ===
> 
>  A security bug is one that threatens the security of a system or
>  network, or might compromise the security of data stored on it.
>  In the context of GCC there are multiple ways in which this might
>  happen and they're detailed below.
> 
> Compiler drivers, programs, libgccjit and support libraries
> ---
> 
>  The compiler driver processes source code, invokes other programs
>  such as the assembler and linker and generates the output result,
>  which may be assembly code or machine code.  It is necessary that
>  all source code inputs to the compiler are trusted, since it is
>  impossible for the driver to validate input source code beyond
>  conformance to a programming language standard.
> 
>  The GCC JIT implementation, libgccjit, is intended to be plugged
>  into applications to translate input source code in the application
>  context.  Limitations that apply to the compiler
>  driver, apply here too in terms of sanitizing inputs, so it is
>  recommended that inputs are either sanitized by an external program
>  to allow only trusted, safe execution in the context of the
>  application or the JIT execution context is appropriately sandboxed
>  to contain the effects of any bugs in the JIT or its generated code
>  to the sandboxed environment.

I'd prefer to reword this, as libgccjit was a poor choice of name for
the library (sorry!), to make it clearer it can be used for both ahead-
of-time and just-in-time compilation, and that as used for compilation,
the host considerations apply, not just those of the generated target
code.

How about:

 The libgccjit library can, despite the name, be used both for
 ahead-of-time compilation and for just-in-time compilation.  In both
 cases it can be used to translate input representations (such as
 source code) in the application context; in the latter case the
 generated code is also run in the application context.
 Limitations that apply to the compiler driver, apply here too in
 terms of sanitizing inputs, so it is recommended that inputs are
 either sanitized by an external program to allow only trusted,
 safe compilation and execution in the context of the application,
 or that both the compilation *and* execution context of the code
 are appropriately sandboxed to contain the effects of any bugs in
 libgccjit, the application code using it, or its generated code to
 the sandboxed environment.

...or similar.
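To make the dual use concrete, here is a minimal sketch of the JIT case,
based on my reading of the documented libgccjit C API (error checking
omitted; the function name "square" is just an example):

#include <libgccjit.h>
#include <stdio.h>

/* JIT-compile int square (int i) { return i * i; } and call it
   in-process: both the compilation and the generated code run in
   the application context.  */
int
main (void)
{
  gcc_jit_context *ctxt = gcc_jit_context_acquire ();
  gcc_jit_type *int_type
    = gcc_jit_context_get_type (ctxt, GCC_JIT_TYPE_INT);
  gcc_jit_param *i
    = gcc_jit_context_new_param (ctxt, NULL, int_type, "i");
  gcc_jit_function *fn
    = gcc_jit_context_new_function (ctxt, NULL,
                                    GCC_JIT_FUNCTION_EXPORTED,
                                    int_type, "square", 1, &i, 0);
  gcc_jit_block *block = gcc_jit_function_new_block (fn, NULL);
  gcc_jit_rvalue *i_times_i
    = gcc_jit_context_new_binary_op (ctxt, NULL, GCC_JIT_BINARY_OP_MULT,
                                     int_type,
                                     gcc_jit_param_as_rvalue (i),
                                     gcc_jit_param_as_rvalue (i));
  gcc_jit_block_end_with_return (block, NULL, i_times_i);

  gcc_jit_result *result = gcc_jit_context_compile (ctxt);
  int (*square) (int)
    = (int (*) (int)) gcc_jit_result_get_code (result, "square");
  printf ("%d\n", square (5));  /* generated code runs in-process */

  gcc_jit_context_release (ctxt);
  gcc_jit_result_release (result);
  return 0;
}

A bug anywhere along that path executes with the application's
privileges, which is the point of the wording above.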

[...snip...]

Thanks
Dave



Re: [RFC] GCC Security policy

2023-08-15 Thread Alexander Monakov


On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:

> > Thanks, this is nicer (see notes below). My main concern is that we
> > shouldn't pretend there's some method of verifying that arbitrary source
> > code is "safe" to pass to an unsandboxed compiler, nor should we push
> > the responsibility of doing that on users.
> 
> But responsibility would be pushed to users, wouldn't it?

Making users responsible for verifying that sources are "safe" is not okay
(we cannot teach them how to do that since there's no general method).
Making users responsible for sandboxing the compiler is fine (there's
a range of sandboxing solutions, from which they can choose according
to their requirements and threat model). Sorry about the ambiguity.

> So:
> 
> The compiler driver processes source code, invokes other programs such as the
> assembler and linker and generates the output result, which may be assembly
> code or machine code.  Compiling untrusted sources can result in arbitrary
> code execution and unconstrained resource consumption in the compiler. As a
> result, compilation of such code should be done inside a sandboxed environment
> to ensure that it does not compromise the development environment.

I'm happy with this, thanks for bearing with me.

> >> inside a sandboxed environment to ensure that it does not compromise the
> >> development environment.  Note that this still does not guarantee safety of
> >> the produced output programs and that such programs should still either be
> >> analyzed thoroughly for safety or run only inside a sandbox or an isolated
> >> system to avoid compromising the execution environment.
> > 
> > The last statement seems to be a new addition. It is too broad and again
> > makes a reference to analysis that appears quite theoretical. It might be
> > better to drop this (and instead talk in more specific terms about any
> > guarantees that produced binary code matches security properties intended
> > by the sources; I believe Richard Sandiford raised this previously).
> 
> OK, so I actually cover this at the end of the section; Richard's point AFAICT
> was about hardening, which I added another note for to make it explicit that
> missed hardening does not constitute a CVE-worthy threat:

Thanks for the reminder. To illustrate what I was talking about, let me give
two examples:

1) safety w.r.t timing attacks: even if the source code is written in
a manner that looks timing-safe, it might be transformed in a way that
mounting a timing attack on the resulting machine code is possible;

2) safety w.r.t information leaks: even if the source code attempts
to discard sensitive data (such as passwords and keys) immediately
after use, (partial) copies of that data may be left on stack and
in registers, to be leaked later via a different vulnerability.

For both 1) and 2), GCC is not engineered to respect such properties
during optimization and code generation, so it's not appropriate for such
tasks (a possible solution is to isolate such sensitive functions to
separate files, compile to assembly, inspect the assembly to check that it
still has the required properties, and use the inspected asm in subsequent
builds instead of the original high-level source).
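To make these concrete, a minimal sketch (load_key and verify are
hypothetical helpers, not real API):

#include <string.h>

void load_key (unsigned char *key);                 /* hypothetical */
int verify (const unsigned char *key,
            const unsigned char *mac, size_t n);    /* hypothetical */

int
check_mac (const unsigned char *mac, size_t n)
{
  unsigned char key[32];
  load_key (key);
  int ok = verify (key, mac, n);
  /* 2) 'key' is never read again, so this is a dead store that -O2
     may remove entirely, leaving the key bytes live on the stack.  */
  memset (key, 0, sizeof key);
  return ok;
}

/* 1) Looks branch-free and constant-time at the source level, but
   nothing obliges the optimizer to keep it that way on every target.  */
int
ct_memeq (const unsigned char *a, const unsigned char *b, size_t n)
{
  unsigned char acc = 0;
  for (size_t i = 0; i < n; i++)
    acc |= a[i] ^ b[i];
  return acc == 0;
}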

Cheers.
Alexander


Re: [RFC] GCC Security policy

2023-08-15 Thread Siddhesh Poyarekar

On 2023-08-15 10:07, Alexander Monakov wrote:


On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:


Does this as the first paragraph address your concerns:


Thanks, this is nicer (see notes below). My main concern is that we shouldn't
pretend there's some method of verifying that arbitrary source code is "safe"
to pass to an unsandboxed compiler, nor should we push the responsibility of
doing that on users.


But responsibility would be pushed to users, wouldn't it?


The compiler driver processes source code, invokes other programs such as the
assembler and linker and generates the output result, which may be assembly
code or machine code.  It is necessary that all source code inputs to the
compiler are trusted, since it is impossible for the driver to validate input
source code for safety.


The statement begins with "It is necessary", but the next statement offers
an alternative in case the code is untrusted. This is a contradiction.
Is it necessary or not in the end?

I'd suggest to drop this statement and instead make a brief note that
compiling crafted/untrusted sources can result in arbitrary code execution
and unconstrained resource consumption in the compiler.


So:

The compiler driver processes source code, invokes other programs such 
as the assembler and linker and generates the output result, which may 
be assembly code or machine code.  Compiling untrusted sources can 
result in arbitrary code execution and unconstrained resource 
consumption in the compiler. As a result, compilation of such code 
should be done inside a sandboxed environment to ensure that it does not 
compromise the development environment.



For untrusted code should compilation should be done

  ^^
 typo (spurious 'should')


Ack, thanks.




inside a sandboxed environment to ensure that it does not compromise the
development environment.  Note that this still does not guarantee safety of
the produced output programs and that such programs should still either be
analyzed thoroughly for safety or run only inside a sandbox or an isolated
system to avoid compromising the execution environment.


The last statement seems to be a new addition. It is too broad and again
makes a reference to analysis that appears quite theoretical. It might be
better to drop this (and instead talk in more specific terms about any
guarantees that produced binary code matches security properties intended
by the sources; I believe Richard Sandiford raised this previously).


OK, so I actually cover this at the end of the section; Richard's point 
AFAICT was about hardening, which I added another note for to make it 
explicit that missed hardening does not constitute a CVE-worthy threat:


As a result, the only case for a potential security issue in the
compiler is when it generates vulnerable application code for
trusted input source code that conforms to the relevant
programming standard or to extensions documented as supported by GCC,
where the algorithm expressed in the source code does not have the
vulnerability.  The output application code could be considered
vulnerable if it produces an actual vulnerability in the target
application, specifically in the following cases:

- The application dereferences an invalid memory location despite
  the application sources being valid.
- The application reads from or writes to a valid but incorrect
  memory location, resulting in an information integrity issue or an
  information leak.
- The application ends up running in an infinite loop or with
  severe degradation in performance despite the input sources having
  no such issue, resulting in a Denial of Service.  Note that
  correct but non-performant code is not a security issue candidate;
  this only applies to incorrect code that may result in performance
  degradation severe enough to amount to a denial of service.
- The application crashes due to the generated incorrect code,
  resulting in a Denial of Service.


Re: [RFC] GCC Security policy

2023-08-15 Thread Paul Koning via Gcc-patches



> On Aug 15, 2023, at 10:07 AM, Alexander Monakov  wrote:
> 
> 
> On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:
> 
>> Does this as the first paragraph address your concerns:
> 
> Thanks, this is nicer (see notes below). My main concern is that we shouldn't
> pretend there's some method of verifying that arbitrary source code is "safe"
> to pass to an unsandboxed compiler, nor should we push the responsibility of
> doing that on users.

Perhaps, but clearly the compiler can't do it ("Halting problem"...) so it has 
to be clear that the solution must be outside the compiler.  

paul



Re: [RFC] GCC Security policy

2023-08-15 Thread Alexander Monakov


On Tue, 15 Aug 2023, Siddhesh Poyarekar wrote:

> Does this as the first paragraph address your concerns:

Thanks, this is nicer (see notes below). My main concern is that we shouldn't
pretend there's some method of verifying that arbitrary source code is "safe"
to pass to an unsandboxed compiler, nor should we push the responsibility of
doing that on users.

> The compiler driver processes source code, invokes other programs such as the
> assembler and linker and generates the output result, which may be assembly
> code or machine code.  It is necessary that all source code inputs to the
> compiler are trusted, since it is impossible for the driver to validate input
> source code for safety.

The statement begins with "It is necessary", but the next statement offers
an alternative in case the code is untrusted. This is a contradiction.
Is it necessary or not in the end?

I'd suggest to drop this statement and instead make a brief note that
compiling crafted/untrusted sources can result in arbitrary code execution
and unconstrained resource consumption in the compiler.

> For untrusted code should compilation should be done
 ^^
 typo (spurious 'should')
 
> inside a sandboxed environment to ensure that it does not compromise the
> development environment.  Note that this still does not guarantee safety of
> the produced output programs and that such programs should still either be
> analyzed thoroughly for safety or run only inside a sandbox or an isolated
> system to avoid compromising the execution environment.

The last statement seems to be a new addition. It is too broad and again
makes a reference to analysis that appears quite theoretical. It might be
better to drop this (and instead talk in more specific terms about any
guarantees that produced binary code matches security properties intended
by the sources; I believe Richard Sandiford raised this previously).

Thanks.
Alexander


Re: [RFC] GCC Security policy

2023-08-15 Thread Siddhesh Poyarekar

On 2023-08-15 01:59, Alexander Monakov wrote:


On Mon, 14 Aug 2023, Siddhesh Poyarekar wrote:


There's no practical (programmatic) way to do such validation; it has to be a
manual audit, which is why source code passed to the compiler has to be
*trusted*.


No, I do not think that is a logical conclusion. What is the problem with
passing untrusted code to a sandboxed compiler?


Right, that's what we're essentially trying to convey in the security policy
text.  It doesn't go into mechanisms for securing execution (because that's
really beyond the scope of the *project's* policy IMO) but it states
unambiguously that input to the compiler must be trusted:

"""
   ... It is necessary that
 all source code inputs to the compiler are trusted, since it is
 impossible for the driver to validate input source code beyond
 conformance to a programming language standard...
"""


I see two issues with this. First, it reads as if people wishing to build
not-entirely-trusted sources need to seek some other compiler, as somehow
we seem to imply that sandboxing GCC is out of the question.

Second, I take issue with the last part of the quoted text (language
conformance): verifying standards conformance is also impossible
(consider UB that manifests only during linking or dynamic loading)
so GCC is only doing that on a best-effort basis with no guarantees.


Does this as the first paragraph address your concerns:

The compiler driver processes source code, invokes other programs such 
as the assembler and linker and generates the output result, which may 
be assembly code or machine code.  It is necessary that all source code 
inputs to the compiler are trusted, since it is impossible for the 
driver to validate input source code for safety.  For untrusted code 
should compilation should be done inside a sandboxed environment to 
ensure that it does not compromise the development environment.  Note 
that this still does not guarantee safety of the produced output 
programs and that such programs should still either be analyzed 
thoroughly for safety or run only inside a sandbox or an isolated system 
to avoid compromising the execution environment.


Thanks,
Sid


Re: [RFC] GCC Security policy

2023-08-14 Thread Alexander Monakov


On Mon, 14 Aug 2023, Siddhesh Poyarekar wrote:

> There's no practical (programmatic) way to do such validation; it has to be a
> manual audit, which is why source code passed to the compiler has to be
> *trusted*.

No, I do not think that is a logical conclusion. What is the problem with
passing untrusted code to a sandboxed compiler?

> Right, that's what we're essentially trying to convey in the security policy
> text.  It doesn't go into mechanisms for securing execution (because that's
> really beyond the scope of the *project's* policy IMO) but it states
> unambiguously that input to the compiler must be trusted:
> 
> """
>   ... It is necessary that
> all source code inputs to the compiler are trusted, since it is
> impossible for the driver to validate input source code beyond
> conformance to a programming language standard...
> """

I see two issues with this. First, it reads as if people wishing to build
not-entirely-trusted sources need to seek some other compiler, as somehow
we seem to imply that sandboxing GCC is out of the question.

Second, I take issue with the last part of the quoted text (language
conformance): verifying standards conformance is also impossible
(consider UB that manifests only during linking or dynamic loading)
so GCC is only doing that on a best-effort basis with no guarantees.
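A classic sketch of what cannot be verified per translation unit:

/* tu1.c */
int counter;            /* defined with type int here ...  */

/* tu2.c */
extern long counter;    /* ... declared with type long here.  Each TU
                           is accepted on its own; the incompatible
                           types make the program undefined (C11
                           6.2.7p2), and the problem only exists once
                           the two are linked together.  */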

Alexander


Re: [RFC] GCC Security policy

2023-08-14 Thread Siddhesh Poyarekar

On 2023-08-14 17:16, Alexander Monakov wrote:


On Mon, 14 Aug 2023, Siddhesh Poyarekar wrote:


1. It makes it clear to users of the project the scope in which the project
could be used and what safety it could reasonably expect from the project.  In
the context of GCC for example, it cannot expect the compiler to do a safety
check of untrusted sources; the compiler will consider #include "/etc/passwd"
just as valid code as #include <stdio.h> and as a result, the onus is on the
user environment to validate the input sources for safety.


Whoa, no. We shouldn't make such statements unless we are prepared to explain
to users how such validation can be practically implemented, which I'm sure
we cannot in this case, due to future extensions such as the #embed directive,
and ability to obfuscate filenames using the preprocessor.


There's no practical (programmatic) way to do such validation; it has to 
be a manual audit, which is why source code passed to the compiler has 
to be *trusted*.



I think it would be more honest to say that crafted sources can result in
arbitrary code execution with the privileges of the user invoking the compiler,
and hence the operator may want to ensure that no sensitive data is available
to that user (via measures ranging from plain UNIX permissions, to chroots,
to virtual machines, to air-gapped computers, depending on threat model).


Right, that's what we're essentially trying to convey in the security 
policy text.  It doesn't go into mechanisms for securing execution 
(because that's really beyond the scope of the *project's* policy IMO) 
but it states unambiguously that input to the compiler must be trusted:


"""
  ... It is necessary that
all source code inputs to the compiler are trusted, since it is
impossible for the driver to validate input source code beyond
conformance to a programming language standard...
"""


Resource consumption is another good reason to sandbox compilers.


Agreed, we make that specific recommendation in the context of libgccjit.

Thanks,
Sid


Re: [RFC] GCC Security policy

2023-08-14 Thread Alexander Monakov


On Mon, 14 Aug 2023, Siddhesh Poyarekar wrote:

> 1. It makes it clear to users of the project the scope in which the project
> could be used and what safety it could reasonably expect from the project.  In
> the context of GCC for example, it cannot expect the compiler to do a safety
> check of untrusted sources; the compiler will consider #include "/etc/passwd"
> just as valid code as #include <stdio.h> and as a result, the onus is on the
> user environment to validate the input sources for safety.

Whoa, no. We shouldn't make such statements unless we are prepared to explain
to users how such validation can be practically implemented, which I'm sure
we cannot in this case, due to future extensions such as the #embed directive,
and ability to obfuscate filenames using the preprocessor.

I think it would be more honest to say that crafted sources can result in
arbitrary code execution with the privileges of the user invoking the compiler,
and hence the operator may want to ensure that no sensitive data is available
to that user (via measures ranging from plain UNIX permissions, to chroots,
to virtual machines, to air-gapped computers, depending on threat model).

Resource consumption is another good reason to sandbox compilers.
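As a rough sketch of the resource angle (plain POSIX rlimits; the limit
values are arbitrary placeholders, and this is nowhere near a full
sandbox):

#include <sys/types.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

/* Invoke "gcc -c untrusted.c" with a CPU and address-space budget, so
   a crafted input can at worst burn the budget, not the machine.  */
int
main (void)
{
  pid_t pid = fork ();
  if (pid == 0)
    {
      struct rlimit cpu = { .rlim_cur = 60, .rlim_max = 60 };
      struct rlimit mem = { .rlim_cur = 1UL << 30, .rlim_max = 1UL << 30 };
      setrlimit (RLIMIT_CPU, &cpu);   /* seconds of CPU time */
      setrlimit (RLIMIT_AS, &mem);    /* bytes of address space */
      execlp ("gcc", "gcc", "-c", "untrusted.c", (char *) NULL);
      _exit (127);                    /* exec failed */
    }
  int status;
  waitpid (pid, &status, 0);
  return WIFEXITED (status) ? WEXITSTATUS (status) : 1;
}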

Alexander


Re: [RFC] GCC Security policy

2023-08-14 Thread Siddhesh Poyarekar

On 2023-08-14 14:51, Richard Sandiford wrote:

I think it would help to clarify what the aim of the security policy is.
Specifically:

(1) What service do we want to provide to users by classifying one thing
 as a security bug and another thing as not a security bug?

(2) What service do we want to provide to the GNU community by the same
 classification?

I think it will be easier to agree on the classification if we first
agree on that.


I actually wanted to do a talk on this at the Cauldron this year and 
*then* propose this for the gcc community, but I guess we could do this 
early :)


So the core intent of a security policy for a project is to make clear 
the security stance of the project, specifying to the extent possible 
what kind of uses are considered safe and what kinds of bugs would be 
considered security issues in the context of those uses.


There are a few advantages of doing this:

1. It makes it clear to users of the project the scope in which the 
project could be used and what safety it could reasonably expect from 
the project.  In the context of GCC for example, it cannot expect the 
compiler to do a safety check of untrusted sources; the compiler will 
consider #include "/etc/passwd" just as valid code as #include <stdio.h> 
and as a result, the onus is on the user environment to validate the 
input sources for safety (see the sketch after this list).


2. It helps the security community (Mitre and other CNAs and security 
researchers) set correct expectations of the project so that they don't 
cry wolf for every segfault or ICE under the pretext that code could 
presumably be run as a service somehow and hence result in a "DoS".


3. This in turn helps stave off spurious CVE submissions that cause 
needless churn in downstream distributions.  LLVM is already starting to 
see this[1] and it's only a matter of time before people start doing 
this for GCC.


4. It helps make a distinction between important bugs and security bugs; 
they're often conflated as one and the same thing.  Security bugs are 
special because they require different handling from those that do not 
have a security impact, regardless of their actual importance. 
Unfortunately one of the reasons they're special is that there's a 
bunch of (pretty dumb) automation out there that rings alarm bells on 
every single CVE.  Without a clear understanding of the context under 
which a project can be used, these alarm bells can be made unreasonably 
loud (due to incorrect scoring, see the LLVM CVE for instance; just one 
element in that vector changes the score from 0.0 to 5.5), causing 
needless churn in not just the code base but in downstream releases and 
end user environments.


5. This exercise is also a great start in developing an understanding of 
which parts in GCC are security sensitive and in what sense.  Runtime 
libraries for example have a direct impact on application security. 
Compiler impact is a little less direct.  Hardening features have 
another effect, but it's more mitigation-oriented than direct safety. 
This also informs us about the impact of various project actions such as 
bundling third-party libraries and development and maintenance of 
tooling within GCC and will hopefully guide policies around those practices.
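To make the example in 1. concrete, here is a contrived sketch of such
an input (the file name untrusted.c is made up):

/* untrusted.c: a perfectly "valid" input as far as the driver is
   concerned.  The preprocessor pastes any file the invoking user can
   read into the translation unit, and those lines then surface in
   preprocessed output (gcc -E) and in error diagnostics.  */
#include "/etc/passwd"

No amount of cleverness in the driver can tell that this inclusion is
malicious; only the invoking environment can.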


I hope this is a sufficient start.  We don't necessarily want to get 
into the business of acknowledging or rejecting security issues as 
upstream at the moment (but see also the CNA discussion[2] of what we 
intend to do in that space for glibc) but having uniform upstream 
guidelines would be helpful to researchers as well as downstream 
consumers to help decide what constitutes a security issue.


Thanks,
Sid

[1] https://nvd.nist.gov/vuln/detail/CVE-2023-29932
[2] 
https://inbox.sourceware.org/libc-alpha/1a44f25a-5aa3-28b7-1ecb-b3991d44c...@gotplt.org/T/


Re: [RFC] GCC Security policy

2023-08-14 Thread Richard Sandiford via Gcc-patches
I think it would help to clarify what the aim of the security policy is.
Specifically:

(1) What service do we want to provide to users by classifying one thing
as a security bug and another thing as not a security bug?

(2) What service do we want to provide to the GNU community by the same
classification?

I think it will be easier to agree on the classification if we first
agree on that.

Siddhesh Poyarekar  writes:
> Hi,
>
> Here's the updated draft of the top part of the security policy with all 
> of the recommendations incorporated.
>
> Thanks,
> Sid
>
>
> What is a GCC security bug?
> ===
>
>  A security bug is one that threatens the security of a system or
>  network, or might compromise the security of data stored on it.
>  In the context of GCC there are multiple ways in which this might
>  happen and they're detailed below.
>
> Compiler drivers, programs, libgccjit and support libraries
> ---
>
>  The compiler driver processes source code, invokes other programs
>  such as the assembler and linker and generates the output result,
>  which may be assembly code or machine code.  It is necessary that
>  all source code inputs to the compiler are trusted, since it is
>  impossible for the driver to validate input source code beyond
>  conformance to a programming language standard.
>
>  The GCC JIT implementation, libgccjit, is intended to be plugged
>  into applications to translate input source code in the application
>  context.  Limitations that apply to the compiler
>  driver, apply here too in terms of sanitizing inputs, so it is
>  recommended that inputs are either sanitized by an external program
>  to allow only trusted, safe execution in the context of the
>  application or the JIT execution context is appropriately sandboxed
>  to contain the effects of any bugs in the JIT or its generated code
>  to the sandboxed environment.
>
>  Support libraries such as libiberty, libcc1, libvtv and libcpp have
>  been developed separately to share code with other tools such as
>  binutils and gdb.  These libraries again have similar challenges to
>  compiler drivers.  While they are expected to be robust against
>  arbitrary input, they should only be used with trusted inputs.
>
>  Libraries such as zlib that are bundled into GCC to build it will be
>  treated the same as the compiler drivers and programs as far as
>  security coverage is concerned.  However if you find an issue in
>  these libraries independent of their use in GCC, you should reach
>  out to their upstream projects to report them.
>
>  As a result, the only case for a potential security issue in all
>  these cases is when it ends up generating vulnerable output for
>  valid input source code.
>
>  More specifically, the only case for a potential security issue in the
>  compiler is when it generates vulnerable application code for
>  trusted input source code that conforms to the relevant
>  programming standard or to extensions documented as supported by GCC,
>  where the algorithm expressed in the source code does not have the
>  vulnerability.  The output application code could be considered
>  vulnerable if it produces an actual vulnerability in the target
>  application, specifically in the following cases:
>
>  - The application dereferences an invalid memory location despite
>the application sources being valid.
>  - The application reads from or writes to a valid but incorrect
>memory location, resulting in an information integrity issue or an
>information leak.
>  - The application ends up running in an infinite loop or with
>severe degradation in performance despite the input sources having
>no such issue, resulting in a Denial of Service.  Note that
>correct but non-performant code is not a security issue candidate;
>this only applies to incorrect code that may result in performance
>degradation severe enough to amount to a denial of service.
>  - The application crashes due to the generated incorrect code,
>resulting in a Denial of Service.

One difficulty is that wrong-code bugs are rarely confined to
a particular source code structure.  Something that causes a
miscompilation of a bounds check could later be discovered to cause a
miscompilation of something that is less obviously security-sensitive.
Or the same thing could happen in reverse.  And it's common for the
same bug to be reported multiple times, against different testcases.
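A hypothetical illustration of that application-dependence:

#include <stddef.h>

/* Suppose a buggy pass wrongly proves the bounds check dead and
   deletes it.  The same miscompilation is harmless in one program
   and an out-of-bounds read with attacker-controlled 'i' in another,
   which is what makes a per-bug security classification so hard.  */
int
element (const int *a, size_t n, size_t i)
{
  if (i >= n)      /* the check a wrong-code bug might elide */
    return -1;
  return a[i];
}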

The proposal says that certain kinds of wrong code could be a security
bug.  But what will be the criteria for deciding whether a wrong code
bug that *could* be classified as a security bug is in fact a security
bug?  Does someone have to show that at least one 

Re: [RFC] GCC Security policy

2023-08-14 Thread Siddhesh Poyarekar

Hi,

Here's the updated draft of the top part of the security policy with all 
of the recommendations incorporated.


Thanks,
Sid


What is a GCC security bug?
===

A security bug is one that threatens the security of a system or
network, or might compromise the security of data stored on it.
In the context of GCC there are multiple ways in which this might
happen and they're detailed below.

Compiler drivers, programs, libgccjit and support libraries
---

The compiler driver processes source code, invokes other programs
such as the assembler and linker and generates the output result,
which may be assembly code or machine code.  It is necessary that
all source code inputs to the compiler are trusted, since it is
impossible for the driver to validate input source code beyond
conformance to a programming language standard.

The GCC JIT implementation, libgccjit, is intended to be plugged
into applications to translate input source code in the application
context.  Limitations that apply to the compiler
driver, apply here too in terms of sanitizing inputs, so it is
recommended that inputs are either sanitized by an external program
to allow only trusted, safe execution in the context of the
application or the JIT execution context is appropriately sandboxed
to contain the effects of any bugs in the JIT or its generated code
to the sandboxed environment.

Support libraries such as libiberty, libcc1, libvtv and libcpp have
been developed separately to share code with other tools such as
binutils and gdb.  These libraries again have similar challenges to
compiler drivers.  While they are expected to be robust against
arbitrary input, they should only be used with trusted inputs.

Libraries such as zlib that are bundled into GCC to build it will be
treated the same as the compiler drivers and programs as far as
security coverage is concerned.  However if you find an issue in
these libraries independent of their use in GCC, you should reach
out to their upstream projects to report them.

As a result, the only case for a potential security issue in all
these cases is when it ends up generating vulnerable output for
valid input source code.

More specifically, the only case for a potential security issue in the
compiler is when it generates vulnerable application code for
trusted input source code that conforms to the relevant
programming standard or to extensions documented as supported by GCC,
where the algorithm expressed in the source code does not have the
vulnerability.  The output application code could be considered
vulnerable if it produces an actual vulnerability in the target
application, specifically in the following cases:

- The application dereferences an invalid memory location despite
  the application sources being valid.
- The application reads from or writes to a valid but incorrect
  memory location, resulting in an information integrity issue or an
  information leak.
- The application ends up running in an infinite loop or with
  severe degradation in performance despite the input sources having
  no such issue, resulting in a Denial of Service.  Note that
  correct but non-performant code is not a security issue candidate;
  this only applies to incorrect code that may result in performance
  degradation severe enough to amount to a denial of service.
- The application crashes due to the generated incorrect code,
  resulting in a Denial of Service.

Language runtime libraries
--

GCC also builds and distributes libraries that are intended to be
used widely to implement runtime support for various programming
languages.  These include the following:

* libada
* libatomic
* libbacktrace
* libcc1
* libcody
* libcpp
* libdecnumber
* libffi
* libgcc
* libgfortran
* libgm2
* libgo
* libgomp
* libiberty
* libitm
* libobjc
* libphobos
* libquadmath
* libsanitizer
* libssp
* libstdc++

These libraries are intended to be used in arbitrary contexts and as
a result, bugs in these libraries may be evaluated for security
impact.  However, some of these libraries, e.g. libgo, libphobos,
etc., are not maintained in the GCC project, so the GCC
project may not be the correct point of contact for them.  You are
encouraged to look at README files within those library directories
to locate the canonical security contact point for those projects
and include them in the report.  Once the issue is fixed in the
upstream project, the fix will be synced into GCC in a future
release.

Most security vulnerabilities in these runtime libraries arise when
an application 

Re: [RFC] GCC Security policy

2023-08-11 Thread Siddhesh Poyarekar

On 2023-08-11 11:12, David Edelsohn wrote:
The text above states "bugs in these libraries may be evaluated for 
security impact", but there is no comment about the criteria for a 
security impact, unlike the GLIBC SECURITY.md document.  The text seems 
to imply the "What is a security bug?" definitions from GLIBC, but the 
definitions are not explicitly stated in the GCC Security policy.


Should this "Language runtime libraries" section include some of the 
GLIBC "What is a security bug?" text or should the GCC "What is a 
security bug?" section earlier in this document include the text with a 
qualification that issues like buffer overflow, memory leaks, 
information disclosure, etc. specifically apply to "Language runtime 
libraries" and not all components of GCC?


Yes, that makes sense.  This part will likely evolve though, much like 
the glibc one did, based on reports we get over time.  I'll work it in 
and post an updated draft.


Thanks,
Sid


Re: [RFC] GCC Security policy

2023-08-11 Thread Siddhesh Poyarekar

On 2023-08-11 11:09, Paul Koning wrote:




On Aug 11, 2023, at 10:36 AM, Siddhesh Poyarekar  wrote:

On 2023-08-10 14:50, Siddhesh Poyarekar wrote:

   As a result, the only case for a potential security issue in all
   these cases is when it ends up generating vulnerable output for
   valid input source code.


I think this leaves open the interpretation "every wrong code bug
is potentially a security bug".  I suppose that's true in a trite sense,
but not in a useful sense.  As others said earlier in the thread,
whether a wrong code bug in GCC leads to a security bug in the object
code is too application-dependent to be a useful classification for GCC.

I think we should explicitly say that we don't generally consider wrong
code bugs to be security bugs.  Leaving it implicit is bound to lead
to misunderstanding.

I see what you mean, but the context-dependence of a bug is something GCC will 
have to deal with, similar to how libraries have to deal with bugs.  But I 
agree this probably needs some more expansion.  Let me try and come up with 
something more detailed for that last paragraph.


How's this:

As a result, the only case for a potential security issue in the compiler is 
when it generates vulnerable application code for valid, trusted input source 
code.  The output application code could be considered vulnerable if it 
produces an actual vulnerability in the target application, specifically in the 
following cases:


You might make it explicit that we're talking about wrong code errors here -- 
in other words, the source code is correct (conforms to the standard) and the 
algorithm expressed in the source code does not have a vulnerability, but the 
generated code has semantics that differ from those of the source code such 
that it does have a vulnerability.


Ack, thanks for the suggestion.




- The application dereferences an invalid memory location despite the 
application sources being valid.

- The application reads from or writes to a valid but incorrect memory 
location, resulting in an information integrity issue or an information leak.

- The application ends up running in an infinite loop or with severe 
degradation in performance despite the input sources having no such issue, 
resulting in a Denial of Service.  Note that correct but non-performant code is 
not a security issue candidate; this only applies to incorrect code that may 
result in performance degradation.


The last sentence somewhat contradicts the preceding one.  Perhaps "...may result in 
performance degradation severe enough to amount to a denial of service".


Ack, will fix that up, thanks.

Sid


Re: [RFC] GCC Security policy

2023-08-11 Thread David Edelsohn via Gcc-patches
On Wed, Aug 9, 2023 at 1:33 PM Siddhesh Poyarekar 
wrote:

> On 2023-08-08 10:30, Siddhesh Poyarekar wrote:
> >> Do you have a suggestion for the language to address libgcc,
> >> libstdc++, etc. and libiberty, libbacktrace, etc.?
> >
> > I'll work on this a bit and share a draft.
>
> Hi David,
>
> Here's what I came up with for different parts of GCC, including the
> runtime libraries.  Over time we may find that specific parts of runtime
> libraries simply cannot be used safely in some contexts and flag that.
>
> Sid
>
> """
> What is a GCC security bug?
> ===
>
>  A security bug is one that threatens the security of a system or
>  network, or might compromise the security of data stored on it.
>  In the context of GCC there are multiple ways in which this might
>  happen and they're detailed below.
>
> Compiler drivers, programs, libgccjit and support libraries
> ---
>
>  The compiler driver processes source code, invokes other programs
>  such as the assembler and linker and generates the output result,
>  which may be assembly code or machine code.  It is necessary that
>  all source code inputs to the compiler are trusted, since it is
>  impossible for the driver to validate input source code beyond
>  conformance to a programming language standard.
>
>  The GCC JIT implementation, libgccjit, is intended to be plugged
>  into applications to translate input source code in the application
>  context.  Limitations that apply to the compiler
>  driver, apply here too in terms of sanitizing inputs, so it is
>  recommended that inputs are either sanitized by an external program
>  to allow only trusted, safe execution in the context of the
>  application or the JIT execution context is appropriately sandboxed
>  to contain the effects of any bugs in the JIT or its generated code
>  to the sandboxed environment.
>
>  Support libraries such as libiberty, libcc1, libvtv and libcpp have
>  been developed separately to share code with other tools such as
>  binutils and gdb.  These libraries again have similar challenges to
>  compiler drivers.  While they are expected to be robust against
>  arbitrary input, they should only be used with trusted inputs.
>
>  Libraries such as zlib and libffi that are bundled into GCC to build it
>  will be treated the same as the compiler drivers and programs as far
>  as security coverage is concerned.
>
>  As a result, the only case for a potential security issue in all
>  these cases is when it ends up generating vulnerable output for
>  valid input source code.
>
> Language runtime libraries
> --
>
>  GCC also builds and distributes libraries that are intended to be
>  used widely to implement runtime support for various programming
>  languages.  These include the following:
>
>  * libada
>  * libatomic
>  * libbacktrace
>  * libcc1
>  * libcody
>  * libcpp
>  * libdecnumber
>  * libgcc
>  * libgfortran
>  * libgm2
>  * libgo
>  * libgomp
>  * libiberty
>  * libitm
>  * libobjc
>  * libphobos
>  * libquadmath
>  * libssp
>  * libstdc++
>
>  These libraries are intended to be used in arbitrary contexts and as
>  a result, bugs in these libraries may be evaluated for security
>  impact.  However, some of these libraries, e.g. libgo, libphobos,
>  etc., are not maintained in the GCC project, so the GCC
>  project may not be the correct point of contact for them.  You are
>  encouraged to look at README files within those library directories
>  to locate the canonical security contact point for those projects.
>

Hi, Sid

The text above states "bugs in these libraries may be evaluated for
security impact", but there is no comment about the criteria for a security
impact, unlike the GLIBC SECURITY.md document.  The text seems to imply the
"What is a security bug?" definitions from GLIBC, but the definitions are
not explicitly stated in the GCC Security policy.

Should this "Language runtime libraries" section include some of the GLIBC
"What is a security bug?" text or should the GCC "What is a security bug?"
section earlier in this document include the text with a qualification that
issues like buffer overflow, memory leaks, information disclosure, etc.
specifically apply to "Language runtime libraries" and not all components
of GCC?

Thanks, David


>
> Diagnostic libraries
> 
>
>  The sanitizer library bundled in GCC is intended to be used in
>  diagnostic cases and not intended for use in sensitive environments.
>  As a result, bugs in the sanitizer will not be considered security
>  sensitive.
>
> GCC plugins
> ---
>
>  It should be noted that GCC may execute arbitrary code loaded by a

Re: [RFC] GCC Security policy

2023-08-11 Thread Paul Koning via Gcc-patches



> On Aug 11, 2023, at 10:36 AM, Siddhesh Poyarekar  wrote:
> 
> On 2023-08-10 14:50, Siddhesh Poyarekar wrote:
   As a result, the only case for a potential security issue in all
   these cases is when it ends up generating vulnerable output for
   valid input source code.
>>> 
>>> I think this leaves open the interpretation "every wrong code bug
>>> is potentially a security bug".  I suppose that's true in a trite sense,
>>> but not in a useful sense.  As others said earlier in the thread,
>>> whether a wrong code bug in GCC leads to a security bug in the object
>>> code is too application-dependent to be a useful classification for GCC.
>>> 
>>> I think we should explicitly say that we don't generally consider wrong
>>> code bugs to be security bugs.  Leaving it implicit is bound to lead
>>> to misunderstanding.
>> I see what you mean, but the context-dependence of a bug is something GCC 
>> will have to deal with, similar to how libraries have to deal with bugs.  
>> But I agree this probably needs some more expansion.  Let me try and come up 
>> with something more detailed for that last paragraph.
> 
> How's this:
> 
> As a result, the only case for a potential security issue in the compiler is 
> when it generates vulnerable application code for valid, trusted input source 
> code.  The output application code could be considered vulnerable if it 
> produces an actual vulnerability in the target application, specifically in 
> the following cases:

You might make it explicit that we're talking about wrong code errors here -- 
in other words, the source code is correct (conforms to the standard) and the 
algorithm expressed in the source code does not have a vulnerability, but the 
generated code has semantics that differ from those of the source code such 
that it does have a vulnerability.

> - The application dereferences an invalid memory location despite the 
> application sources being valid.
> 
> - The application reads from or writes to a valid but incorrect memory 
> location, resulting in an information integrity issue or an information leak.
> 
> - The application ends up running in an infinite loop or with severe 
> degradation in performance despite the input sources having no such issue, 
> resulting in a Denial of Service.  Note that correct but non-performant code 
> is not a security issue candidate; this only applies to incorrect code that 
> may result in performance degradation.

The last sentence somewhat contradicts the preceding one.  Perhaps "...may 
result in performance degradation severe enough to amount to a denial of 
service".

> - The application crashes due to the generated incorrect code, resulting in a 
> Denial of Service.

paul



Re: [RFC] GCC Security policy

2023-08-11 Thread Siddhesh Poyarekar

On 2023-08-10 14:50, Siddhesh Poyarekar wrote:

  As a result, the only case for a potential security issue in all
  these cases is when it ends up generating vulnerable output for
  valid input source code.


I think this leaves open the interpretation "every wrong code bug
is potentially a security bug".  I suppose that's true in a trite sense,
but not in a useful sense.  As others said earlier in the thread,
whether a wrong code bug in GCC leads to a security bug in the object
code is too application-dependent to be a useful classification for GCC.

I think we should explicitly say that we don't generally consider wrong
code bugs to be security bugs.  Leaving it implicit is bound to lead
to misunderstanding.


I see what you mean, but the context-dependence of a bug is something 
GCC will have to deal with, similar to how libraries have to deal with 
bugs.  But I agree this probably needs some more expansion.  Let me try 
and come up with something more detailed for that last paragraph.


How's this:

As a result, the only case for a potential security issue in the 
compiler is when it generates vulnerable application code for valid, 
trusted input source code.  The output application code could be 
considered vulnerable if it produces an actual vulnerability in the 
target application, specifically in the following cases:


- The application dereferences an invalid memory location despite the 
application sources being valid.


- The application reads from or writes to a valid but incorrect memory 
location, resulting in an information integrity issue or an information 
leak.


- The application ends up running in an infinite loop or with severe 
degradation in performance despite the input sources having no such 
issue, resulting in a Denial of Service.  Note that correct but 
non-performant code is not a security issue candidate; this only applies 
to incorrect code that may result in performance degradation.


- The application crashes due to the generated incorrect code, resulting 
in a Denial of Service.




Re: [RFC] GCC Security policy

2023-08-10 Thread Richard Biener via Gcc-patches



> On 10.08.2023 at 20:28, Richard Sandiford wrote:
> 
> Siddhesh Poyarekar  writes:
>> On 2023-08-08 10:30, Siddhesh Poyarekar wrote:
 Do you have a suggestion for the language to address libgcc, 
 libstdc++, etc. and libiberty, libbacktrace, etc.?
>>> 
>>> I'll work on this a bit and share a draft.
>> 
>> Hi David,
>> 
>> Here's what I came up with for different parts of GCC, including the 
>> runtime libraries.  Over time we may find that specific parts of runtime 
>> libraries simply cannot be used safely in some contexts and flag that.
>> 
>> Sid
>> 
>> """
>> What is a GCC security bug?
>> ===
>> 
>> A security bug is one that threatens the security of a system or
>> network, or might compromise the security of data stored on it.
>> In the context of GCC there are multiple ways in which this might
>> happen and they're detailed below.
>> 
>> Compiler drivers, programs, libgccjit and support libraries
>> ---
>> 
>> The compiler driver processes source code, invokes other programs
>> such as the assembler and linker and generates the output result,
>> which may be assembly code or machine code.  It is necessary that
>> all source code inputs to the compiler are trusted, since it is
>> impossible for the driver to validate input source code beyond
>> conformance to a programming language standard.
>> 
>> The GCC JIT implementation, libgccjit, is intended to be plugged
>> into applications to translate input source code in the application
>> context.  Limitations that apply to the compiler
>> driver, apply here too in terms of sanitizing inputs, so it is
>> recommended that inputs are either sanitized by an external program
>> to allow only trusted, safe execution in the context of the
>> application or the JIT execution context is appropriately sandboxed
>> to contain the effects of any bugs in the JIT or its generated code
>> to the sandboxed environment.
>> 
>> Support libraries such as libiberty, libcc1, libvtv and libcpp have
>> been developed separately to share code with other tools such as
>> binutils and gdb.  These libraries again have similar challenges to
>> compiler drivers.  While they are expected to be robust against
>> arbitrary input, they should only be used with trusted inputs.
>> 
>> Libraries such as zlib and libffi that are bundled into GCC to build it
>> will be treated the same as the compiler drivers and programs as far
>> as security coverage is concerned.
>> 
>> As a result, the only case for a potential security issue in all
>> these cases is when it ends up generating vulnerable output for
>> valid input source code.
> 
> I think this leaves open the interpretation "every wrong code bug
> is potentially a security bug".  I suppose that's true in a trite sense,
> but not in a useful sense.  As others said earlier in the thread,
> whether a wrong code bug in GCC leads to a security bug in the object
> code is too application-dependent to be a useful classification for GCC.
> 
> I think we should explicitly say that we don't generally consider wrong
> code bugs to be security bugs.  Leaving it implicit is bound to lead
> to misunderstanding.

In some sense the security bug is never in GCC itself but in the consumer,
which is what you need to be able to exploit.

Richard 


> There's another case that I think should be highlighted explicitly:
> GCC provides various security-hardening features.  I think any failure
> of those features to act as documented is potentially a security bug.
> Failure to follow reasonable expectations (even if not documented)
> might sometimes be a security bug too.
> 
> Thanks,
> Richard
>> 
>> Language runtime libraries
>> --
>> 
>> GCC also builds and distributes libraries that are intended to be
>> used widely to implement runtime support for various programming
>> languages.  These include the following:
>> 
>> * libada
>> * libatomic
>> * libbacktrace
>> * libcc1
>> * libcody
>> * libcpp
>> * libdecnumber
>> * libgcc
>> * libgfortran
>> * libgm2
>> * libgo
>> * libgomp
>> * libiberty
>> * libitm
>> * libobjc
>> * libphobos
>> * libquadmath
>> * libssp
>> * libstdc++
>> 
>> These libraries are intended to be used in arbitrary contexts and as
>> a result, bugs in these libraries may be evaluated for security
>> impact.  However, some of these libraries, e.g. libgo, libphobos,
>> etc., are not maintained in the GCC project, so the GCC
>> project may not be the correct point of contact for them.  You are
>> encouraged to look at README files within those library directories
>> to locate the canonical security contact point for those projects.
>> 
>> Diagnostic libraries
>> 
>> 
>> 

Re: [RFC] GCC Security policy

2023-08-10 Thread Siddhesh Poyarekar

On 2023-08-10 14:28, Richard Sandiford wrote:

Siddhesh Poyarekar  writes:

On 2023-08-08 10:30, Siddhesh Poyarekar wrote:

Do you have a suggestion for the language to address libgcc,
libstdc++, etc. and libiberty, libbacktrace, etc.?


I'll work on this a bit and share a draft.


Hi David,

Here's what I came up with for different parts of GCC, including the
runtime libraries.  Over time we may find that specific parts of runtime
libraries simply cannot be used safely in some contexts and flag that.

Sid

"""
What is a GCC security bug?
===

  A security bug is one that threatens the security of a system or
  network, or might compromise the security of data stored on it.
  In the context of GCC there are multiple ways in which this might
  happen and they're detailed below.

Compiler drivers, programs, libgccjit and support libraries
---

  The compiler driver processes source code, invokes other programs
  such as the assembler and linker and generates the output result,
  which may be assembly code or machine code.  It is necessary that
  all source code inputs to the compiler are trusted, since it is
  impossible for the driver to validate input source code beyond
  conformance to a programming language standard.

  The GCC JIT implementation, libgccjit, is intended to be plugged
  into applications to translate input source code in the application
  context.  Limitations that apply to the compiler
  driver, apply here too in terms of sanitizing inputs, so it is
  recommended that inputs are either sanitized by an external program
  to allow only trusted, safe execution in the context of the
  application or the JIT execution context is appropriately sandboxed
  to contain the effects of any bugs in the JIT or its generated code
  to the sandboxed environment.

  Support libraries such as libiberty, libcc1, libvtv and libcpp have
  been developed separately to share code with other tools such as
  binutils and gdb.  These libraries again have similar challenges to
  compiler drivers.  While they are expected to be robust against
  arbitrary input, they should only be used with trusted inputs.

  Libraries such as zlib and libffi that are bundled into GCC to build it
  will be treated the same as the compiler drivers and programs as far
  as security coverage is concerned.

  As a result, the only case for a potential security issue in all
  these cases is when it ends up generating vulnerable output for
  valid input source code.


I think this leaves open the interpretation "every wrong code bug
is potentially a security bug".  I suppose that's true in a trite sense,
but not in a useful sense.  As others said earlier in the thread,
whether a wrong code bug in GCC leads to a security bug in the object
code is too application-dependent to be a useful classification for GCC.

I think we should explicitly say that we don't generally consider wrong
code bugs to be security bugs.  Leaving it implicit is bound to lead
to misunderstanding.


I see what you mean, but the context-dependence of a bug is something 
GCC will have to deal with, similar to how libraries have to deal with 
bugs.  But I agree this probably needs some more expansion.  Let me try 
and come up with something more detailed for that last paragraph.



There's another case that I think should be highlighted explicitly:
GCC provides various security-hardening features.  I think any failure
of those features to act as documented is potentially a security bug.
Failure to follow reasonable expectations (even if not documented)
might sometimes be a security bug too.


Missed hardening in general does not put systems at immediate risk, so 
it's not considered CVE-worthy.  In fact, when bugs are evaluated for 
security risk at a source level (e.g. when NIST does it), hardening does 
not come into the picture at all.  It's only at the product level that 
hardening features are accounted for, e.g. where -fstack-protector would 
reduce the seriousness of a stack buffer overflow, and even there one 
must do an analysis to see if the generated code actually mitigated the 
overflow using the stack protector canary.
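
To make that concrete, here is a small illustration (my own example, not 
from any real report): a classic stack buffer overflow that the stack 
protector turns from silent corruption into a clean abort, but only when 
the canary actually guards the overflowed frame.

  /* Build and run both ways (some distributions already enable the
     protector by default):
       gcc overflow.c && ./a.out
       gcc -fstack-protector-strong overflow.c && ./a.out
     The hardened build aborts with "*** stack smashing detected ***"
     instead of silently corrupting the return address.  */
  #include <stdio.h>
  #include <string.h>

  static void copy (const char *src)
  {
    char buf[8];
    strcpy (buf, src);   /* overflows buf for any src of 8+ chars */
    puts (buf);          /* keep buf live so it is not optimized away */
  }

  int main (void)
  {
    copy ("definitely longer than eight bytes");
    return 0;
  }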


Thanks,
Sid


Re: [RFC] GCC Security policy

2023-08-10 Thread Richard Sandiford via Gcc-patches
Siddhesh Poyarekar  writes:
> On 2023-08-08 10:30, Siddhesh Poyarekar wrote:
>>> Do you have a suggestion for the language to address libgcc, 
>>> libstdc++, etc. and libiberty, libbacktrace, etc.?
>> 
>> I'll work on this a bit and share a draft.
>
> Hi David,
>
> Here's what I came up with for different parts of GCC, including the 
> runtime libraries.  Over time we may find that specific parts of runtime 
> libraries simply cannot be used safely in some contexts and flag that.
>
> Sid
>
> """
> What is a GCC security bug?
> ===
>
>  A security bug is one that threatens the security of a system or
>  network, or might compromise the security of data stored on it.
>  In the context of GCC there are multiple ways in which this might
>  happen and they're detailed below.
>
> Compiler drivers, programs, libgccjit and support libraries
> ---
>
>  The compiler driver processes source code, invokes other programs
>  such as the assembler and linker and generates the output result,
>  which may be assembly code or machine code.  It is necessary that
>  all source code inputs to the compiler are trusted, since it is
>  impossible for the driver to validate input source code beyond
>  conformance to a programming language standard.
>
>  The GCC JIT implementation, libgccjit, is intended to be plugged
>  into applications to translate input source code in the application
>  context.  Limitations that apply to the compiler driver apply
>  here too in terms of sanitizing inputs, so it is
>  recommended that inputs are either sanitized by an external program
>  to allow only trusted, safe execution in the context of the
>  application or the JIT execution context is appropriately sandboxed
>  to contain the effects of any bugs in the JIT or its generated code
>  to the sandboxed environment.
>
>  Support libraries such as libiberty, libcc1, libvtv and libcpp have
>  been developed separately to share code with other tools such as
>  binutils and gdb.  These libraries again have similar challenges to
>  compiler drivers.  While they are expected to be robust against
>  arbitrary input, they should only be used with trusted inputs.
>
>  Libraries such as zlib and libffi that are bundled into GCC to build it
>  will be treated the same as the compiler drivers and programs as far
>  as security coverage is concerned.
>
>  As a result, the only case for a potential security issue in all
>  these cases is when it ends up generating vulnerable output for
>  valid input source code.

I think this leaves open the interpretation "every wrong code bug
is potentially a security bug".  I suppose that's true in a trite sense,
but not in a useful sense.  As others said earlier in the thread,
whether a wrong code bug in GCC leads to a security bug in the object
code is too application-dependent to be a useful classification for GCC.

I think we should explicitly say that we don't generally consider wrong
code bugs to be security bugs.  Leaving it implicit is bound to lead
to misunderstanding.

There's another case that I think should be highlighted explicitly:
GCC provides various security-hardening features.  I think any failure
of those features to act as documented is potentially a security bug.
Failure to follow reasonable expectations (even if not documented)
might sometimes be a security bug too.

Thanks,
Richard
>
> Language runtime libraries
> --
>
>  GCC also builds and distributes libraries that are intended to be
>  used widely to implement runtime support for various programming
>  languages.  These include the following:
>
>  * libada
>  * libatomic
>  * libbacktrace
>  * libcc1
>  * libcody
>  * libcpp
>  * libdecnumber
>  * libgcc
>  * libgfortran
>  * libgm2
>  * libgo
>  * libgomp
>  * libiberty
>  * libitm
>  * libobjc
>  * libphobos
>  * libquadmath
>  * libssp
>  * libstdc++
>
>  These libraries are intended to be used in arbitrary contexts and as
>  a result, bugs in these libraries may be evaluated for security
>  impact.  However, some of these libraries, e.g. libgo, libphobos,
> etc., are not maintained in the GCC project, so the GCC project
> may not be the correct point of contact for them.  You are
>  encouraged to look at README files within those library directories
>  to locate the canonical security contact point for those projects.
>
> Diagnostic libraries
> 
>
>  The sanitizer library bundled in GCC is intended to be used in
>  diagnostic cases and not intended for use in sensitive environments.
>  As a result, bugs in the sanitizer will not be considered security
>  sensitive.
>
> GCC plugins
> ---
>
>  It should 

Re: [RFC] GCC Security policy

2023-08-09 Thread Siddhesh Poyarekar

On 2023-08-09 14:17, David Edelsohn wrote:
On Wed, Aug 9, 2023 at 1:33 PM Siddhesh Poyarekar wrote:


On 2023-08-08 10:30, Siddhesh Poyarekar wrote:
 >> Do you have a suggestion for the language to address libgcc,
 >> libstdc++, etc. and libiberty, libbacktrace, etc.?
 >
 > I'll work on this a bit and share a draft.

Hi David,

Here's what I came up with for different parts of GCC, including the
runtime libraries.  Over time we may find that specific parts of runtime
libraries simply cannot be used safely in some contexts and flag that.

Sid


Hi, Sid

Thanks for iterating on this.


"""
What is a GCC security bug?
===

      A security bug is one that threatens the security of a system or
      network, or might compromise the security of data stored on it.
      In the context of GCC there are multiple ways in which this might
      happen and they're detailed below.

Compiler drivers, programs, libgccjit and support libraries
---

      The compiler driver processes source code, invokes other programs
      such as the assembler and linker and generates the output result,
      which may be assembly code or machine code.  It is necessary that
      all source code inputs to the compiler are trusted, since it is
      impossible for the driver to validate input source code beyond
      conformance to a programming language standard.

      The GCC JIT implementation, libgccjit, is intended to be plugged
      into applications to translate input source code in the application
      context.  Limitations that apply to the compiler driver apply here
      too in terms of sanitizing inputs, so it is recommended that inputs
      are either sanitized by an external program to allow only trusted,
      safe execution in the context of the application or the JIT
      execution context is appropriately sandboxed to contain the effects
      of any bugs in the JIT or its generated code to the sandboxed
      environment.

      Support libraries such as libiberty, libcc1, libvtv and libcpp have
      been developed separately to share code with other tools such as
      binutils and gdb.  These libraries again have similar challenges to
      compiler drivers.  While they are expected to be robust against
      arbitrary input, they should only be used with trusted inputs.

      Libraries such as zlib and libffi that are bundled into GCC to
      build it will be treated the same as the compiler drivers and
      programs as far as security coverage is concerned.


Should we direct people to the upstream projects for their security 
policies?


We bundle zlib and libffi so regardless of whether it's a security issue 
in those libraries (because security impact of memory safety bugs in 
general use libraries will be context dependent and hence get assigned 
CVEs more often than not), the context in gcc is well defined as a local 
unprivileged executable and hence not security-relevant.


That said, we could add something like:

However, if you find an issue in these libraries independent of their
use in GCC, you should reach out to their upstream projects to report
it.




      As a result, the only case for a potential security issue in all
      these cases is when it ends up generating vulnerable output for
      valid input source code.


Language runtime libraries
--

      GCC also builds and distributes libraries that are intended to be
      used widely to implement runtime support for various programming
      languages.  These include the following:

      * libada
      * libatomic
      * libbacktrace
      * libcc1
      * libcody
      * libcpp
      * libdecnumber
      * libgcc
      * libgfortran
      * libgm2
      * libgo
      * libgomp
      * libiberty
      * libitm
      * libobjc
      * libphobos
      * libquadmath
      * libssp
      * libstdc++

      These libraries are intended to be used in arbitrary contexts and
      as a result, bugs in these libraries may be evaluated for security
      impact.  However, some of these libraries, e.g. libgo, libphobos,
      etc., are not maintained in the GCC project, so the GCC project
      may not be the correct point of contact for them.  You are
      encouraged to look at README files within those library directories
      to locate the canonical security contact point for those projects.


As Richard mentioned, should GCC make a specific statement about the 
security policy / response for issues that are discovered and fixed in 
the upstream 

Re: [RFC] GCC Security policy

2023-08-09 Thread David Edelsohn via Gcc-patches
On Wed, Aug 9, 2023 at 1:33 PM Siddhesh Poyarekar wrote:

> On 2023-08-08 10:30, Siddhesh Poyarekar wrote:
> >> Do you have a suggestion for the language to address libgcc,
> >> libstdc++, etc. and libiberty, libbacktrace, etc.?
> >
> > I'll work on this a bit and share a draft.
>
> Hi David,
>
> Here's what I came up with for different parts of GCC, including the
> runtime libraries.  Over time we may find that specific parts of runtime
> libraries simply cannot be used safely in some contexts and flag that.
>
> Sid
>

Hi, Sid

Thanks for iterating on this.


>
> """
> What is a GCC security bug?
> ===
>
>  A security bug is one that threatens the security of a system or
>  network, or might compromise the security of data stored on it.
>  In the context of GCC there are multiple ways in which this might
>  happen and they're detailed below.
>
> Compiler drivers, programs, libgccjit and support libraries
> ---
>
>  The compiler driver processes source code, invokes other programs
>  such as the assembler and linker and generates the output result,
>  which may be assembly code or machine code.  It is necessary that
>  all source code inputs to the compiler are trusted, since it is
>  impossible for the driver to validate input source code beyond
>  conformance to a programming language standard.
>
>  The GCC JIT implementation, libgccjit, is intended to be plugged
>  into applications to translate input source code in the application
>  context.  Limitations that apply to the compiler driver apply
>  here too in terms of sanitizing inputs, so it is
>  recommended that inputs are either sanitized by an external program
>  to allow only trusted, safe execution in the context of the
>  application or the JIT execution context is appropriately sandboxed
>  to contain the effects of any bugs in the JIT or its generated code
>  to the sandboxed environment.
>
>  Support libraries such as libiberty, libcc1, libvtv and libcpp have
>  been developed separately to share code with other tools such as
>  binutils and gdb.  These libraries again have similar challenges to
>  compiler drivers.  While they are expected to be robust against
>  arbitrary input, they should only be used with trusted inputs.
>
>  Libraries such as zlib and libffi that are bundled into GCC to build it
>  will be treated the same as the compiler drivers and programs as far
>  as security coverage is concerned.
>

Should we direct people to the upstream projects for their security
policies?


>  As a result, the only case for a potential security issue in all
>  these cases is when it ends up generating vulnerable output for
>  valid input source code.


> Language runtime libraries
> --
>
>  GCC also builds and distributes libraries that are intended to be
>  used widely to implement runtime support for various programming
>  languages.  These include the following:
>
>  * libada
>  * libatomic
>  * libbacktrace
>  * libcc1
>  * libcody
>  * libcpp
>  * libdecnumber
>  * libgcc
>  * libgfortran
>  * libgm2
>  * libgo
>  * libgomp
>  * libiberty
>  * libitm
>  * libobjc
>  * libphobos
>  * libquadmath
>  * libssp
>  * libstdc++
>
>  These libraries are intended to be used in arbitrary contexts and as
>  a result, bugs in these libraries may be evaluated for security
>  impact.  However, some of these libraries, e.g. libgo, libphobos,
>  etc., are not maintained in the GCC project, so the GCC project
>  may not be the correct point of contact for them.  You are
>  encouraged to look at README files within those library directories
>  to locate the canonical security contact point for those projects.
>

As Richard mentioned, should GCC make a specific statement about the
security policy / response for issues that are discovered and fixed in the
upstream projects from which the GCC libraries are imported?


>
> Diagnostic libraries
> 
>
>  The sanitizer library bundled in GCC is intended to be used in
>  diagnostic cases and not intended for use in sensitive environments.
>  As a result, bugs in the sanitizer will not be considered security
>  sensitive.
>
> GCC plugins
> ---
>
>  It should be noted that GCC may execute arbitrary code loaded by a
>  user through the GCC plugin mechanism or through the system preloading
>  mechanism.  Such custom code should be vetted by the user for safety
>  as bugs exposed through such code will not be considered security
>  issues.
>

Thanks, David


Re: [RFC] GCC Security policy

2023-08-09 Thread Siddhesh Poyarekar

On 2023-08-08 10:30, Siddhesh Poyarekar wrote:
Do you have a suggestion for the language to address libgcc, 
libstdc++, etc. and libiberty, libbacktrace, etc.?


I'll work on this a bit and share a draft.


Hi David,

Here's what I came up with for different parts of GCC, including the 
runtime libraries.  Over time we may find that specific parts of runtime 
libraries simply cannot be used safely in some contexts and flag that.


Sid

"""
What is a GCC security bug?
===

A security bug is one that threatens the security of a system or
network, or might compromise the security of data stored on it.
In the context of GCC there are multiple ways in which this might
happen and they're detailed below.

Compiler drivers, programs, libgccjit and support libraries
---

The compiler driver processes source code, invokes other programs
such as the assembler and linker and generates the output result,
which may be assembly code or machine code.  It is necessary that
all source code inputs to the compiler are trusted, since it is
impossible for the driver to validate input source code beyond
conformance to a programming language standard.

The GCC JIT implementation, libgccjit, is intended to be plugged
into applications to translate input source code in the application
context.  Limitations that apply to the compiler driver apply
here too in terms of sanitizing inputs, so it is
recommended that inputs are either sanitized by an external program
to allow only trusted, safe execution in the context of the
application or the JIT execution context is appropriately sandboxed
to contain the effects of any bugs in the JIT or its generated code
to the sandboxed environment.

Support libraries such as libiberty, libcc1, libvtv and libcpp have
been developed separately to share code with other tools such as
binutils and gdb.  These libraries again have similar challenges to
compiler drivers.  While they are expected to be robust against
arbitrary input, they should only be used with trusted inputs.

Libraries such as zlib and libffi that are bundled into GCC to build it
will be treated the same as the compiler drivers and programs as far
as security coverage is concerned.

As a result, the only case for a potential security issue in all
these cases is when it ends up generating vulnerable output for
valid input source code.

Language runtime libraries
--

GCC also builds and distributes libraries that are intended to be
used widely to implement runtime support for various programming
languages.  These include the following:

* libada
* libatomic
* libbacktrace
* libcc1
* libcody
* libcpp
* libdecnumber
* libgcc
* libgfortran
* libgm2
* libgo
* libgomp
* libiberty
* libitm
* libobjc
* libphobos
* libquadmath
* libssp
* libstdc++

These libraries are intended to be used in arbitrary contexts and as
a result, bugs in these libraries may be evaluated for security
impact.  However, some of these libraries, e.g. libgo, libphobos,
etc., are not maintained in the GCC project, so the GCC project
may not be the correct point of contact for them.  You are
encouraged to look at README files within those library directories
to locate the canonical security contact point for those projects.

Diagnostic libraries


The sanitizer library bundled in GCC is intended to be used in
diagnostic cases and not intended for use in sensitive environments.
As a result, bugs in the sanitizer will not be considered security
sensitive.

GCC plugins
---

It should be noted that GCC may execute arbitrary code loaded by a
user through the GCC plugin mechanism or through the system preloading
mechanism.  Such custom code should be vetted by the user for safety
as bugs exposed through such code will not be considered security
issues.


Re: [RFC] GCC Security policy

2023-08-09 Thread Richard Earnshaw (lists) via Gcc-patches

On 08/08/2023 20:39, Carlos O'Donell via Gcc-patches wrote:

On 8/8/23 13:46, David Edelsohn wrote:

I believe that upstream projects for components that are imported
into GCC should be responsible for their security policy, including
libgo, gofrontend, libsanitizer (other than local patches), zlib,
libtool, libphobos, libcody, libffi, eventually Rust libcore, etc.


I agree completely.

We can reference the upstream and direct people to follow upstream security
policy for these bundled components.

Any other policy risks having conflicting guidance between the projects,
which is not useful for security policy.

There might be exceptions to this rule, particularly when the downstream
wants to accept particular risks while upstream does not; but none of these
components are in that case IMO.



I agree with that, but with one caveat.  Our policy should state what we
do once upstream has addressed the issue.


R.


Re: [RFC] GCC Security policy

2023-08-08 Thread Joseph Myers
On Tue, 8 Aug 2023, David Malcolm via Gcc-patches wrote:

> However, consider a situation in which someone attempted to, say, embed
> libgccjit inside a web browser to generate machine code from
> JavaScript, where the JavaScript is potentially controlled by an
> attacker.  I think we want to explicitly say that if you're going
> to do that, you need to put some other layer of defense in, so that
> you're not blithely accepting the inputs to the compilation (sources
> and options) from a potentially hostile source, where crafted input
> sources could potentially hit an ICE in the compiler and thus crash the
> web browser.

A binutils analogue of sorts: you might well want to use objdump etc. on 
untrusted input, e.g. as part of analysis of a captured malware sample.  
But if you are using binutils tools in malware analysis, you really, 
really need to do so in a heavily sandboxed environment, as the malware 
could well try to exploit any system investigating it.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: [RFC] GCC Security policy

2023-08-08 Thread Carlos O'Donell via Gcc-patches
On 8/8/23 13:46, David Edelsohn wrote:
> I believe that upstream projects for components that are imported
> into GCC should be responsible for their security policy, including
> libgo, gofrontend, libsanitizer (other than local patches), zlib,
> libtool, libphobos, libcody, libffi, eventually Rust libcore, etc.

I agree completely.

We can reference the upstream and direct people to follow upstream security
policy for these bundled components.

Any other policy risks having conflicting guidance between the projects,
which is not useful for security policy.

There might be exceptions to this rule, particularly when the downstream
wants to accept particular risks while upstream does not; but none of these
components are in that case IMO.

-- 
Cheers,
Carlos.



Re: [RFC] GCC Security policy

2023-08-08 Thread David Edelsohn via Gcc-patches
On Tue, Aug 8, 2023 at 1:36 PM Ian Lance Taylor  wrote:

> On Tue, Aug 8, 2023 at 7:37 AM Jakub Jelinek wrote:
> >
> > BTW, I think we should perhaps differentiate between production ready
> > libraries (e.g. libgcc, libstdc++, libgomp, libatomic, libgfortran, libquadmath,
> > libssp) vs. e.g. the sanitizer libraries which are meant for debugging and
> > I believe it is highly risky to run them in programs with extra privileges
> > - e.g. I think they use getenv rather than *secure_getenv to get at various
> > tweaks for their behavior including where logging will happen and upstream
> > doesn't really care.
> > And not really sure what to say about lesser used language support
> > libraries, libada, libphobos, libgo, libgm2, ... nor what to say about
> > libvtv etc.
>
> libgo is a complicated case because it has a lot of components
> including a web server with TLS support, so there are a lot of
> potential security issues for programs that use libgo.  The upstream
> security policy is https://go.dev/security/policy.  I'm not sure what
> to say about libgo in GCC, since realistically the support for
> security problems is best-effort.  I guess we should at least accept
> security reports, even if we can't promise to fix them quickly.
>

 I believe that upstream projects for components that are imported into GCC
should be responsible for their security policy, including libgo,
gofrontend, libsanitizer (other than local patches), zlib, libtool,
libphobos, libcody, libffi, eventually Rust libcore, etc.

Thanks, David


Re: [RFC] GCC Security policy

2023-08-08 Thread Ian Lance Taylor via Gcc-patches
On Tue, Aug 8, 2023 at 7:37 AM Jakub Jelinek  wrote:
>
> BTW, I think we should perhaps differentiate between production ready
> libraries (e.g. libgcc, libstdc++, libgomp, libatomic, libgfortran, 
> libquadmath,
> libssp) vs. e.g. the sanitizer libraries which are meant for debugging and
> I believe it is highly risky to run them in programs with extra privileges
> - e.g. I think they use getenv rather than *secure_getenv to get at various
> tweaks for their behavior including where logging will happen and upstream
> doesn't really care.
> And not really sure what to say about lesser used language support
> libraries, libada, libphobos, libgo, libgm2, ... nor what to say about
> libvtv etc.

libgo is a complicated case because it has a lot of components
including a web server with TLS support, so there are a lot of
potential security issues for programs that use libgo.  The upstream
security policy is https://go.dev/security/policy.  I'm not sure what
to say about libgo in GCC, since realistically the support for
security problems is best-effort.  I guess we should at least accept
security reports, even if we can't promise to fix them quickly.

Ian


Re: [RFC] GCC Security policy

2023-08-08 Thread Paul Koning via Gcc-patches



> On Aug 8, 2023, at 11:55 AM, Siddhesh Poyarekar  wrote:
> 
> On 2023-08-08 11:48, David Malcolm wrote:
>> On Tue, 2023-08-08 at 09:33 -0400, Paul Koning via Gcc-patches wrote:
>>> 
>>> 
>>>> On Aug 8, 2023, at 9:01 AM, Jakub Jelinek via Gcc-patches wrote:
>>>> 
>>>> On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-patches wrote:
>>>>> There's probably external tools to do this, not sure if we should
>>>>> replicate things in the driver for this.
>>>>> 
>>>>> But sure, I think the driver is the proper point to address any of
>>>>> such issues - iff we want to address them at all.  Maybe a nice little
>>>>> google summer-of-code project ;)
>>>> 
>>>> What I'd really like to avoid is having all compiler bugs (primarily ICEs)
>>>> considered to be security bugs (e.g. DoS category), it would be terrible to
>>>> release every week a new compiler because of the "security" issues.
>>> 
>>> Indeed.  But my answer would be that such things are not DoS issues.
>>> DoS means that an external input, over which you have little control,
>>> is impairing service.  In the case of a compiler, if feeding it bad
>>> source code X.c causes it to crash, the answer is "well, then don't
>>> do that".
>> Agreed.
>> I'm not sure how to "wordsmith" this, but it seems like the sources and
>> options on the *host* are assumed to be trusted, and that the act of
>> *compiling* source on the host requires trusting them, just like the
>> act of executing the compiled code on the target does.  Though users
>> may be more familiar with sandboxing the target than the host.
>> We should spell this out further for libgccjit: libgccjit allows for
>> ahead-of-time and JIT compilation of sources - but it assumes that
>> those sources (and the compilation options) are trusted.
>> [Adding Andrea Corallo to the addressees]
>> For example, Emacs is using libgccjit to do ahead-of-time compilation
>> of Emacs bytecode.  I'm assuming that Emacs is assuming that its
>> bytecode is trusted, and that there isn't any attempt by Emacs to
>> sandbox the Emacs Lisp being processed.
>> However, consider a situation in which someone attempted to, say, embed
>> libgccjit inside a web browser to generate machine code from
>> JavaScript, where the JavaScript is potentially controlled by an
>> attacker.  I think we want to explicitly say that if you're going
>> to do that, you need to put some other layer of defense in, so that
>> you're not blithely accepting the inputs to the compilation (sources
>> and options) from a potentially hostile source, where crafted input
>> sources could potentially hit an ICE in the compiler and thus crash the
>> web browser.
> 
> +1, this is precisely the kind of thing the security policy should warn 
> against and suggest using sandboxing for.  The compiler (or libgccjit) isn't 
> really in a position to defend against such uses, ICE or otherwise.

I agree somewhat.  But only somewhat, because the compiler's job is not to 
crash even if presented with bad inputs.  An ICE is a bug, which of course 
we've always accepted.  But as several have agreed, it's not a DoS bug, 
therefore not a security bug.

The scenario of the web browser is a valid one, and I would use it to 
illustrate a general point, which is redundancy in safety measures. If inputs 
come from possibly hostile sources, it's sound practice to have multiple layers 
of protection.  The consuming software should be robust so it doesn't fail when 
subjected to bad inputs.  But additional layers of protection in case there is 
a defect in the first layer are valuable, and sandboxing or the like (chroot, 
for example) can provide that additional defense.  This isn't really a GCC 
issue but rather a general principle of prudence.

paul



Re: [RFC] GCC Security policy

2023-08-08 Thread Richard Earnshaw (lists) via Gcc-patches

On 08/08/2023 15:40, Siddhesh Poyarekar wrote:

On 2023-08-08 10:37, Jakub Jelinek wrote:

On Tue, Aug 08, 2023 at 10:30:10AM -0400, Siddhesh Poyarekar wrote:

Do you have a suggestion for the language to address libgcc, libstdc++,
etc. and libiberty, libbacktrace, etc.?


I'll work on this a bit and share a draft.


BTW, I think we should perhaps differentiate between production ready
libraries (e.g. libgcc, libstdc++, libgomp, libatomic, libgfortran, 
libquadmath,
libssp) vs. e.g. the sanitizer libraries which are meant for debugging 
and


Agreed, that's why I need some time to sort all of the libraries gcc 
builds to categorize them into various levels of support in terms of 
safety re. untrusted input.


Thanks,
Sid


Related to this, our coding standards should really reflect what we 
consider good practice these days.  E.g. there are many library APIs 
around that were once considered acceptable that, frankly, we would be 
better off uninventing.
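
A concrete instance of that point (my example): gets () was once a 
perfectly standard API, cannot be used safely at any call site, and was 
finally removed in C11; the bounded fgets () is the replacement we would 
point people at.

  #include <stdio.h>

  int main (void)
  {
    char buf[64];

    /* gets (buf);  -- unbounded read: any longer input overflows buf;
       removed from the C standard in C11 for exactly this reason.  */
    if (fgets (buf, sizeof buf, stdin) != NULL)  /* at most 63 chars */
      fputs (buf, stdout);
    return 0;
  }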


R.


Re: [RFC] GCC Security policy

2023-08-08 Thread Siddhesh Poyarekar

On 2023-08-08 11:48, David Malcolm wrote:

On Tue, 2023-08-08 at 09:33 -0400, Paul Koning via Gcc-patches wrote:




On Aug 8, 2023, at 9:01 AM, Jakub Jelinek via Gcc-patches
 wrote:

On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-
patches wrote:

There's probably external tools to do this, not sure if we should
replicate
things in the driver for this.

But sure, I think the driver is the proper point to address any
of such
issues - iff we want to address them at all.  Maybe a nice little
google summer-of-code project ;)


What I'd really like to avoid is having all compiler bugs
(primarily ICEs)
considered to be security bugs (e.g. DoS category), it would be
terrible to
release every week a new compiler because of the "security" issues.


Indeed.  But my answer would be that such things are not DoS issues.
DoS means that an external input, over which you have little control,
is impairing service.  In the case of a compiler, if feeding it bad
source code X.c causes it to crash, the answer is "well, then don't
do that".


Agreed.

I'm not sure how to "wordsmith" this, but it seems like the sources and
options on the *host* are assumed to be trusted, and that the act of
*compiling* source on the host requires trusting them, just like the
act of executing the compiled code on the target does.  Though users
may be more familiar with sandboxing the target than the host.

We should spell this out further for libgccjit: libgccjit allows for
ahead-of-time and JIT compilation of sources - but it assumes that
those sources (and the compilation options) are trusted.

[Adding Andrea Corallo to the addressees]

For example, Emacs is using libgccjit to do ahead-of-time compilation
of Emacs bytecode.  I'm assuming that Emacs is assuming that its
bytecode is trusted, and that there isn't any attempt by Emacs to
sandbox the Emacs Lisp being processed.

However, consider a situation in which someone attempted to, say, embed
libgccjit inside a web browser to generate machine code from
JavaScript, where the JavaScript is potentially controlled by an
attacker.  I think we want to explicitly say that if you're going
to do that, you need to put some other layer of defense in, so that
you're not blithely accepting the inputs to the compilation (sources
and options) from a potentially hostile source, where crafted input
sources could potentially hit an ICE in the compiler and thus crash the
web browser.


+1, this is precisely the kind of thing the security policy should warn 
against and suggest using sandboxing for.  The compiler (or libgccjit) 
isn't really in a position to defend against such uses, ICE or otherwise.


Thanks,
Sid


Re: [RFC] GCC Security policy

2023-08-08 Thread David Malcolm via Gcc-patches
On Tue, 2023-08-08 at 09:33 -0400, Paul Koning via Gcc-patches wrote:
> 
> 
> > On Aug 8, 2023, at 9:01 AM, Jakub Jelinek via Gcc-patches
> >  wrote:
> > 
> > On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-
> > patches wrote:
> > > There's probably external tools to do this, not sure if we should
> > > replicate
> > > things in the driver for this.
> > > 
> > > But sure, I think the driver is the proper point to address any
> > > of such
> > > issues - iff we want to address them at all.  Maybe a nice little
> > > google summer-of-code project ;)
> > 
> > What I'd really like to avoid is having all compiler bugs
> > (primarily ICEs)
> > considered to be security bugs (e.g. DoS category), it would be
> > terrible to
> > release every week a new compiler because of the "security" issues.
> 
> Indeed.  But my answer would be that such things are not DoS issues. 
> DoS means that an external input, over which you have little control,
> is impairing service.  In the case of a compiler, if feeding it bad
> source code X.c causes it to crash, the answer is "well, then don't
> do that".

Agreed.

I'm not sure how to "wordsmith" this, but it seems like the sources and
options on the *host* are assumed to be trusted, and that the act of
*compiling* source on the host requires trusting them, just like the
act of executing the compiled code on the target does.  Though users
may be more familiar with sandboxing the target than the host.

We should spell this out further for libgccjit: libgccjit allows for
ahead-of-time and JIT compilation of sources - but it assumes that
those sources (and the compilation options) are trusted.

[Adding Andrea Corallo to the addressees]

For example, Emacs is using libgccjit to do ahead-of-time compilation
of Emacs bytecode.  I'm assuming that Emacs is assuming that its
bytecode is trusted, and that there isn't any attempt by Emacs to
sandbox the Emacs Lisp being processed.

However, consider a situation in which someone attempted to, say, embed
libgccjit inside a web browser to generate machine code from
JavaScript, where the JavaScript is potentially controlled by an
attacker.  I think we want to explicitly say that if you're going
to do that, you need to put some other layer of defense in, so that
you're not blithely accepting the inputs to the compilation (sources
and options) from a potentially hostile source, where crafted input
sources could potentially hit an ICE in the compiler and thus crash the
web browser.
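
For readers who haven't used libgccjit, a minimal sketch (adapted from 
the upstream tutorial's "square" example) shows why this matters: 
compilation runs entirely inside the embedding process, so there is no 
isolation between the JIT and its host.

  #include <libgccjit.h>
  #include <stdio.h>

  int main (void)
  {
    gcc_jit_context *ctxt = gcc_jit_context_acquire ();
    gcc_jit_type *int_type
      = gcc_jit_context_get_type (ctxt, GCC_JIT_TYPE_INT);
    gcc_jit_param *i
      = gcc_jit_context_new_param (ctxt, NULL, int_type, "i");
    gcc_jit_function *fn
      = gcc_jit_context_new_function (ctxt, NULL,
                                      GCC_JIT_FUNCTION_EXPORTED,
                                      int_type, "square", 1, &i, 0);
    gcc_jit_block *block = gcc_jit_function_new_block (fn, NULL);
    gcc_jit_block_end_with_return
      (block, NULL,
       gcc_jit_context_new_binary_op (ctxt, NULL, GCC_JIT_BINARY_OP_MULT,
                                      int_type,
                                      gcc_jit_param_as_rvalue (i),
                                      gcc_jit_param_as_rvalue (i)));

    /* The whole compiler runs here, in-process: an ICE provoked by
       hostile input at this point is a crash of the host application.  */
    gcc_jit_result *result = gcc_jit_context_compile (ctxt);
    if (result)
      {
        int (*square) (int)
          = (int (*) (int)) gcc_jit_result_get_code (result, "square");
        printf ("square (5) = %d\n", square (5));
        gcc_jit_result_release (result);
      }
    gcc_jit_context_release (ctxt);
    return 0;
  }

(Build with "gcc square.c -lgccjit".)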

Dave



Re: [RFC] GCC Security policy

2023-08-08 Thread Siddhesh Poyarekar

On 2023-08-08 10:37, Jakub Jelinek wrote:

On Tue, Aug 08, 2023 at 10:30:10AM -0400, Siddhesh Poyarekar wrote:

Do you have a suggestion for the language to address libgcc, libstdc++,
etc. and libiberty, libbacktrace, etc.?


I'll work on this a bit and share a draft.


BTW, I think we should perhaps differentiate between production ready
libraries (e.g. libgcc, libstdc++, libgomp, libatomic, libgfortran, libquadmath,
libssp) vs. e.g. the sanitizer libraries which are meant for debugging and


Agreed, that's why I need some time to sort all of the libraries gcc 
builds to categorize them into various levels of support in terms of 
safety re. untrusted input.


Thanks,
Sid


Re: [RFC] GCC Security policy

2023-08-08 Thread Jakub Jelinek via Gcc-patches
On Tue, Aug 08, 2023 at 10:30:10AM -0400, Siddhesh Poyarekar wrote:
> > Do you have a suggestion for the language to address libgcc, libstdc++,
> > etc. and libiberty, libbacktrace, etc.?
> 
> I'll work on this a bit and share a draft.

BTW, I think we should perhaps differentiate between production ready
libraries (e.g. libgcc, libstdc++, libgomp, libatomic, libgfortran, libquadmath,
libssp) vs. e.g. the sanitizer libraries which are meant for debugging and
I believe it is highly risky to run them in programs with extra privileges
- e.g. I think they use getenv rather than *secure_getenv to get at various
tweaks for their behavior including where logging will happen and upstream
doesn't really care.
And not really sure what to say about lesser used language support
libraries, libada, libphobos, libgo, libgm2, ... nor what to say about
libvtv etc.
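
To illustrate the getenv point with a sketch (MYLIB_LOG_PATH is a 
made-up knob, not anything the sanitizers actually use):

  /* secure_getenv is a GNU extension; it returns NULL when the process
     runs with elevated privileges (e.g. setuid), so attacker-controlled
     environment variables cannot redirect a privileged process's
     logging to a file of the attacker's choosing.  */
  #define _GNU_SOURCE
  #include <stdio.h>
  #include <stdlib.h>

  static const char *
  log_path (void)
  {
    const char *p = secure_getenv ("MYLIB_LOG_PATH");  /* not getenv!  */
    return p ? p : "/dev/null";
  }

  int main (void)
  {
    printf ("logging to %s\n", log_path ());
    return 0;
  }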

Jakub



Re: [RFC] GCC Security policy

2023-08-08 Thread Siddhesh Poyarekar

On 2023-08-08 10:14, David Edelsohn wrote:
On Tue, Aug 8, 2023 at 10:07 AM Siddhesh Poyarekar wrote:


On 2023-08-08 10:04, Richard Biener wrote:
 > On Tue, Aug 8, 2023 at 3:35 PM Ian Lance Taylor wrote:
 >>
 >> On Tue, Aug 8, 2023 at 6:02 AM Jakub Jelinek via Gcc-patches
 >> wrote:
 >>>
 >>> On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-patches wrote:
 >>>> There's probably external tools to do this, not sure if we should
 >>>> replicate things in the driver for this.
 >>>>
 >>>> But sure, I think the driver is the proper point to address any of
 >>>> such issues - iff we want to address them at all.  Maybe a nice little
 >>>> google summer-of-code project ;)
 >>>
 >>> What I'd really like to avoid is having all compiler bugs (primarily ICEs)
 >>> considered to be security bugs (e.g. DoS category), it would be terrible to
 >>> release every week a new compiler because of the "security" issues.
 >>> Running compiler on untrusted sources can trigger ICEs (which we want to fix
 >>> but there will always be some), or run into some compile time and/or compile
 >>> memory issue (we have various quadratic or worse spots), compiler stack
 >>> limits (deeply nested stuff e.g. during parsing but other areas as well).
 >>> So, people running fuzzers and reporting issues is great, but if they'd get
 >>> a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
 >>> each compile-time-hog and each memory-hog, that wouldn't be useful.
 >>> Runtime libraries or security issues in the code we generate for valid
 >>> sources are of course a different thing.
 >>
 >> I wonder if a security policy should say something about the -fplugin
 >> option.  I agree that an ICE is not a security issue, but I wonder how
 >> many people are aware that a poorly chosen command line option can
 >> direct the compiler to run arbitrary code.  For that matter the same
 >> is true of setting the GCC_EXEC_PREFIX environment variable, and no
 >> doubt several other environment variables.  My point is not that we
 >> should change these, but that a security policy should draw attention
 >> to the fact that there are cases in which the compiler will
 >> unexpectedly run other programs.
 >
 > Well, if you run an arbitrary commandline from the internet you get
 > what you deserve, running "echo "Hello World" | gcc -xc - -o /dev/sda"
 > as root doesn't need plugins to shoot yourself in the foot.  You need to
 > know what you're doing, otherwise you are basically executing an
 > arbitrary shell script with whatever privileges you have.

I think it would be useful to mention caveats with plugins though, just
like it would be useful to mention exceptions for libiberty and similar
libraries that gcc builds.  It only helps make things clearer in terms
of what security coverage the project provides.


I have added a line to the Note section in the proposed text:

     GCC and its tools provide features and options that can run
     arbitrary user code (e.g., -fplugin).


How about the following to make it clearer that arbitrary code in 
plugins is not considered secure:


GCC and its tools provide features and options that can run arbitrary 
user code, e.g. using the -fplugin option.  Such custom code should be 
vetted by the user for safety as bugs exposed through such code will not 
be considered security issues.


I believe that the security implication already is addressed because the 
program is not tricked into a direct compromise of security.


Do you have a suggestion for the language to address libgcc, libstdc++, 
etc. and libiberty, libbacktrace, etc.?


I'll work on this a bit and share a draft.

Thanks,
Sid


Re: [RFC] GCC Security policy

2023-08-08 Thread David Edelsohn via Gcc-patches
On Tue, Aug 8, 2023 at 10:07 AM Siddhesh Poyarekar 
wrote:

> On 2023-08-08 10:04, Richard Biener wrote:
> > On Tue, Aug 8, 2023 at 3:35 PM Ian Lance Taylor  wrote:
> >>
> >> On Tue, Aug 8, 2023 at 6:02 AM Jakub Jelinek via Gcc-patches wrote:
> >>>
> >>> On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-patches wrote:
> >>>> There's probably external tools to do this, not sure if we should
> >>>> replicate things in the driver for this.
> >>>>
> >>>> But sure, I think the driver is the proper point to address any of
> >>>> such issues - iff we want to address them at all.  Maybe a nice little
> >>>> google summer-of-code project ;)
> >>>
> >>> What I'd really like to avoid is having all compiler bugs (primarily
> >>> ICEs) considered to be security bugs (e.g. DoS category), it would be
> >>> terrible to release every week a new compiler because of the "security" issues.
> >>> Running compiler on untrusted sources can trigger ICEs (which we want
> >>> to fix but there will always be some), or run into some compile time
> >>> and/or compile memory issue (we have various quadratic or worse spots),
> >>> compiler stack limits (deeply nested stuff e.g. during parsing but
> >>> other areas as well).
> >>> So, people running fuzzers and reporting issues is great, but if
> >>> they'd get a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
> >>> each compile-time-hog and each memory-hog, that wouldn't be useful.
> >>> Runtime libraries or security issues in the code we generate for valid
> >>> sources are of course a different thing.
> >>
> >>
> >> I wonder if a security policy should say something about the -fplugin
> >> option.  I agree that an ICE is not a security issue, but I wonder how
> >> many people are aware that a poorly chosen command line option can
> >> direct the compiler to run arbitrary code.  For that matter the same
> >> is true of setting the GCC_EXEC_PREFIX environment variable, and no
> >> doubt several other environment variables.  My point is not that we
> >> should change these, but that a security policy should draw attention
> >> to the fact that there are cases in which the compiler will
> >> unexpectedly run other programs.
> >
> > Well, if you run an arbitrary commandline from the internet you get
> > what you deserve, running "echo "Hello World" | gcc -xc - -o /dev/sda"
> > as root doesn't need plugins to shoot yourself in the foot.  You need to
> > know what you're doing, otherwise you are basically executing an
> > arbitrary shell script with whatever privileges you have.
>
> I think it would be useful to mention caveats with plugins though, just
> like it would be useful to mention exceptions for libiberty and similar
> libraries that gcc builds.  It only helps make things clearer in terms
> of what security coverage the project provides.
>

I have added a line to the Note section in the proposed text:

GCC and its tools provide features and options that can run arbitrary
user code (e.g., -fplugin).

I believe that the security implication already is addressed because the
program is not tricked into a direct compromise of security.

Do you have a suggestion for the language to address libgcc, libstdc++,
etc. and libiberty, libbacktrace, etc.?

Thanks, David


Re: [RFC] GCC Security policy

2023-08-08 Thread Siddhesh Poyarekar

On 2023-08-08 10:04, Richard Biener wrote:

On Tue, Aug 8, 2023 at 3:35 PM Ian Lance Taylor  wrote:


On Tue, Aug 8, 2023 at 6:02 AM Jakub Jelinek via Gcc-patches wrote:


On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-patches wrote:

There's probably external tools to do this, not sure if we should replicate
things in the driver for this.

But sure, I think the driver is the proper point to address any of such
issues - iff we want to address them at all.  Maybe a nice little
google summer-of-code project ;)


What I'd really like to avoid is having all compiler bugs (primarily ICEs)
considered to be security bugs (e.g. DoS category), it would be terrible to
release every week a new compiler because of the "security" issues.
Running compiler on untrusted sources can trigger ICEs (which we want to fix
but there will always be some), or run into some compile time and/or compile
memory issue (we have various quadratic or worse spots), compiler stack
limits (deeply nested stuff e.g. during parsing but other areas as well).
So, people running fuzzers and reporting issues is great, but if they'd get
a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
each compile-time-hog and each memory-hog, that wouldn't be useful.
Runtime libraries or security issues in the code we generate for valid
sources are of course a different thing.



I wonder if a security policy should say something about the -fplugin
option.  I agree that an ICE is not a security issue, but I wonder how
many people are aware that a poorly chosen command line option can
direct the compiler to run arbitrary code.  For that matter the same
is true of setting the GCC_EXEC_PREFIX environment variable, and no
doubt several other environment variables.  My point is not that we
should change these, but that a security policy should draw attention
to the fact that there are cases in which the compiler will
unexpectedly run other programs.


Well, if you run an arbitrary commandline from the internet you get
what you deserve, running "echo "Hello World" | gcc -xc - -o /dev/sda"
as root doesn't need plugins to shoot yourself in the foot.  You need to
know what you're doing, otherwise you are basically executing an
arbitrary shell script with whatever privileges you have.


I think it would be useful to mention caveats with plugins though, just 
like it would be useful to mention exceptions for libiberty and similar 
libraries that gcc builds.  It only helps make things clearer in terms 
of what security coverage the project provides.


Thanks,
Sid


Re: [RFC] GCC Security policy

2023-08-08 Thread Richard Biener via Gcc-patches
On Tue, Aug 8, 2023 at 3:35 PM Ian Lance Taylor  wrote:
>
> On Tue, Aug 8, 2023 at 6:02 AM Jakub Jelinek via Gcc-patches
>  wrote:
> >
> > On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-patches 
> > wrote:
> > > There's probably external tools to do this, not sure if we should 
> > > replicate
> > > things in the driver for this.
> > >
> > > But sure, I think the driver is the proper point to address any of such
> > > issues - iff we want to address them at all.  Maybe a nice little
> > > google summer-of-code project ;)
> >
> > What I'd really like to avoid is having all compiler bugs (primarily ICEs)
> > considered to be security bugs (e.g. DoS category), it would be terrible to
> > release every week a new compiler because of the "security" issues.
> > Running compiler on untrusted sources can trigger ICEs (which we want to fix
> > but there will always be some), or run into some compile time and/or compile
> > memory issue (we have various quadratic or worse spots), compiler stack
> > limits (deeply nested stuff e.g. during parsing but other areas as well).
> > So, people running fuzzers and reporting issues is great, but if they'd get
> > a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
> > each compile-time-hog and each memory-hog, that wouldn't be useful.
> > Runtime libraries or security issues in the code we generate for valid
> > sources are of course a different thing.
>
>
> I wonder if a security policy should say something about the -fplugin
> option.  I agree that an ICE is not a security issue, but I wonder how
> many people are aware that a poorly chosen command line option can
> direct the compiler to run arbitrary code.  For that matter the same
> is true of setting the GCC_EXEC_PREFIX environment variable, and no
> doubt several other environment variables.  My point is not that we
> should change these, but that a security policy should draw attention
> to the fact that there are cases in which the compiler will
> unexpectedly run other programs.

Well, if you run an arbitrary commandline from the internet you get
what you deserve, running "echo "Hello World" | gcc -xc - -o /dev/sda"
as root doesn't need plugins to shoot yourself in the foot.  You need to
know what you're doing, otherwise you are basically executing an
arbitrary shell script with whatever privileges you have.

Richard.

>
> Ian


Re: [RFC] GCC Security policy

2023-08-08 Thread Ian Lance Taylor via Gcc-patches
On Tue, Aug 8, 2023 at 6:02 AM Jakub Jelinek via Gcc-patches
 wrote:
>
> On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-patches 
> wrote:
> > There's probably external tools to do this, not sure if we should replicate
> > things in the driver for this.
> >
> > But sure, I think the driver is the proper point to address any of such
> > issues - iff we want to address them at all.  Maybe a nice little
> > google summer-of-code project ;)
>
> What I'd really like to avoid is having all compiler bugs (primarily ICEs)
> considered to be security bugs (e.g. DoS category), it would be terrible to
> release every week a new compiler because of the "security" issues.
> Running compiler on untrusted sources can trigger ICEs (which we want to fix
> but there will always be some), or run into some compile time and/or compile
> memory issue (we have various quadratic or worse spots), compiler stack
> limits (deeply nested stuff e.g. during parsing but other areas as well).
> So, people running fuzzers and reporting issues is great, but if they'd get
> a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
> each compile-time-hog and each memory-hog, that wouldn't be useful.
> Runtime libraries or security issues in the code we generate for valid
> sources are of course a different thing.


I wonder if a security policy should say something about the -fplugin
option.  I agree that an ICE is not a security issue, but I wonder how
many people are aware that a poorly chosen command line option can
direct the compiler to run arbitrary code.  For that matter the same
is true of setting the GCC_EXEC_PREFIX environment variable, and no
doubt several other environment variables.  My point is not that we
should change these, but that a security policy should draw attention
to the fact that there are cases in which the compiler will
unexpectedly run other programs.
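
To make that concrete, this is the entire skeleton of a loadable plugin 
(the entry-point API is the real one from gcc-plugin.h; the empty body 
is mine): everything in plugin_init runs inside the compiler process 
itself.

  /* Built as a shared object, e.g. (recent GCC compiles plugins as C++;
     paths may vary):
       g++ -shared -fPIC -I`gcc -print-file-name=plugin`/include \
           plugin.c -o plugin.so
     and loaded with "gcc -fplugin=./plugin.so ...".  */
  #include "gcc-plugin.h"

  int plugin_is_GPL_compatible;  /* GCC refuses to load plugins lacking this.  */

  int
  plugin_init (struct plugin_name_args *plugin_info,
               struct plugin_gcc_version *version)
  {
    /* Arbitrary code here executes with the privileges of whoever
       invoked the compiler; a real plugin registers callbacks, a
       malicious one can do anything the invoking user can.  */
    return 0;
  }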

Ian


Re: [RFC] GCC Security policy

2023-08-08 Thread Paul Koning via Gcc-patches



> On Aug 8, 2023, at 9:01 AM, Jakub Jelinek via Gcc-patches wrote:
> 
> On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-patches 
> wrote:
>> There's probably external tools to do this, not sure if we should replicate
>> things in the driver for this.
>> 
>> But sure, I think the driver is the proper point to address any of such
>> issues - iff we want to address them at all.  Maybe a nice little
>> google summer-of-code project ;)
> 
> What I'd really like to avoid is having all compiler bugs (primarily ICEs)
> considered to be security bugs (e.g. DoS category), it would be terrible to
> release every week a new compiler because of the "security" issues.

Indeed.  But my answer would be that such things are not DoS issues.  DoS means 
that an external input, over which you have little control, is impairing 
service.  In the case of a compiler, if feeding it bad source code X.c causes 
it to crash, the answer is "well, then don't do that".

paul




Re: [RFC] GCC Security policy

2023-08-08 Thread Michael Matz via Gcc-patches
Hello,

On Tue, 8 Aug 2023, Jakub Jelinek via Gcc-patches wrote:

> What I'd really like to avoid is having all compiler bugs (primarily ICEs)
> considered to be security bugs (e.g. DoS category), it would be terrible to
> release every week a new compiler because of the "security" issues.
> Running compiler on untrusted sources can trigger ICEs (which we want to fix
> but there will always be some), or run into some compile time and/or compile
> memory issue (we have various quadratic or worse spots), compiler stack
> limits (deeply nested stuff e.g. during parsing but other areas as well).
> So, people running fuzzers and reporting issues is great, but if they'd get
> a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
> each compile-time-hog and each memory-hog, that wouldn't be useful.

This!  Double-this!

FWIW, the binutils security policy, and by extension the proposed GCC 
policy David posted, handles this.  (To me this is the most important 
aspect of such policy, having been on the receiving end of such nonsense 
on the binutils side).

> Runtime libraries or security issues in the code we generate for valid
> sources are of course a different thing.

Generate or otherwise provide for consumption.  E.g. a bug with security 
consequences in the runtime libs (either in source form (templates) or as 
executable code, but with the problem being in e.g. libgcc sources 
(unwinder!)) needs proper handling, similar to how glibc is handled.


Ciao,
Michael.


Re: [RFC] GCC Security policy

2023-08-08 Thread Richard Biener via Gcc-patches
On Tue, Aug 8, 2023 at 3:01 PM Jakub Jelinek  wrote:
>
> On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-patches 
> wrote:
> > There's probably external tools to do this, not sure if we should replicate
> > things in the driver for this.
> >
> > But sure, I think the driver is the proper point to address any of such
> > issues - iff we want to address them at all.  Maybe a nice little
> > google summer-of-code project ;)
>
> What I'd really like to avoid is having all compiler bugs (primarily ICEs)
> considered to be security bugs (e.g. DoS category), it would be terrible to
> release every week a new compiler because of the "security" issues.
> Running compiler on untrusted sources can trigger ICEs (which we want to fix
> but there will always be some), or run into some compile time and/or compile
> memory issue (we have various quadratic or worse spots), compiler stack
> limits (deeply nested stuff e.g. during parsing but other areas as well).
> So, people running fuzzers and reporting issues is great, but if they'd get
> a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
> each compile-time-hog and each memory-hog, that wouldn't be useful.
> Runtime libraries or security issues in the code we generate for valid
> sources are of course a different thing.

We can only hope they get "confused" by our nice reporting of segfaults ...

Richard.

> Jakub
>


Re: [RFC] GCC Security policy

2023-08-08 Thread Jakub Jelinek via Gcc-patches
On Tue, Aug 08, 2023 at 02:52:57PM +0200, Richard Biener via Gcc-patches wrote:
> There's probably external tools to do this, not sure if we should replicate
> things in the driver for this.
> 
> But sure, I think the driver is the proper point to address any of such
> issues - iff we want to address them at all.  Maybe a nice little
> google summer-of-code project ;)

What I'd really like to avoid is having all compiler bugs (primarily ICEs)
considered to be security bugs (e.g. DoS category), it would be terrible to
release every week a new compiler because of the "security" issues.
Running compiler on untrusted sources can trigger ICEs (which we want to fix
but there will always be some), or run into some compile time and/or compile
memory issue (we have various quadratic or worse spots), compiler stack
limits (deeply nested stuff e.g. during parsing but other areas as well).
So, people running fuzzers and reporting issues is great, but if they'd get
a CVE assigned for each ice-on-invalid-code, ice-on-valid-code,
each compile-time-hog and each memory-hog, that wouldn't be useful.
Runtime libraries or security issues in the code we generate for valid
sources are of course a different thing.
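
(For the curious, the compile-time and stack-limit cases are trivial to 
reproduce; a toy generator like the sketch below emits input that can 
exhaust a recursive-descent parser's stack, and under this stance the 
resulting crash is an ordinary robustness bug, not a CVE.)

  /* Emits "int x = ((((...1...))));" with very deep nesting; pipe the
     output into a compiler to stress its recursion limits:
       ./gen | gcc -xc -c - -o /dev/null  */
  #include <stdio.h>

  int main (void)
  {
    const int depth = 100000;
    printf ("int x = ");
    for (int i = 0; i < depth; i++) putchar ('(');
    putchar ('1');
    for (int i = 0; i < depth; i++) putchar (')');
    puts (";");
    return 0;
  }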

Jakub



Re: [RFC] GCC Security policy

2023-08-08 Thread Richard Biener via Gcc-patches
On Tue, Aug 8, 2023 at 2:33 PM Siddhesh Poyarekar  wrote:
>
> On 2023-08-08 04:16, Richard Biener wrote:
> > On Mon, Aug 7, 2023 at 7:30 PM David Edelsohn via Gcc-patches
> >  wrote:
> >>
> >> FOSS Best Practices recommends that projects have an official Security
> >> policy stated in a SECURITY.md or SECURITY.txt file at the root of the
> >> repository.  GLIBC and Binutils have added such documents.
> >>
> >> Appended is a prototype for a Security policy file for GCC based on the
> >> Binutils document because GCC seems to have more affinity with Binutils as
> >> a tool. Do the runtime libraries distributed with GCC, especially libgcc,
> >> require additional security policies?
> >>
> >> [ ] Is it appropriate to use the Binutils SECURITY.txt as the starting
> >> point or should GCC use GLIBC SECURITY.md as the starting point for the GCC
> >> Security policy?
> >>
> >> [ ] Does GCC, or some components of GCC, require additional care because of
> >> runtime libraries like libgcc and libstdc++, and because of gcov and
> >> profile-directed feedback?
> >
> > I do think that the runtime libraries should at least be explicitly 
> > mentioned
> > because they fall into the "generated output" category and bugs in the
> > runtime are usually more severe as affecting a wider class of inputs.
>
> Ack, I'd expect libstdc++ and libgcc to be aligned with glibc's
> policies.  libiberty and others, on the other hand, would probably be
> more suitably aligned with binutils libbfd, where we assume trusted input.
>
> >> Thoughts?
> >>
> >> Thanks, David
> >>
> >> GCC Security Process
> >> ====================
> >>
> >> What is a GCC security bug?
> >> ===========================
> >>
> >>  A security bug is one that threatens the security of a system or
> >>  network, or might compromise the security of data stored on it.
> >>  In the context of GCC there are two ways in which such
> >>  bugs might occur.  In the first, the programs themselves might be
> >>  tricked into a direct compromise of security.  In the second, the
> >>  tools might introduce a vulnerability in the generated output that
> >>  was not already present in the files used as input.
> >>
> >>  All other bugs will be treated as non-security
> >>  issues.  This does not mean that they will be ignored, just that
> >>  they will not be given the priority that is given to security bugs.
> >>
> >>  This stance applies to the creation tools in GCC (e.g.,
> >>  gcc, g++, gfortran, gccgo, gccrs, gnat, cpp, gcov, etc.) and the
> >>  libraries that they use.
> >>
> >> Notes:
> >> ======
> >>
> >>  None of the programs in GCC need elevated privileges to operate, and
> >>  it is recommended that users not run them from accounts where such
> >>  privileges are automatically available.
> >
> > I'll note that we could ourselves mitigate some of that by handling
> > privileged invocation of the driver specially, dropping privs on exec
> > of the sibling tools and possibly using temporary files or pipes to do
> > the parts of the I/O that need to be privileged.
>
> It's not a bad idea, but it ends up legitimizing running the compiler
> as root, pushing the responsibility for privilege management onto the
> driver.  How about rejecting invocation as root altogether by default,
> bypassed with a --run-as-root flag instead?
>
> I've also been thinking about a --sandbox flag that isolates the build
> process (for gcc as well as binutils) in a separate namespace so that
> it's usable in a restricted mode on untrusted sources without exposing
> the rest of the system to it.

There are probably external tools to do this; I'm not sure we should
replicate them in the driver.

But sure, I think the driver is the proper point to address any such
issues - iff we want to address them at all.  Maybe a nice little
Google Summer of Code project ;)

Richard.

>
> Thanks,
> Sid


Re: [RFC] GCC Security policy

2023-08-08 Thread Siddhesh Poyarekar

On 2023-08-08 04:16, Richard Biener wrote:

On Mon, Aug 7, 2023 at 7:30 PM David Edelsohn via Gcc-patches
 wrote:


FOSS Best Practices recommends that projects have an official Security
policy stated in a SECURITY.md or SECURITY.txt file at the root of the
repository.  GLIBC and Binutils have added such documents.

Appended is a prototype for a Security policy file for GCC based on the
Binutils document because GCC seems to have more affinity with Binutils as
a tool. Do the runtime libraries distributed with GCC, especially libgcc,
require additional security policies?

[ ] Is it appropriate to use the Binutils SECURITY.txt as the starting
point or should GCC use GLIBC SECURITY.md as the starting point for the GCC
Security policy?

[ ] Does GCC, or some components of GCC, require additional care because of
runtime libraries like libgcc and libstdc++, and because of gcov and
profile-directed feedback?


I do think that the runtime libraries should at least be explicitly
mentioned, because they fall into the "generated output" category and bugs
in the runtime are usually more severe, as they affect a wider class of
inputs.


Ack, I'd expect libstdc++ and libgcc to be aligned with glibc's
policies.  libiberty and others, on the other hand, would probably be
more suitably aligned with binutils libbfd, where we assume trusted input.



Thoughts?

Thanks, David

GCC Security Process
====================

What is a GCC security bug?
===========================

 A security bug is one that threatens the security of a system or
 network, or might compromise the security of data stored on it.
 In the context of GCC there are two ways in which such
 bugs might occur.  In the first, the programs themselves might be
 tricked into a direct compromise of security.  In the second, the
 tools might introduce a vulnerability in the generated output that
 was not already present in the files used as input.

 All other bugs will be treated as non-security
 issues.  This does not mean that they will be ignored, just that
 they will not be given the priority that is given to security bugs.

 This stance applies to the creation tools in GCC (e.g.,
 gcc, g++, gfortran, gccgo, gccrs, gnat, cpp, gcov, etc.) and the
 libraries that they use.

Notes:
======

 None of the programs in GCC need elevated privileges to operate, and
 it is recommended that users not run them from accounts where such
 privileges are automatically available.


I'll note that we could ourselves mitigate some of that by handling privileged
invocation of the driver specially, dropping privs on exec of the sibling tools
and possibly using temporary files or pipes to do the parts of the I/O that
need to be privileged.


It's not a bad idea, but it ends up legitimizing running the compiler as
root, pushing the responsibility for privilege management onto the driver.
How about rejecting invocation as root altogether by default, bypassed
with a --run-as-root flag instead?
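
A minimal sketch of that default-reject behaviour, assuming a
hypothetical --run-as-root spelling and diagnostic (no such option
exists today):

/* Sketch: refuse to run as root unless explicitly overridden.  */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main (int argc, char **argv)
{
  int run_as_root = 0;
  for (int i = 1; i < argc; i++)
    if (strcmp (argv[i], "--run-as-root") == 0)
      run_as_root = 1;

  if ((getuid () == 0 || geteuid () == 0) && !run_as_root)
    {
      fprintf (stderr, "%s: refusing to run as root; "
               "pass --run-as-root to override\n", argv[0]);
      return 1;
    }
  /* ... normal driver processing ...  */
  return 0;
}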


I've also been thinking about a --sandbox flag that isolates the build
process (for gcc as well as binutils) in a separate namespace so that
it's usable in a restricted mode on untrusted sources without exposing
the rest of the system to it.
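
On Linux the core of such a --sandbox flag could be built on unshare(2);
a sketch under that assumption (the flag name and the exact policy are
hypothetical):

/* Sketch: isolate compilation in fresh namespaces before spawning
   cc1/as/ld.  Linux-specific; CLONE_NEWUSER permits unprivileged use
   on most current kernels.  */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

static void
enter_sandbox (void)
{
  /* After this call the process cannot reach the network, and any
     mount changes it makes are invisible to the host.  */
  if (unshare (CLONE_NEWUSER | CLONE_NEWNS | CLONE_NEWPID
               | CLONE_NEWNET | CLONE_NEWIPC) != 0)
    {
      perror ("unshare");
      exit (1);
    }
  /* A real implementation would now write uid_map/gid_map, build a
     minimal mount tree containing only the inputs and the output
     file, and pivot_root into it.  */
}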


Thanks,
Sid


Re: [RFC] GCC Security policy

2023-08-08 Thread Richard Biener via Gcc-patches
On Mon, Aug 7, 2023 at 7:30 PM David Edelsohn via Gcc-patches
 wrote:
>
> FOSS Best Practices recommends that projects have an official Security
> policy stated in a SECURITY.md or SECURITY.txt file at the root of the
> repository.  GLIBC and Binutils have added such documents.
>
> Appended is a prototype for a Security policy file for GCC based on the
> Binutils document because GCC seems to have more affinity with Binutils as
> a tool. Do the runtime libraries distributed with GCC, especially libgcc,
> require additional security policies?
>
> [ ] Is it appropriate to use the Binutils SECURITY.txt as the starting
> point or should GCC use GLIBC SECURITY.md as the starting point for the GCC
> Security policy?
>
> [ ] Does GCC, or some components of GCC, require additional care because of
> runtime libraries like libgcc and libstdc++, and because of gcov and
> profile-directed feedback?

I do think that the runtime libraries should at least be explicitly
mentioned, because they fall into the "generated output" category and bugs
in the runtime are usually more severe, as they affect a wider class of
inputs.

> Thoughts?
>
> Thanks, David
>
> GCC Security Process
> ====================
>
> What is a GCC security bug?
> ===========================
>
> A security bug is one that threatens the security of a system or
> network, or might compromise the security of data stored on it.
> In the context of GCC there are two ways in which such
> bugs might occur.  In the first, the programs themselves might be
> tricked into a direct compromise of security.  In the second, the
> tools might introduce a vulnerability in the generated output that
> was not already present in the files used as input.
>
> All other bugs will be treated as non-security
> issues.  This does not mean that they will be ignored, just that
> they will not be given the priority that is given to security bugs.
>
> This stance applies to the creation tools in GCC (e.g.,
> gcc, g++, gfortran, gccgo, gccrs, gnat, cpp, gcov, etc.) and the
> libraries that they use.
>
> Notes:
> ======
>
> None of the programs in GCC need elevated privileges to operate, and
> it is recommended that users not run them from accounts where such
> privileges are automatically available.

I'll note that we could ourselves mitigate some of that by handling privileged
invocation of the driver specially, dropping privs on exec of the sibling tools
and possibly using temporary files or pipes to do the parts of the I/O that
need to be privileged.
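
As a rough illustration of the privilege-dropping idea (a hand-written
sketch, not actual driver code; where the unprivileged uid/gid come from
is left open):

/* Sketch: drop root before exec'ing a sibling tool such as cc1 or as.  */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void
exec_unprivileged (const char *path, char *const argv[],
                   uid_t uid, gid_t gid)
{
  /* Drop the group first, then the user; the reverse order would
     leave the process unable to drop the group.  */
  if (setgid (gid) != 0 || setuid (uid) != 0)
    {
      perror ("failed to drop privileges");
      exit (1);
    }
  /* Paranoia: regaining root must fail, or privileges remain.  */
  if (setuid (0) == 0)
    {
      fprintf (stderr, "privileges not fully dropped\n");
      exit (1);
    }
  execv (path, argv);
  /* execv only returns on failure.  */
  perror (path);
  exit (1);
}

The privileged parent would keep the descriptors for any I/O that truly
needs elevated rights and hand data to the child over pipes, as
suggested above.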

> Reporting private security bugs
> ===============================
>
>*All bugs reported in the GCC Bugzilla are public.*
>
>In order to report a private security bug that is not immediately
>public, please contact one of the downstream distributions with
>security teams.  The following teams have volunteered to handle
>such bugs:
>
>   Debian:  secur...@debian.org
>   Red Hat: secal...@redhat.com
>   SUSE:    secur...@suse.de
>
>Please report the bug to just one of these teams.  It will be shared
>with other teams as necessary.
>
>The team contacted will take care of details such as vulnerability
>rating and CVE assignment (http://cve.mitre.org/about/).  It is likely
>that the team will ask you to file a public bug if the issue is
>sufficiently minor and does not warrant an embargo.  An embargo is not
>a requirement for being credited with the discovery of a security
>vulnerability.
>
> Reporting public security bugs
> ==============================

Put this first, name it "Reporting security bugs"

>It is expected that critical security bugs will be rare, and that most
>security bugs can be reported in the GCC Bugzilla, thus making
>them public immediately.  The system can be found here:
>
>   https://gcc.gnu.org/bugzilla/


[RFC] GCC Security policy

2023-08-07 Thread David Edelsohn via Gcc-patches
FOSS Best Practices recommends that projects have an official Security
policy stated in a SECURITY.md or SECURITY.txt file at the root of the
repository.  GLIBC and Binutils have added such documents.

Appended is a prototype for a Security policy file for GCC based on the
Binutils document because GCC seems to have more affinity with Binutils as
a tool. Do the runtime libraries distributed with GCC, especially libgcc,
require additional security policies?

[ ] Is it appropriate to use the Binutils SECURITY.txt as the starting
point or should GCC use GLIBC SECURITY.md as the starting point for the GCC
Security policy?

[ ] Does GCC, or some components of GCC, require additional care because of
runtime libraries like libgcc and libstdc++, and because of gcov and
profile-directed feedback?

Thoughts?

Thanks, David

GCC Security Process
====================

What is a GCC security bug?
===========================

A security bug is one that threatens the security of a system or
network, or might compromise the security of data stored on it.
In the context of GCC there are two ways in which such
bugs might occur.  In the first, the programs themselves might be
tricked into a direct compromise of security.  In the second, the
tools might introduce a vulnerability in the generated output that
was not already present in the files used as input.

All other bugs will be treated as non-security
issues.  This does not mean that they will be ignored, just that
they will not be given the priority that is given to security bugs.

This stance applies to the creation tools in GCC (e.g.,
gcc, g++, gfortran, gccgo, gccrs, gnat, cpp, gcov, etc.) and the
libraries that they use.

Notes:
======

None of the programs in GCC need elevated privileges to operate, and
it is recommended that users not run them from accounts where such
privileges are automatically available.

Reporting private security bugs
===============================

   *All bugs reported in the GCC Bugzilla are public.*

   In order to report a private security bug that is not immediately
   public, please contact one of the downstream distributions with
   security teams.  The following teams have volunteered to handle
   such bugs:

  Debian:  secur...@debian.org
  Red Hat: secal...@redhat.com
  SUSE:    secur...@suse.de

   Please report the bug to just one of these teams.  It will be shared
   with other teams as necessary.

   The team contacted will take care of details such as vulnerability
   rating and CVE assignment (http://cve.mitre.org/about/).  It is likely
   that the team will ask you to file a public bug if the issue is
   sufficiently minor and does not warrant an embargo.  An embargo is not
   a requirement for being credited with the discovery of a security
   vulnerability.

Reporting public security bugs
==============================

   It is expected that critical security bugs will be rare, and that most
   security bugs can be reported in the GCC Bugzilla, thus making
   them public immediately.  The system can be found here:

  https://gcc.gnu.org/bugzilla/