On 19/12/2021 18:56, John Kinsella wrote:
Mark, you've been handling security issues for years? :)

One option could be to commit the fix - with a commit message indicating
it's a security fix - in a private repo/branch. Include necessary
contributors to test the fix, then as the vuln announcement goes out, merge
the fix branch into the public repo (I get that this confuses the standard
release process). That said, glancing at the older CloudStack security
guidance that I put together[1], I didn't suggest that back then. Can't
remember if I was following ASF guidance or if there was some thought
behind that...

The issue with the above approach is that it breaks the release process and/or makes it obvious there is a security fix in the release.

One of the checks release reviewers should perform is that the public tag and the source tarball are identical. If the release is built from a private branch, the public tag will not be consistent with the tarball.

With my mentor/educator/podcaster hat on, I really like guiding folks to
look at commits in OSS projects that mitigate security issues. Besides
showing "real world" mitigations of issues, often the fix isn't as simple
as one would think and that becomes a great learning experience. When I
have time I'll dig through commits to find the mitigation, but it's a lot
easier for all if it's clearly called out.

Agreed. Tomcat includes commit references when we publish CVEs on our security page. That isn't in the ASF guidance, but it is worth considering adding it.

Separately - reading the PR I noticed current ASF security guidance
mentions not creating Jiras for security issues: We've had the ability to
add "security" flags in Jira to allow issues to be non-public until
intended release. I know this took a bit of management on infra's side, but
it seemed to work well. I'm not sure if that functionality has fallen out
of favor, but if not, I tend to lean towards documentation/transparency as
the tech allows it. [2]

Private Jira issues (or Bugzilla, for the few projects still using it) are fine if the project wants to do that. The guidance can be updated to include that option.

John

1:
https://web.archive.org/web/20190416025112/https://cloudstack.apache.org/security.html
2: For Github - folks have been asking for "private" issues for a few
years, last time I checked that's still not possible AFAIK. I guess if they
did implement that, they could also add support for private branches, which
could help this discussion a lot...

Potentially.

I often use local git branches for security fixes. I assume the other Tomcat committers do something similar. We share proposed fixes, test cases, etc. as diffs over email using the [email protected] list.

While this approach may look overly burdensome, in practice the additional overhead is minimal, and I think there are real benefits both in having a completely different process and in that process being more detached. It helps to create a clear separation between normal issues and security issues. Having to switch your way of working helps you to switch from "normal issue" to "security issue" mode, and the additional separation reduces the chance of an inadvertent command making something public sooner than intended.

Mark


On Sun, Dec 19, 2021 at 2:41 AM Mark Thomas <[email protected]> wrote:

On 18/12/2021 22:52, Hen wrote:
I was looking at Andrew's excellent clarifications to the CVE process
language ( https://github.com/apache/www-site/pull/109 ), and one out-of-scope
thought I had was to add a couple of steps/substeps.

Currently it says:

      "The project team commits the fix. Do not make any reference that the
commit relates to a security vulnerability."

I was wondering on stretching that out into the larger text of:

      "The project creates a plan for committing the fix and reviews this
with the Security Team. Once approved, the project team commits the fix.
This commit will not make any reference that the commit relates to a
security vulnerability."

I think this has hit the nail on the head regarding a key problem: How
to educate project teams about the difficulties of navigating that fine
line between having to commit the fix to a public repository without
giving away the fact that the commit is addressing a security
vulnerability.

I don't think there is a one size fits all solution to this. Some
projects have been doing this successfully for years (decades even for
some) and we don't want to add overhead to those projects.

At the other end of the spectrum there are projects that have never
received a vulnerability report before. When a project's processes are
set up to do things in public, it is all too easy for information to
accidentally leak via a public bug report, PR or similar created in
response to a security issue.

I don't have a good solution for this.

In outline, the solution is almost certainly more input from those with
experience of handling security issues. It is the detail of how to do
that I don't have. Whether those with the necessary experience are
willing and able to volunteer the time required is a separate
question/problem.

Mark

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]




