On Sun, Dec 19, 2021 at 2:40 AM Mark Thomas <[email protected]> wrote:
> On 18/12/2021 22:52, Hen wrote:
> > I was looking at Andrew's excellent clarifications to the CVE process
> > language ( https://github.com/apache/www-site/pull/109 ), and one out of
> > scope thought I had was to add a couple of steps/substeps.
> >
> > Currently it says:
> >
> >     "The project team commits the fix. Do not make any reference that the
> >     commit relates to a security vulnerability."
> >
> > I was wondering on stretching that out into the larger text of:
> >
> >     "The project creates a plan for committing the fix and reviews this
> >     with the Security Team. Once approved, the project team commits the fix.
> >     This commit will not make any reference that the commit relates to a
> >     security vulnerability."
>
> I think this has hit the nail on the head regarding a key problem: how
> to educate project teams about the difficulty of navigating that fine
> line of having to commit the fix to a public repository without
> giving away the fact that the commit is addressing a security
> vulnerability.
>
> I don't think there is a one-size-fits-all solution to this. Some
> projects have been doing this successfully for years (decades even for
> some) and we don't want to add overhead to those projects.
>
> At the other end of the spectrum there are projects that have never
> received a vulnerability report before. When a project's processes are
> set up to do things in public, it is all too easy for information to
> accidentally leak via a public bug report, PR or similar created in
> response to a security issue.
>
> I don't have a good solution for this.
>
> In outline, the solution is almost certainly more input from those with
> experience of handling security issues. It is the detail of how to do
> that I don't have. Whether those with the necessary experience are
> willing and able to volunteer the time required is a separate
> question/problem.

Thinking Apache style: on 'some are experienced and some are not', perhaps invite the experienced folk to become Security Team members. Watching you operate from afar, I still feel I see you getting peer review on your plans. So we would have the "a Security Team member (not the lead on the issue) peer reviews" stage, and then an Apache-style nomination for those who have led security issues to the Security Team's satisfaction, so that they could lead an issue for a project they are not expert in.

I agree that finding the necessary volunteers is a challenge, but that's going to be a general challenge for anything we look to improve here. For example, it would be really valuable to write educational post-mortems, but that means someone interviewing those who were involved and writing the content. I think at the moment what we (ASF) do is brainstorm ideas for things that will help, and then look to see who can help and how we can extend the trust. I'm assuming that for the next 3 months there is going to be greater intent than usual to get involved.

On the difficulties of extending trust: it would be awesome to have paid first responders on reported issues, as that's a high-burnout role for a volunteer (I'm guessing), but I don't see us being happy with a corporation offering up some random frontline support folk to hear about delicate issues.

I'm quite intrigued by how the Linux distros@ security mailing list works and whether there are ideas that could be adapted from there.
Hen

[I feel bad throwing the 'we' around a bit above, as I feel I only have a very bad crayon-drawing idea of the problems faced and expertise required; apologies if any of my comments feel hugely presumptive]
