On Sat, Jan 11, 2014 at 2:51 AM, Ben Reser <b...@reser.org> wrote:
> On 1/10/14, 5:38 AM, Jeff Trawick wrote:
> > [ ] It is an accepted practice (but not required) to obscure or omit the
> > vulnerability impact in CHANGES or commit log information when committing
> > fixes for vulnerabilities to any branch.
> >
> > [ ] It is mandatory to provide the best available description and any
> > available tracking information when committing fixes for vulnerabilities to
> > any branch, delaying committing of the fix if the information shouldn't be
> > provided yet.
> >
> > [ ] _______________ (fill in the blank)
>
> The Subversion project has struggled with this same issue, to the degree that
> there has actually been a fair amount of thought into how to avoid doing
> security by obscurity.
>
> The processes we've discussed have varied from executing the release entirely
> hidden from anyone but the PMC to simply publishing an advisory with a patch
> right before committing to trunk (treating that advisory with patch as a
> release with appropriate voting handled by PMC members privately).
>
> You're always dealing with a certain amount of security by obscurity.  The
> bugs we find often have existed for a long time.  If one person can find it,
> someone else certainly could.  For all we know the issues may have already
> been found and only exploited in limited ways such that the issue was never
> reported.
>
> Even with advance notification I've found that the binary packagers can take
> their sweet time getting security fixes included.  Some binary packagers
> don't really have a process that supports patching (they release one package
> for each version without a method of identifying versions that have been
> patched).  Administrators may not always know what to do with patches.  So
> frankly all of the processes stink.
>
> Yet, I think that the best process is to reveal security issues when you can
> put your best foot forward and have things positioned to get the fixes into
> the hands of as many users as possible as quickly as possible.  I think
> that's best served by withholding details (even if you're doing so
> imperfectly) until release or until the issue is widely disclosed to the
> public.
>
> It should be noted that not all security issues are equal.  For the most
> part, highly critical fixes are rare; when they do come up we could use a
> release process that hides everything from non-PMC members until the release
> time frame.  With other, less severe issues we could possibly just disclose
> immediately when we apply the fix.
>
> There doesn't need to be a one-size-fits-all answer to this.  But I
> certainly would like to see us have a consistent policy for determining
> which process to follow.
>
> So my vote wouldn't really fit into the options presented above.  I'd
> suggest coming up with a process for varying levels of issues, and criteria
> to determine which process to follow.
Hmmm...  Maybe I'm missing part of your point, but I think that there is
acknowledgement of different issue severities in the vote.  The crucial part
of the second choice is a simple division of vulnerabilities into two groups
by severity:

a) low enough severity to share information when tag & roll is not imminent
   (and furthermore share *all* information)

b) high enough severity to NOT share information when tag & roll is not
   imminent (and furthermore share *no* information)

I think a lot of your concerns revolve around assessment of when a
vulnerability can be disclosed, and that has to be determined on a
case-by-case basis.  The vote is just about whether there will be an
in-between situation where we share some information we have (the code
change) without sharing the rest, vs. deciding that, separate from the timing
of how we release information about a vulnerability, we either release all we
have (code + impact) or none of it.

Apologies in advance for what I worry is a crude summary of your thoughts :)

-- 
Born in Roswell... married an alien...
http://emptyhammock.com/