Ben Bucksch wrote:
> > those who are responsible
> > for Mozilla distributions and other Mozilla-based products
> 
> > criteria: whether the Mozilla distribution or Mozilla-based product is
> > generally available for public use or not, how big a user base it
> > has, how well known and trusted the people behind the distribution
> > are within the Mozilla community, and so on.
> 
> These criteria are inherently troublesome, even questionable.
> 
> But I have no better suggestion. I guess it depends a lot on the
> weighting - e.g. what you consider "large" and how you weigh the
> different criteria against each other.

If we maintain a special "notification list" for Mozilla distributors
(as I discussed), my proposal would be to let the core security group
approve who is on it -- after all, if there are people put on that list
who the security team does not know and trust, then the security team
will be motivated to communicate less through Bugzilla and more through
private email -- and I believe mozilla.org has an interest in
discouraging their doing that (unless it's absolutely necessary).

> > And in principle this problem is no worse than the problem of
> > selecting who gets CVS write access and who doesn't.
> 
> It is, because you can submit patches without CVS write access. It is
> just less fun, technically inferior, and a bit less convenient.
> In this case, people are potentially cut off from information they need
> to do their business.

Yes, you are correct that there are differences between getting CVS
access and getting on the list for immediate notification of
vulnerabilities. But again, we are talking about people being cut off
from information only for a limited period of time.

> > I think the core security group (the people responsible for
> > coordinating investigation and resolution) shouldn't be more than
> > three to five people; in my opinion the group doesn't need
> > to be any larger than that in order to be effective. (The core group
> > can always invite more people to get involved for a particular bug,
> > as described above.)
> 
> Oh, that's very small, too small IMO. With brendan, shaver, mstoltz and
> jar (sorry, if I forgot someone), you'd have your group complete
> already.
> 
> I think, there are more people who are interested in security bugs and
> also trustworthy. If they match both criteria, it is IMO a good thing to
> add them to the group - they add more reliability, and they can add
> their input or contribute fixes to *any* security bug.

OK, I'll modify my proposal: mozilla.org will appoint an initial group
of people to the core security team; if the team wants to add additional
members then it can do so, using whatever methods and criteria they
decide. I don't care whether the security team decides it needs 5 people
on the team, or 15, or whatever. What I do care about (from a
mozilla.org point of view) is having only one or two people that are
held responsible for the security team's operation; in other words, I'd
like to see the equivalent of a "module owner" for this task.

> Let's say, I'd make it my goal to make Mozilla more secure. I'd not only
> hunt for new bugs and fix them, but would like to be able to see and fix
> other known security bugs. Maybe I make it my goal to fix the security
> bugs I "like" within 2 hours after reporting, assuming I am reachable.
> With your proposal, I couldn't.

With my new proposal you could, if you convince the security team that
you would be a useful person to have on their team.

> > The second group (people invited by the core group to help with a
> > particular problem) could be significantly larger, say 10-30 people.
> 
> Since many people work on a volunteer basis, I'd like to make
> "invitation" to contribute fixes more the exception than the regular
> process.

The "invitation" is to people who already have responsibility for a
particular area, including in particular module owners for that area. In
my opinion if a person is a module owner then they should be responsible
for providing a rapid response to requests for help with security
vulnerabilities. They don't necessarily have to do all the work
themselves, but they have to at least recruit other people who could
help.

> I imagined it like that: If a bug is found in a particular area, the
> person causing the bug, the module owner and a few other relevant
> people, assuming trust, are CCed. The first person who claims the bug
> assigns it to him-/herself. Usually, the person who caused the bug will
> be interested in fixing it him-/herself. Or the other "area specialists"
> or the first security group could jump in, whoever is faster. Others
> review / provide input.

This scenario sounds reasonable, except that I'm not sure what you mean
by "the person causing the bug"? Do you mean the developer who is
responsible for the code in which the bug occurs? Or do you mean the
person reporting the bug?

> > Note that IMO the people on this "Mozilla
> > distributor" list could and should include representatives from
> > "non-corporate" projects like Galeon and K-Meleon, not just
> > representatives from "corporate" projects like Netscape 6
> 
> Sure.
> 
> But I think that group would be the easiest to get into, so it would
> pose the largest risk.

As I discussed above, I think that the core security team (or perhaps
just the team leader or "module owner") should approve people who want
to be added to the distributor list, as the security team will have to
live with any risk.

> > The purpose of this
> > initial period is to assess the severity of the problem, put in place
> > a plan to address it, and write up information for public release;
> 
> IMO *this* should be a matter of hours.

Ideally, yes. But I don't want to guarantee that this can always be
done in a few hours, because I will not be the person who has to
fulfill that guarantee.

> > I don't think a one or two day period before automatic disclosure is
> > long enough to do that in all cases: people might not be immediately
> > available, the initial characterization of the problem might be
> > incorrect or incomplete, etc. That's why I think a somewhat longer
> > period is better.
> 
> I think the period should be enough to fix the problem in most cases.
> That's the rationale behind the temporary hiding - being able to fix
> the problem before any cracker has a chance to exploit it (ideally) - no?

Again, I'll let the people who might be on such a security team say
whether such problems can always be resolved in a day or two, or whether
it might take longer in some cases.

> > I guess the idea is that if the initial
> > reporter of the new bug explicitly marked "I want full disclosure"
> > then no one else would be able to "hide" the bug.
> 
> That's OK, for the reasons you gave in your initial post, no?

Yes.

> > I'm not sure exactly what you mean by "start with proposals for
> > details".
> 
> You're heading in the right direction. I just wanted us to get concrete
> now, since we seem to have, IIRC, mostly consensus on your
> "meta-proposal", and it is about time to implement it.

OK, if you or anyone else has more suggestions please post them.

Frank
-- 
Frank Hecker            work: http://www.collab.net/
[EMAIL PROTECTED]        home: http://www.hecker.org/
