Ben Bucksch wrote:
> OK, now that it is ultra-urgent for me, other means were not effective,
> and I even finally have access to news.mozilla.org, so I could read the
> excellent post you referenced above, I'm going to respond:

My comments on your comments are below; note that these comments
represent me "thinking out loud" to some extent, since I wanted to
respond to your points as soon as I could. For some of your points I
need to do more investigation before I can properly respond.

> That's why I
> propose to change the meaning of the "nsonly" flag to "security"
> *immediately* as a short-term solution. Netscape confidential information
> shouldn't be in bugzilla.mozilla.org anyway. If that is too much of a
> problem, create a new flag "security", and be sure to move *all*
> security bugs over there (no matter if they contain Netscape
> confidential info or not - here, security is more important). Whatever
> you do, do it fast please. I don't believe that no severe security bugs
> have been found in the last month, i.e. I need to make an update for
> Beonex Comm. soon.

Some time back there was a discussion on the mozilla.org staff list
about the current status of the "Netscape confidential" bit in Bugzilla,
but I can't find the full discussion. In any case, Netscape now has
their own internal instance of Bugzilla for tracking bugs for Netscape 6
(separate from bugs for Mozilla proper), so there is certainly no
compelling reason for maintaining the Netscape confidential stuff in
Bugzilla any longer.

However I'll have to defer to our Bugzilla experts on the question of
how many Netscape confidential bugs remain in the public Bugzilla
database, and how soon they could be removed.

I'll respond in greater detail later after I do some more investigation.

> Frank Hecker wrote:
> > 6. There should be some reasonable way for an individual to apply and
> > be approved for membership in the "security group". This does not
> > imply that such access must always be granted, but rather that the
> > procedures for selecting the members of the group should be reasonably
> > fair and justifiable.
> 
> And the same rules should apply to everybody.
> E.g. it is hard to justify that an @netscape.com or @beonex.org email
> address immediately qualifies for seeing such bugs, while others have to
> prove their qualification. I especially also include the selection of
> the initial group here.

Note that after further thinking about this topic, I think there are
actually three separate groups that might become involved in some way in
the process of investigating and resolving Mozilla security bugs:

1. The first group is a core group of people who would be chosen by
mozilla.org to have primary responsibility for the way in which Mozilla
security-related bug reports are investigated and resolved. These people
would be responsible for evaluating initial reports of vulnerabilities,
coordinating the work of people investigating and attempting to resolve
the vulnerabilities, and writing any public statements issued by
mozilla.org concerning the vulnerability.

This core group is really the group I'm thinking of when I use the term
"security group". However there are people who are not part of the core
security group who would still be involved in some way with activities
relating to Mozilla security bugs, as described below.

2. The second group is a wider group of people who might get involved in
investigating and fixing vulnerabilities; this group could include
module owners and other developers in various areas, and people doing QA
work. Only a subset of these people might be involved in looking at a
particular vulnerability, based on whether the vulnerability involves
their particular area of expertise; for example, for a problem
involving, say, buffer overruns in the networking code, the core
security group probably wouldn't need to involve the module owners for
the layout or JavaScript code.

As has been pointed out by others, this second group could be created on
an ad hoc basis for each vulnerability; for example, particular module
owners or other people could be put on the cc list for particular bugs,
with a corresponding Bugzilla feature that limited viewing of a
"confidential" bug to people already on the cc list (in addition to the
reporter and assignee, of course).
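Purely to illustrate the kind of Bugzilla feature I mean (this is a
hypothetical sketch, not actual Bugzilla code; all names and the data
layout are made up), the visibility rule could look something like:

```python
def can_view_confidential_bug(user, bug):
    """Hypothetical rule: a "confidential" bug is visible only to its
    reporter, its assignee, and people already on its cc list."""
    if not bug["confidential"]:
        return True  # ordinary bugs stay visible to everyone
    allowed = {bug["reporter"], bug["assignee"]} | set(bug["cc"])
    return user in allowed
```

The core security group would then control disclosure simply by
controlling who gets added to the cc list of each bug.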

3. Finally, there is a third group of people who might not necessarily
be involved in resolving the vulnerability, but who should be kept
informed about it from the moment it is reported and marked as
security-relevant. This group is composed of those who are responsible
for Mozilla distributions and other Mozilla-based products; this group
needs to be involved because they will need to fix the vulnerabilities
in their own products in order to serve their own customers. There would
be one representative from each company or organization creating and
distributing these Mozilla-based products (or maybe two, one primary and 
one as a backup).

Not to get too deep into implementation at this point, but this group of
people could be put on an "invitation-only" mailing list, and then the
mailing list address could be added to the cc list of bug reports marked
as security-critical. (Presumably this would also allow members of that
list to view the bug reports as well -- again, I don't want to get too
deep into implementation details of how that might happen.)

This last group could be chosen by mozilla.org, based on a number of
criteria: whether the Mozilla distribution or Mozilla-based product is
generally available for public use or not, how big of a user base it
has, how known and trusted within the Mozilla community are the people
behind the distribution, and so on.

If mozilla.org does create such a list of "distribution
representatives" then it will have to make some judgement calls about
who gets to be on the list and who doesn't, and some people almost
certainly will object to the way in which the selection is done. I don't
see any way to avoid that and still (temporarily) limit disclosure in
some way. And in principle this problem is no worse than the problem of
selecting who gets CVS write access and who doesn't.

> As for the actual rules, I'd make the group very small (say, 10 or 20
> people) or give *all* Mozilla contributors with CVS write access also
> access to security bugs. I can see justifications and reasonable rules
> for both, but not so much for a group sized between that (e.g. 100).

I think the core security group (the people responsible for coordinating
investigation and resolution) shouldn't be more than three to five
people at most; in my opinion the group doesn't need to be any larger than that in
order to be effective. (The core group can always invite more people to
get involved for a particular bug, as described above.)

The second group (people invited by the core group to help with a
particular problem) could be significantly larger, say 10-30 people.

The size of the last group would be determined by the number of Mozilla
distributions and products. Note that IMO the people on this "Mozilla
distributor" list could and should include representatives from
"non-corporate" projects like Galeon and K-Meleon, not just
representatives from "corporate" projects like Netscape 6.

> > 7. There should be full public disclosure of the security
> > vulnerability (and information relating to it maintained by
> > mozilla.org) after some reasonable amount of time, whether or
> > not the vulnerability has actually been resolved by then.
> 
> I agree, because this ensures that the (first) non-disclosure isn't used
> to hide bugs or support "laziness" (at a corporate level).
> I would define "reasonable" as "one week", considering that bugs in
> Linux etc. are usually fixed within hours or at most 2 days.

In my last message I didn't mention a specific time period, but after
thinking about it I agree that a one week period is a reasonable time
length. (You could also state the time period in terms of a certain
number of "business days", say 5 business days.) The purpose of this
initial period is to assess the severity of the problem, put in place a
plan to address it, and write up information for public release; it also
gives Mozilla distributors a chance to figure out how they're going to
address the problem for their own customers.

I don't think a one or two day period before automatic disclosure is
long enough to do that in all cases: people might not be immediately
available, the initial characterization of the problem might be
incorrect or incomplete, etc. That's why I think a somewhat longer
period is better.
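To make the "5 business days" idea concrete, here is a small sketch (my
own illustration, assuming a Monday-to-Friday work week) of how an
automatic disclosure date could be computed:

```python
from datetime import date, timedelta

def disclosure_deadline(reported, business_days=5):
    """Return the date on which a security bug would be automatically
    disclosed: the report date plus a number of business days (Mon-Fri)."""
    deadline = reported
    remaining = business_days
    while remaining > 0:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return deadline
```

For example, a bug reported on a Monday would be scheduled for automatic
disclosure the following Monday.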

> So, you would force me to report the problem to an open forum like a
> newsgroup *first*, so the bug doesn't get marked confidential in the
> first place. This might slow down the process, which is harmful. Adding
> a flag "I want full disclosure" (you mentioned that this category would
> be special anyway), similar to "Initial state NEW / UNCONFIRMED" (which
> you can see for new bugs, if you are allowed to confirm bugs), is not
> much of a problem, is it?

You have a good point here. I guess the idea is that if the initial
reporter of the new bug explicitly marked "I want full disclosure" then
no one else would be able to "hide" the bug. (Of course, people could
bypass this by opening a new and separate "hidden" bug, but I don't see
any technical way to avoid that; it's really a social issue within the
project, for mozilla.org to deal with.)
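As a sketch of the rule I have in mind (again hypothetical, with made-up
names, not a proposal for actual Bugzilla internals), the "I want full
disclosure" flag would simply veto any later attempt to hide the bug:

```python
def mark_confidential(bug):
    """Hypothetical rule: a bug whose reporter checked "I want full
    disclosure" at filing time can never later be hidden by anyone."""
    if bug.get("full_disclosure_requested"):
        raise PermissionError("reporter requested full disclosure")
    bug["confidential"] = True
    return bug
```

The point is that the veto is set once, by the reporter, at filing time,
so no later participant can quietly reclassify the bug.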

> Can we start with proposals for details, so this can proceed further?

I'm not sure exactly what you mean by "start with proposals for
details". You have already proposed some details about the way in which
security bugs might be handled, and I have added some more proposed
details in my comments above. I don't want to write a complete detailed
proposal just yet; I'd like to see more comments from other people
first. (I also need to go back and re-read and respond to comments
people made to my first post on this topic.)

Frank
-- 
Frank Hecker            work: http://www.collab.net/
[EMAIL PROTECTED]        home: http://www.hecker.org/
