Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-21 Thread Adrian Bunk
On Thu, Jun 21, 2007 at 04:59:39PM -0700, Linus Torvalds wrote:
> 
> 
> On Fri, 22 Jun 2007, Adrian Bunk wrote:
> > On Tue, Jun 19, 2007 at 10:04:58AM -0700, Linus Torvalds wrote:
> > >...
> > > This is why I've been advocating bugzilla "forget" stuff, for example. I 
> > > tend to see bugzilla as a place where noise accumulates, rather than a 
> > > place where noise is made into a signal. 
> > > 
> > > Which gets me to the real issue I have: the notion of having a process 
> > > for 
> > > _tracking_ all the information is actually totally counter-productive, if 
> > > a big part of the process isn't also about throwing noise away.
> > > 
> > > We don't want to "save" all the crud. I don't want "smart tracking" to 
> > > keep track of everything. I want "smart forgetting", so that we are only 
> > > left with the major signal - the stuff that matters. 
> > 
> > Even generating the perfect signal is a complete waste of time if 
> > there's no recipient for the signal...
> 
> My argument is that *if* we had "more signal, less noise", we'd probably 
> get more people looking at it.
> 
> In fact, I guarantee that's the case. You may not be 100% happy with the 
> regression list, but every single maintainer/developer I've talked to has 
> said they appreciated it and it made it easier (and thus more likely) for 
> them to actually look at what the outstanding issues were.


The problem is the parts of the kernel that have no maintainer, or a 
maintainer who is for whatever reason unable to look after bug 
reports.

And you often need someone with good knowledge of a specific area of 
the kernel to get a bug fixed.


Let me give an example:

During 2.6.16-rc, I reported a bug (not a regression) in CIFS: during 
big writes to a Samba server, after some 100 MB (not a fixed amount of 
data, but 100% reproducible when transferring 1 GB), I got a complete 
freeze of my computer (no SysRq possible). And there is nothing more I 
(or any other submitter) could have given as information - in fact, it 
even took me several days to isolate CIFS as the source of these 
freezes.

Steve French and Dave Kleikamp told me to try some mount option.

With this option, I got an Oops instead of a freeze.

After they fixed the Oops, it turned out the patch also fixed the 
freeze. The patch went into 2.6.16, so the bug was fixed in 2.6.16.

That's one important value of maintainers.

In many other parts of the kernel, my bug report wouldn't have had any 
effect.


We need more maintainers who look after bugs - but where do we find 
them? They don't seem to grow on trees.


>   Linus

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-21 Thread Linus Torvalds


On Fri, 22 Jun 2007, Adrian Bunk wrote:
> On Tue, Jun 19, 2007 at 10:04:58AM -0700, Linus Torvalds wrote:
> >...
> > This is why I've been advocating bugzilla "forget" stuff, for example. I 
> > tend to see bugzilla as a place where noise accumulates, rather than a 
> > place where noise is made into a signal. 
> > 
> > Which gets me to the real issue I have: the notion of having a process for 
> > _tracking_ all the information is actually totally counter-productive, if 
> > a big part of the process isn't also about throwing noise away.
> > 
> > We don't want to "save" all the crud. I don't want "smart tracking" to 
> > keep track of everything. I want "smart forgetting", so that we are only 
> > left with the major signal - the stuff that matters. 
> 
> Even generating the perfect signal is a complete waste of time if 
> there's no recipient for the signal...

My argument is that *if* we had "more signal, less noise", we'd probably 
get more people looking at it.

In fact, I guarantee that's the case. You may not be 100% happy with the 
regression list, but every single maintainer/developer I've talked to has 
said they appreciated it and it made it easier (and thus more likely) for 
them to actually look at what the outstanding issues were.

Linus


Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-21 Thread Adrian Bunk
On Tue, Jun 19, 2007 at 10:04:58AM -0700, Linus Torvalds wrote:
>...
> This is why I've been advocating bugzilla "forget" stuff, for example. I 
> tend to see bugzilla as a place where noise accumulates, rather than a 
> place where noise is made into a signal. 
> 
> Which gets me to the real issue I have: the notion of having a process for 
> _tracking_ all the information is actually totally counter-productive, if 
> a big part of the process isn't also about throwing noise away.
> 
> We don't want to "save" all the crud. I don't want "smart tracking" to 
> keep track of everything. I want "smart forgetting", so that we are only 
> left with the major signal - the stuff that matters. 

Even generating the perfect signal is a complete waste of time if 
there's no recipient for the signal...

>   Linus

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-21 Thread Adrian Bunk
On Tue, Jun 19, 2007 at 08:01:19AM -0700, Linus Torvalds wrote:
> On Tue, 19 Jun 2007, Adrian Bunk wrote:
>...
> > The -mm kernel already implements what your proposed PTS would do.
> > 
> > Plus it gives testers more or less all patches currently pending 
> > inclusion into Linus' tree in one kernel they can test.
> > 
> > The problems are more social ones, like patches Andrew has never heard 
> > of before getting into Linus' tree during the merge window.
> 
> Not really. The "problem" boils down to this:
> 
>   [EMAIL PROTECTED] linux]$ git-rev-list --all --since=100.days.ago | wc -l
>   7147
>   [EMAIL PROTECTED] linux]$ git-rev-list --no-merges --all --since=100.days.ago | wc -l
>   6768
> 
> ie over the last hundred days, we have averaged over 70 changes per day, 
> and even ignoring merges and only looking at "pure patches" we have more 
> than an average of 65 patches per day. Every day. Day in and day out.
> 
> That translates to five hundred commits a week, two _thousand_ commits per 
> month, and 25 thousand commits per year. As a fairly constant stream.
> 
> Will mistakes happen? Hell *yes*. 
> 
> And I'd argue that any flow that tries to "guarantee" that mistakes don't 
> happen is broken. It's a sure-fire way to just frustrate people, simply 
> because it assumes a level of perfection in maintainers and developers 
> that isn't possible.
> 
> The accepted industry standard for bug counts is basically one bug per 
> thousand lines of code. And that's for released, *debugged* code. 
> 
> Yes, we should aim higher. Obviously. Let's say that we aim for 0.1 bugs 
> per KLOC, and that we actually aim for that not just in _released_ code, 
> but in patches.
> 
> What does that mean?
> 
> Do the math:
> 
>   git log -M -p --all --since=100.days.ago | grep '^+' | wc -l
> 
> That basically takes the last one hundred days of development, shows it 
> all as patches, and just counts the "new" lines. It takes about ten 
> seconds to run, and returns 517252 for me right now.
> 
> That's *over*half*a*million* lines added or changed!
> 
> And even with the expectation that we do ten times better than what is 
> often quoted as an industry average, and even with the expectation that 
> this is already fully debugged code, that's at least 50 bugs in the last 
> one hundred days.
> 
> Yeah, we can be even more stringent, and actually subtract the number of 
> lines _removed_ (274930), and assume that only *new* code contains bugs, 
> and that's still just under a quarter million purely *added* lines, and 
> maybe we'd expect just 24 new bugs in the last 100 days.
> 
> [ Argument: some of the old code also contained bugs, so the lines added 
>   to replace it balance out. Counter-argument: new code is less well 
>   tested by *definition* than old code, so.. Counter-counter-argument: the 
>   new code was often added to _fix_ a bug, so the code removed had an even 
>   _higher_ bug rate than normal code.. 
> 
>   End result? We don't know. This is all just food for thought. ]
> 
> So here's the deal: even by the most *stringent* reasonable rules, we add 
> a new bug every four days. That's just something that people need to 
> accept. The people who say "we must never introduce a regression" aren't 
> living on planet earth, they are living in some wonderful world of 
> Blarney, where mistakes don't happen, developers are perfect, hardware is 
> perfect, and maintainers always catch things.
>...
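The counting commands quoted above can be tried against any git checkout. Below is a minimal self-contained sketch on a throwaway repo; note that modern git spells `git-rev-list` as `git rev-list`, and the figures in the mail (7147 commits, 517252 added lines) came from Linus's tree in June 2007, so the counts here are illustrative only:

```shell
#!/bin/sh
# Reproduce the thread's counting commands on a tiny throwaway repo.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
printf 'a\nb\n' > f
git add f
git -c user.email=a@b -c user.name=t commit -q -m one

# total commits in the last 100 days, merges included
git rev-list --all --since=100.days.ago | wc -l            # -> 1 here

# the same, ignoring merge commits ("pure patches")
git rev-list --no-merges --all --since=100.days.ago | wc -l

# added/changed lines, counted the way the mail does; note grep '^+'
# also matches the '+++ b/f' diff header, so it slightly overcounts
git log -M -p --all --since=100.days.ago | grep -c '^+'    # -> 3 here (2 added lines + 1 header)
```

The overcount from `'^+'` matching `+++` headers is negligible at the scale quoted (one line per touched file against half a million added lines), which is presumably why the simple pipeline was good enough for a back-of-the-envelope argument.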

Exactly: we cannot get a regression-free or even bug-free kernel.
But we could handle the reported regressions (or even the reported bugs) 
better than we do.

Lesson #6:
Get the data.

Some real life numbers from 2.6.21 development:
- 80 days between 2.6.20 and 2.6.21
- 98 post-2.6.20 regressions were reported before 2.6.21 was released
- 15 open post-2.6.20 regression reports at the time of the 2.6.21 release
- 8 open post-2.6.20 regression reports at the time of the 2.6.21 release
that were reported at least 3 weeks before the 2.6.21 release

This:
- only includes regressions with reasonably usable reports [1] and
- confirmed to be regressions and
- reported by the relatively small number (compared to the complete
  number of Linux users) of -rc testers and
- reported before the release of 2.6.21.

We weren't even able to handle all reported recent regressions in 
2.6.21, and for other bugs our numbers won't be better.

When Dave Jones says that for a new RHEL release, based on a "stable" 
upstream kernel, they spend 3 months just shaking bugs out of the 
kernel, that's IMHO a good description of our "stable" kernels.

I'm not claiming the kernel could become bug-free, but aiming at being 
able to handle all incoming bug reports is IMHO a worthwhile and not 
completely unrealistic goal with benefits for all Linux users (and the 
overall image of Linux).

Currently, we are light years away from this goal.
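The bug-rate estimates quoted from Linus's mail can be checked in a few lines of shell, using only the figures given in the thread (517252 lines added or changed, 274930 removed, over 100 days, at an assumed rate of 0.1 bugs per KLOC, i.e. 1 bug per 10000 lines):

```shell
#!/bin/sh
# Back-of-the-envelope check of the numbers quoted in the thread.
added=517252
removed=274930

gross_bugs=$((added / 10000))      # "at least 50 bugs in the last 100 days"
net_added=$((added - removed))     # "just under a quarter million purely added lines"
net_bugs=$((net_added / 10000))    # "maybe 24 new bugs"
days_per_bug=$((100 / net_bugs))   # "a new bug every four days"

echo "$gross_bugs $net_added $net_bugs $days_per_bug"   # -> 51 242322 24 4
```

The arithmetic matches the mail's claims: ~51 bugs gross, 242322 net added lines, 24 bugs net, one new bug roughly every four days.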

>   Linus

cu
Adrian

[1] submitter has given all information requested

Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Stefan Richter
Oleg Verych wrote:
[I wrote]
>> a) Would it save me more time than it costs me to fit into the system
>>    (time that can be invested in actual debugging)?
>>    This can only be answered after trying it.
> 
> I'm not a wizard, but if I answer now: "No." [1:]
> 
> [1:] Your User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; 
> rv:1.8.1.4) Gecko/20070509 SeaMonkey/1.1.2

Seamonkey isn't interoperable with Debian's BTS?
Lucky me that I frequently use other MUAs too.
-- 
Stefan Richter
-=-=-=== -==- =--==
http://arcgraph.de/sr/


Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Oleg Verych

* Date: Tue, 19 Jun 2007 19:50:48 +0200
> 
> [...]
>> Current identification of problems and patch association
>> has no tracking or automation at all, while Bugzilla is
>> believed by some to have positive efficiency in bug tracking.
>
> I, as maintainer of a small subsystem, can personally track bug--patch
> relationships with bugzilla just fine, on its near-zero level of
> automation and integration.
>
> Nevertheless, would a more integrated bug/patch tracking system help me
> improve quality of my output? ---
> a) Would it save me more time than it costs me to fit into the system
>    (time that can be invested in actual debugging)?
>    This can only be answered after trying it.

I'm not a wizard, but if I answer now: "No." [1:]

[1:] Your User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; 
rv:1.8.1.4) Gecko/20070509 SeaMonkey/1.1.2

> b) Would it help me to spot mistakes in patches before I submit?
>    No.

If you have ever tried to report a bug with the reportbug tool in 
Debian, you may understand what I meant. Nothing can substitute for 
intelligence. Something can reduce the impact of laziness (in searching 
for relevant bug reports).

> c) Would I get quicker feedback from testers?
>    That depends on whether such a system attracts testers and helps
>    testers to work efficiently.  This is also something that can only be
>    speculated about without trying it.
>
> The potential testers that I deal with are mostly either very
> non-technical persons, or persons who are experienced in their
> hardware/application area but *not* in kernel internals and kernel
> development procedures.

They also don't bother subscribing to mailing lists, and they like to 
write blogs. I'm not sure about the hardware databases you talked about, 
so I will talk about gathering information from testers.

Debian has experimental and unstable branches; people who want new stuff 
are likely to run these, not testing or stable. The BTS just collects 
bug reports. If the kernel team uploads a new kernel (a release, or 
recently even an -rc), interested people will use it after the next 
upgrade. Bug reports get collected, but the main answer will be: try to 
reproduce it on the most recent kernel.org kernel. Here, what I have 
proposed may play the role you expect. Mis-configuration, 
malfunctioning, or programmer's error (as Linus noted), handled in an 
organized manner, may easily connect the reporting person to 
kernel.org's testing. On the driver or small-subsystem level this may 
work. Again, it's all about information, not intelligence.



Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Stefan Richter
Oleg Verych wrote:
> On Tue, Jun 19, 2007 at 04:27:15PM +0200, Stefan Richter wrote:
>> There are different people involved in
>>   - patch handling,
>>   - bug handling (bugs are reported by end-users),
>> therefore don't forget that PTS and BTS have different requirements.
> 
> Sure. But if tracking was done, possible bugs were killed, and a user's
> bug report seems to depend on that patch (bisecting), why not have a
> linkage here?

Of course there are certain links between bugs and patches, and thus
there are certain links between bug tracking and patch tracking.

[...]
> Current identification of problems and patch association
> has no tracking or automation at all, while Bugzilla is
> believed by some to have positive efficiency in bug tracking.

I, as maintainer of a small subsystem, can personally track bug--patch
relationships with bugzilla just fine, on its near-zero level of
automation and integration.

Nevertheless, would a more integrated bug/patch tracking system help me
improve quality of my output? ---
a) Would it save me more time than it costs me to fit into the system
   (time that can be invested in actual debugging)?
   This can only be answered after trying it.
b) Would it help me to spot mistakes in patches before I submit?
   No.
c) Would I get quicker feedback from testers?
   That depends on whether such a system attracts testers and helps
   testers to work efficiently.  This is also something that can only be
   speculated about without trying it.

The potential testers that I deal with are mostly either very
non-technical persons, or persons who are experienced in their
hardware/application area but *not* in kernel internals and kernel
development procedures.
-- 
Stefan Richter
-=-=-=== -==- =--==
http://arcgraph.de/sr/


Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Oleg Verych
* Date: Tue, 19 Jun 2007 10:04:58 -0700 (PDT)
> 
> On Tue, 19 Jun 2007, Oleg Verych wrote:
>> 
>> I'm proposing a kind of smart tracking, as summarized before. I'm not an
>> idealist doing manual work; making tools -- that is what I've picked up from
>> one of your mails. Hence my hope of having more opinions on that.
>
> Don't get me wrong, I wasn't actually responding to you personally, I was 
> actually responding mostly to the tone of this thread.

By reading only known persons[1]? Fine, that is OK.

But I hope I made some useful statements. In fact, the noise-reduction 
stuff WRT bug reports appeared earlier, in my analysis of Adrian's POV 
here (the reportbug tool). It also showed up again when I wrote about 
traces, where testers (bug reporters) can find test cases before they 
cry (again) about some issue. I have seen this; bugzilla @ mozilla is an 
example -- known history.

[1] Noise filtering -- that's obvious to me, after all :)

Rather than flaming further, I'm just going to try to implement 
something. Hopefully my next patch will be usefully smart-tracked.

Thanks!



Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Natalie Protasevich

On 6/19/07, Linus Torvalds <[EMAIL PROTECTED]> wrote:



On Tue, 19 Jun 2007, Oleg Verych wrote:
>
> I'm proposing a kind of smart tracking, as summarized before. I'm not an
> idealist doing manual work; making tools -- that is what I've picked up from
> one of your mails. Hence my hope of having more opinions on that.

Don't get me wrong, I wasn't actually responding to you personally, I was
actually responding mostly to the tone of this thread.

So I was responding to things like the example from Bartlomiej about
missed opportunity for taking developer review into account (and btw, I
think a little public shaming might not be a bad idea - I believe more in
*social* rules than in *technical* rules), and I'm responding to some of
the commentary by Adrian and others about "no regressions *ever*".

These are things we can *wish* for. But the fact that we might wish for
them doesn't actually mean that they are really good ideas to aim for in
practice.

Let me put it another way: a few weeks ago there was this big news story
in the New York Times about how "forgetting" is a very essential part
about remembering, and people passed this around as if it was a big
revelation. People think that people with good memories have a "good
thing".

And personally, I was like "Duh".

Good memory is not about remembering everything. Good memory is about
forgetting the irrelevant, so that the important stuff stands out and you
*can* remember it. But the big deal is that yes, you have to forget stuff,
and that means that you *will* miss details - but you'll hopefully miss
the stuff you don't care for. The keyword being "hopefully". It works most
of the time, but we all know we've sometimes been able to forget a detail
that turned out to be crucial after all.

So the *stupid* response to that is "we should remember everything". It
misses the point. Yes, we sometimes forget even important details, but
it's *so* important to forget details, that the fact that our brains
occasionally forget things we later ended up needing is still *much*
preferable to trying to remember everything.

The same tends to be true of bug hunting, and regression tracking.

There's a lot of "noise" there. We'll never get perfect, and I'll argue
that if we don't have a system that tries to actively *remove* noise,
we'll just be overwhelmed. But that _inevitably_ means that sometimes
there was actually a signal in the noise that we ended up removing,
because nobody saw it as anything but noise.

So I think people should concentrate on turning "noise" into "clear
signal", but at the same time remember that that inevitably is a "lossy"
transformation, and just accept the fact that it will mean that we
occasionally make "mistakes".


This is the most crucial point so far, in my opinion.

People who report bugs are not only smart - they are curious,
enthusiastic, and passionate about their system, their job, their
hobby - whatever Linux means to them. They often do their own
investigations and give lots of detail, and often others jump in with
"me too" and add even more detail (and more noise). But the real
detail that would help in assessing the bug is often not there, and
has to be requested in lengthy exchanges (lengthy time-wise, since
every request takes hours, days, months...).

I think it would help to make some attempt to lead reporters toward
giving out what's important. Cold and impersonal as they are, upfront
fields and drop-down menus take a lot of noise and heat off the actual
report.

Another observation: things like "me too" should be encouraged to
become separate reports, because generally only the maintainer and the
people who work directly on the module can sort out whether it is the
same problem; real problems get lost and unaccounted for when they
land in the wrong buckets this way.
--Natalie


> This is why I've been advocating bugzilla "forget" stuff, for example. I
> tend to see bugzilla as a place where noise accumulates, rather than a
> place where noise is made into a signal.
>
> Which gets me to the real issue I have: the notion of having a process for
> _tracking_ all the information is actually totally counter-productive, if
> a big part of the process isn't also about throwing noise away.
>
> We don't want to "save" all the crud. I don't want "smart tracking" to
> keep track of everything. I want "smart forgetting", so that we are only
> left with the major signal - the stuff that matters.
>
> Linus


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Linus Torvalds


On Tue, 19 Jun 2007, Oleg Verych wrote:
> 
> I'm proposing kind of smart tracking, summarized before. I'm not an
> idealist, doing manual work. Making tools -- is what i've picked up from
> one of your mails. Thus hope of having more opinions on that.

Don't get me wrong, I wasn't actually responding to you personally, I was 
actually responding mostly to the tone of this thread.

So I was responding to things like the example from Bartlomiej about 
missed opportunity for taking developer review into account (and btw, I 
think a little public shaming might not be a bad idea - I believe more in 
*social* rules than in *technical* rules), and I'm responding to some of 
the commentary by Adrian and others about "no regressions *ever*".

These are things we can *wish* for. But the fact that we might wish for 
them doesn't actually mean that they are really good ideas to aim for in 
practice. 

Let me put it another way: a few weeks ago there was this big news story 
in the New York Times about how "forgetting" is a very essential part 
about remembering, and people passed this around as if it was a big 
revelation. People think that people with good memories have a "good 
thing".

And personally, I was like "Duh". 

Good memory is not about remembering everything. Good memory is about 
forgetting the irrelevant, so that the important stuff stands out and you 
*can* remember it. But the big deal is that yes, you have to forget stuff, 
and that means that you *will* miss details - but you'll hopefully miss 
the stuff you don't care for. The keyword being "hopefully". It works most 
of the time, but we all know we've sometimes been able to forget a detail 
that turned out to be crucial after all.

So the *stupid* response to that is "we should remember everything". It 
misses the point. Yes, we sometimes forget even important details, but 
it's *so* important to forget details, that the fact that our brains 
occasionally forget things we later ended up needing is still *much* 
preferable to trying to remember everything.

The same tends to be true of bug hunting, and regression tracking. 

There's a lot of "noise" there. We'll never get perfect, and I'll argue 
that if we don't have a system that tries to actively *remove* noise, 
we'll just be overwhelmed. But that _inevitably_ means that sometimes 
there was actually a signal in the noise that we ended up removing, 
because nobody saw it as anything but noise. 

So I think people should concentrate on turning "noise" into "clear 
signal", but at the same time remember that that inevitably is a "lossy" 
transformation, and just accept the fact that it will mean that we 
occasionally make "mistakes". 

This is why I've been advocating bugzilla "forget" stuff, for example. I 
tend to see bugzilla as a place where noise accumulates, rather than a 
place where noise is made into a signal. 

Which gets me to the real issue I have: the notion of having a process for 
_tracking_ all the information is actually totally counter-productive, if 
a big part of the process isn't also about throwing noise away.

We don't want to "save" all the crud. I don't want "smart tracking" to 
keep track of everything. I want "smart forgetting", so that we are only 
left with the major signal - the stuff that matters. 

Linus


Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Oleg Verych
* Date: Tue, 19 Jun 2007 17:08:13 +0200
>
>> Crazy development{0}. Somebody knows, that comprehensively testing
>> hibernation is their thing. I don't care about it, i care about foo, bar.
>> Thus i can apply for example lguest patches and implement and test new
>> asm-offset replacement, *easily*.
>
> That's right.  But the production of subsystem test patchkits is
> volunteer work which will be hard to unify.
>
> I'm not saying it's impossible to reach some degree of organized
> production of test patchkits; after all we already have some
> standardization regarding patch submission which is volunteer work too.

But there is still no single opinion about which tree to base a patch
against. For some it's Linus's mainline, for others it's the bleeding
edge -mm. And there never will be one.

Thus a particular patch entry might have both -mm and Linus-tree
rebased versions, or (as Adrian noted) VFS.asof02-07-2007 FANCYFS. For
example, Rusty did that after somebody asked him to have not only the
-mm lguest version. So, for a really intrusive feature/patch (and not
one in mid-development, Adrian), the author can keep a version (as a
git branch, a patch directory, or something similar).

Counter-example: scheduler patches are extraordinary, with large
threads of replies, but that is (one of) the classical
release-early-and-often cases. The proposed bureaucracy doesn't apply ;)



Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Oleg Verych
Linus,

On Tue, Jun 19, 2007 at 08:01:19AM -0700, Linus Torvalds wrote:
> 
> 
> On Tue, 19 Jun 2007, Adrian Bunk wrote:
> > 
> > The goal is to get all patches for a maintained subsystem submitted to 
> > Linus by the maintainer.

Nice quote. I'm trying to make a proposition and to convince Adrian, who
is in opposition, but the whole thread reads as if it were obeying his
extreme POV...
 
> But quite frankly, anybody who aims for "perfect" without taking reality 
> into account is just not realistic. And if that's part of the goal of some 
> "new process", then I'm not even interested in listening to people discuss 
> it.

I'm proposing a kind of smart tracking, summarized before. I'm not an
idealist doing manual work; making tools is what I've picked up from
one of your mails. Hence my hope of getting more opinions on that.

> If this plan cannot take reality into account, please stop Cc'ing me. I'm 
> simply not interested.

This one is the last, at least from me. Sorry for taking your time.



Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Oleg Verych
On Tue, Jun 19, 2007 at 04:27:15PM +0200, Stefan Richter wrote:
> On 6/19/2007 4:05 PM, Oleg Verych wrote:
> > On Tue, Jun 19, 2007 at 02:48:55PM +0200, Adrian Bunk wrote:
> >> The Debian BTS requires you to either write emails with control messages 
> >> or generating control messages with external tools.
> ...
> >> In Bugzilla the same works through a web interface.
> ...
> > Basic concept of Debian BTS is what i've discovered after many useless
> > hours i spent in Bugzilla. And this is mainly because of one basic
> > important thing, that nobody acknowledged (for newbies, like me):
> > 
> > * E-Mail with useful MUAs, after it got reliable delivery MTAs with qmail
> >   (or exim) is the main communication toolset.
> > 
> > Can't you see that from Linux's patch sending policy?
> 
> That's for developers, not for users.
> 
> There are different people involved in
>   - patch handling,
>   - bug handling (bugs are reported by end-users),
> therefore don't forget that PTS and BTS have different requirements.

Sure. But if tracking was done, possible bugs were killed, and a
user's bug report seems to depend on that patch (via bisection), why
not have a linkage here? It is useful for a developer (through the
subsystem association) to see next time what went wrong and to check
the test cases, and users might be interested in running those too
before crying (again) about a broken system. A bug report can become
part of a (reopened) patch discussion (as I wrote). Until then, as a
bug candidate without an identified patch, it can be associated with
some particular subsystem or an abstract bug category {1}.

Reversed in time: as "do-bisection" shows, problems don't happen
simply because of something abstract. If a problem is worth solving,
eventually there will be a patch trying to solve it, in both cases:

* when the breaking patch (found by bisection) is actually correct,
  but a hardware (or similarly independent) problem arises;
* when it's something different, like a feature request.

So these guys are candidates for a patch, and can have an ID drawn
numerically from the same domain as patch IDs, but with a different
prefix, meaning "I'm just a candidate for a patch". Bugs {1} are
obviously in this category.

The current identification of problems and their association with
patches has zero tracking or automation, while Bugzilla is believed by
some to have positive efficiency in bug tracking.

Those two (patch tracking and bug tracking) aren't that orthogonal to
each other at all.

Eventually the unification might be so good that bug tracking becomes
obsolete, thanks to good tracking of patches/features added and of
what they did/do.

In any case, I would like to ask the mentors to write at least
something resembling a technical task, if what I'm saying is
accessible to you. Your experience is a treasure that must be
preserved and possibly automated/organized.



Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Stefan Richter
Oleg Verych wrote:
> On Tue, Jun 19, 2007 at 02:48:55PM +0200, Adrian Bunk wrote:
>> The -mm kernel already implements what your proposed PTS would do.
...
>> Plus it gives testers more or less all patches currently pending 
>> inclusion into Linus' tree in one kernel they can test.
> 
> Crazy development{0}. Somebody knows, that comprehensively testing
> hibernation is their thing. I don't care about it, i care about foo, bar.
> Thus i can apply for example lguest patches and implement and test new
> asm-offset replacement, *easily*.

That's right.  But the production of subsystem test patchkits is
volunteer work which will be hard to unify.

I'm not saying it's impossible to reach some degree of organized
production of test patchkits; after all we already have some
standardization regarding patch submission which is volunteer work too.
-- 
Stefan Richter
-=-=-=== -==- =--==
http://arcgraph.de/sr/


Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Linus Torvalds


On Tue, 19 Jun 2007, Adrian Bunk wrote:
> 
> The goal is to get all patches for a maintained subsystem submitted to 
> Linus by the maintainer.

Well, to be honest, I've actually over the years tried to have a policy of 
*never* really having black-and-white policies.

The fact is, some maintainers are excellent. All the relevant patches 
*already* effectively go through them.

But at the same time, other maintainers are less than active, and some 
areas aren't clearly maintained at all. 

Also, being a maintainer often means that you are busy and spend a lot of 
time talking to *people* - it doesn't necessarily mean that you actually 
have the hardware and can test things, nor does it necessarily mean that 
you know every detail. 

So I point out in Documentation/ManagementStyle (which is written very 
much tongue-in-cheek, but at the same time it's really *true*) that 
maintainership is often about recognizing people who just know *better* 
than you!

> The -mm kernel already implements what your proposed PTS would do.
> 
> Plus it gives testers more or less all patches currently pending 
> inclusion into Linus' tree in one kernel they can test.
> 
> The problem are more social problems like patches Andrew has never heard 
> of before getting into Linus' tree during the merge window.

Not really. The "problem" boils down to this:

[EMAIL PROTECTED] linux]$ git-rev-list --all --since=100.days.ago | wc -l
7147
[EMAIL PROTECTED] linux]$ git-rev-list --no-merges --all --since=100.days.ago | wc -l
6768

ie over the last hundred days, we have averaged over 70 changes per day, 
and even ignoring merges and only looking at "pure patches" we have more 
than an average of 65 patches per day. Every day. Day in and day out.

That translates to five hundred commits a week, two _thousand_ commits per 
month, and 25 thousand commits per year. As a fairly constant stream.

Will mistakes happen? Hell *yes*. 

And I'd argue that any flow that tries to "guarantee" that mistakes don't 
happen is broken. It's a sure-fire way to just frustrate people, simply 
because it assumes a level of perfection in maintainers and developers 
that isn't possible.

The accepted industry standard for bug counts is basically one bug per 
thousand lines of code. And that's for released, *debugged* code. 

Yes, we should aim higher. Obviously. Let's say that we aim for 0.1 bugs 
per KLOC, and that we actually aim for that not just in _released_ code, 
but in patches.

What does that mean?

Do the math:

git log -M -p --all --since=100.days.ago | grep '^+' | wc -l

That basically takes the last one hundred days of development, shows it 
all as patches, and just counts the "new" lines. It takes about ten 
seconds to run, and returns 517252 for me right now.

That's *over*half*a*million* lines added or changed!

And even with the expectation that we do ten times better than what is 
often quoted as an industry average, and even with the expectation that 
this is already fully debugged code, that's at least 50 bugs in the last 
one hundred days.

Yeah, we can be even more stringent, and actually subtract the number of 
lines _removed_ (274930), and assume that only *new* code contains bugs. 
That's still just under a quarter million purely *added* lines, and 
maybe we'd expect just 24 new bugs in the last 100 days.

[ Argument: some of the old code also contained bugs, so the lines added 
  to replace it balance out. Counter-argument: new code is less well 
  tested by *definition* than old code, so.. Counter-counter-argument: the 
  new code was often added to _fix_ a bug, so the code removed had an even 
  _higher_ bug rate than normal code.. 

  End result? We don't know. This is all just food for thought. ]

So here's the deal: even by the most *stringent* reasonable rules, we add 
a new bug every four days. That's just something that people need to 
accept. The people who say "we must never introduce a regression" aren't 
living on planet earth, they are living in some wonderful world of 
Blarney, where mistakes don't happen, developers are perfect, hardware is 
perfect, and maintainers always catch things.
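
For what it's worth, the arithmetic in this message can be checked in a
few lines; the figures below are copied from the text above, and the
0.1 bugs/KLOC rate is the "aim higher" target stated earlier:

```python
# Sanity-check of the numbers quoted in this message.
commits = 7147             # git-rev-list --all --since=100.days.ago | wc -l
patches = 6768             # same, but with --no-merges
lines_added = 517252       # git log -M -p --all --since=100.days.ago | grep '^+' | wc -l
lines_removed = 274930
days = 100
bugs_per_kloc = 0.1        # ten times better than the ~1 bug/KLOC industry figure

print(commits / days)      # ~71 changes per day
print(patches / days)      # ~68 pure patches per day

expected_bugs_churn = lines_added * bugs_per_kloc / 1000
expected_bugs_net = (lines_added - lines_removed) * bugs_per_kloc / 1000
print(expected_bugs_churn)       # ~51.7 bugs, the "at least 50" above
print(expected_bugs_net)         # ~24.2 bugs in purely added lines
print(days / expected_bugs_net)  # ~4.1, i.e. "a new bug every four days"
```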

> The problem is that most problems don't occur on one well-defined 
> kind of hardware - patches often break in exactly the areas the patch
> author expected no problems in.

Note that the industry-standard 1-bug-per-kloc thing has nothing to do 
with hardware. Somebody earlier in this thread (or one of the related 
ones) said that "git bisect is only valid for bugs that happen due to 
hardware issues", which is just totally *ludicrous*.

Yes, hardware makes it harder to test, but even *without* any hardware- 
specific issues, bugs happen. The developer just didn't happen to trigger 
the condition, or didn't happen to notice it when he *did* trigger it.

So don't go overboard about "hardware". Yes, hardware-specific issues have 
their own set of problems, and yes, drivers have a much 

Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Adrian Bunk
On Tue, Jun 19, 2007 at 04:05:12PM +0200, Oleg Verych wrote:
>...
> On Tue, Jun 19, 2007 at 02:48:55PM +0200, Adrian Bunk wrote:
> > On Tue, Jun 19, 2007 at 06:06:47AM +0200, Oleg Verych wrote:
>...
> > >   When patch in sent to this PTS, your lovely
> > >   checkpatch/check-whatever-crap-has-being-sent tools can be set up as
> > >   gatekeepers, thus making impossible stupid errors with MIME coding,
> > >   line wrapping, whatever style you've came up with now in checking
> > >   sent crap.
> > 
> > The -mm kernel already implements what your proposed PTS would do.
> 
> Having all-in-one patchset, like -mm is easy thing to provide
> interested parties with "you know what you have -- crazy development"
> 
> However [P]TS is tracking, recording, organizing tool. {1} Andrew's cron
> daemon easily can run script to check status of particular patch (cc,
> tested-by, acked-by). If patch have no TS ID, Andrew's watchdog is
> barking back to patch originator (with polite asking to send patch as:
> 
> * TS as "To:" target
> * patch author as "Cc:" target, this is useful to require:
>   . author can check that copy himself with text-only pager program
> (to see any MIME coding crap)
>   . preventing SPAM
> * maybe somebody else in Cc or Bcc.)

Quite a big part of -mm consists of maintainers' git trees.
Where are they in your tool?

And I still don't think your tool would make sense.
But hey, simply try it - that's the only way for you to prove me wrong.
People said similar things about the 2.6.16 kernel or my regression 
tracking, and I simply did it.

> > Plus it gives testers more or less all patches currently pending 
> > inclusion into Linus' tree in one kernel they can test.
> 
> Crazy development{0}. Somebody knows, that comprehensively testing
> hibernation is their thing. I don't care about it, i care about foo, bar.
> Thus i can apply for example lguest patches and implement and test new
> asm-offset replacement, *easily*. Somebody, as you know, likes new fancy
> file system, and no-way other. Let them be happy testing that thing
> *easily*. Because another fancy NO_MHz will hang their testing bench
> right after best ever speed results were recorded.

Patch dependencies and patch conflicts will be the interesting parts 
when you will implement this.

E.g. new fancy filesystem patch in -mm might depend on some VFS change 
that requires changes to all other filesystems.

I'm really looking forward to seeing how you will implement this for 
something like -mm with > 1000 patches (many of them git trees that 
themselves contain many different patches) without offloading all the 
additional work to the kernel developers.

> > The problem are more social problems like patches Andrew has never heard 
> > of before getting into Linus' tree during the merge window.
> 
> Linus' watchdog, as well, asking for valid patch id, or just doesn't
> care (in similar manner Linus does :).
> 
> So far no human is involved in social things. Do you agree?

No.

Forcing people to use some tool (no matter whether it's Bugzilla or
the PTS you want to implement) is 100% a social problem involving humans.

> Human power is worth and needed in particular patch discussion and
> testing under the participation (by Cc, acking, test-ok *e-mails*) of
> tracking system.

For getting people to use your tool, you will have to convince them that 
using your tool will bring them real benefits.

> > >...
> > > |-*- Feature Needed -*-
> > >   Addition, needed is hardware user tested have/had/used. Currently
> > >   ``reportbug'' tool includes packed specific and system specific
> > >   additions automaticly gathered and inserted to e-mail sent to BTS.
> > >   (e.g. 
> > > )
> > 
> > The problem is that most problems don't occur on one well-defined 
> > kind of hardware - patches often break in exactly the areas the patch
> > author expected no problems in.
> 
> I tried to test that new fancy FS, and couldn't boot because of
> yet-another ACPI crap. See theme{0}?
> 
> Overall testing, like Andrew does, is doubtless brave thing, but he have
> more time after {1}, isn't it?

I doubt the placing of some Acked-By: tags in patches is really what 
is killing Andrew's time.

How does Andrew check the status of 1500 patches in -mm in your PTS?

And how do you implement the use case that Andrew forwards a batch of
200 patches to Linus? How does the information from your tool come into git?

But hey, write your tool and convince Andrew of its advantages if you 
don't believe me.

> > And in many cases a patch for a device driver was written due to a bug 
> > report - in such cases a tester with the hardware in question is already 
> > available.
> 
> Tracking all possible testers (for next driver update, for example) is
> in question.

Spamming people who have some hardware with information about patches 
won't bring you anything. You need people willing to test patches that 
won't bring them 

Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Stefan Richter
On 6/19/2007 4:05 PM, Oleg Verych wrote:
> On Tue, Jun 19, 2007 at 02:48:55PM +0200, Adrian Bunk wrote:
>> The Debian BTS requires you to either write emails with control messages 
>> or generating control messages with external tools.
...
>> In Bugzilla the same works through a web interface.
...
> Basic concept of Debian BTS is what i've discovered after many useless
> hours i spent in Bugzilla. And this is mainly because of one basic
> important thing, that nobody acknowledged (for newbies, like me):
> 
> * E-Mail with useful MUAs, after it got reliable delivery MTAs with qmail
>   (or exim) is the main communication toolset.
> 
> Can't you see that from Linux's patch sending policy?

That's for developers, not for users.

There are different people involved in
  - patch handling,
  - bug handling (bugs are reported by end-users),
therefore don't forget that PTS and BTS have different requirements.
-- 
Stefan Richter
-=-=-=== -==- =--==
http://arcgraph.de/sr/


Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Oleg Verych
[Dropping noise for Debbugs, because interested people may join via Gmane]

On Tue, Jun 19, 2007 at 02:48:55PM +0200, Adrian Bunk wrote:
> On Tue, Jun 19, 2007 at 06:06:47AM +0200, Oleg Verych wrote:
> > [Dear Debbug developers, i wish your ideas will be useful.]
> > 
> > * From: Linus Torvalds
> > * Newsgroups: gmane.linux.kernel
> > * Date: Mon, 18 Jun 2007 17:09:37 -0700 (PDT)
> > >
> > > On Mon, 18 Jun 2007, Martin Bligh wrote:
> > >> 
> > >> Sorry to be a wet blanket, but I've seen those sort of things
> > >> before, and they just don't seem to work, especially in the
> > >> environment we're in with such a massive diversity of hardware.
> > >
> > > I do agree. It _sounds_ like a great idea to try to control the flow of 
> > > patches better,
> > 
> > There were some ideas, i will try to summarize:
> > 
> > * New Patches (or sets) need tracking, review, testing
> > 
> >   Zero (tracking) done by sending (To, or bcc) [RFC] patch to something
> >   like [EMAIL PROTECTED] (like BTS now). Notifications will
> >   be sent to intrested maintainers (if meta-information is ok) or testers
> >   (see below)
> > 
> >   First is mostly done by maintainers or interested individuals.
> >   Result is "Acked-by" and "Cc" entries in the next patch sent. Due to
> >   lack of tracking this things are done manually, are generally in
> >   trusted manner. But bad like <[EMAIL PROTECTED]>
> >   sometimes happens.
> 
> The goal is to get all patches for a maintained subsystem submitted to 
> Linus by the maintainer.
> 
> >   When patch in sent to this PTS, your lovely
> >   checkpatch/check-whatever-crap-has-being-sent tools can be set up as
> >   gatekeepers, thus making impossible stupid errors with MIME coding,
> >   line wrapping, whatever style you've came up with now in checking
> >   sent crap.
> 
> The -mm kernel already implements what your proposed PTS would do.

Having an all-in-one patchset like -mm makes it easy to provide
interested parties with "you know what you have -- crazy development".

However, a [P]TS is a tracking, recording, and organizing tool. {1}
Andrew's cron daemon can easily run a script to check the status of a
particular patch (cc, tested-by, acked-by). If a patch has no TS ID,
Andrew's watchdog barks back at the patch originator (politely asking
to resend the patch with:

* the TS as the "To:" target
* the patch author as a "Cc:" target, which is useful because:
  . the author can check that copy himself with a text-only pager program
    (to see any MIME-encoding crap)
  . it helps prevent SPAM
* maybe somebody else in Cc or Bcc.)
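
The watchdog idea above can be sketched in a few lines. This is purely
hypothetical: the "TS-Id:" header name and the exact checks are invented
for illustration; no such convention exists in the kernel workflow.

```python
import re

def check_patch_mail(text):
    """Return the complaints a PTS watchdog might bark back at the sender."""
    complaints = []
    # Hypothetical tracking-system ID header, invented for this sketch.
    if not re.search(r"^TS-Id:\s*\S+", text, re.MULTILINE):
        complaints.append("no TS ID - please resend via the tracking system")
    # The usual patch tags (these do exist in kernel patch conventions).
    for tag in ("Signed-off-by", "Acked-by", "Tested-by"):
        if not re.search(rf"^{tag}:", text, re.MULTILINE):
            complaints.append(f"missing {tag}: tag")
    return complaints

mail = "Subject: [PATCH] fix foo\n\nSigned-off-by: A Hacker <a@example.org>\n"
print(check_patch_mail(mail))
```

Run on the sample mail, this barks about the missing TS ID and the
missing Acked-by:/Tested-by: tags.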

> Plus it gives testers more or less all patches currently pending 
> inclusion into Linus' tree in one kernel they can test.

Crazy development{0}. Somebody knows, that comprehensively testing
hibernation is their thing. I don't care about it, i care about foo, bar.
Thus i can apply for example lguest patches and implement and test new
asm-offset replacement, *easily*. Somebody, as you know, likes new fancy
file system, and no-way other. Let them be happy testing that thing
*easily*. Because another fancy NO_MHz will hang their testing bench
right after best ever speed results were recorded.

> The problem are more social problems like patches Andrew has never heard 
> of before getting into Linus' tree during the merge window.

Linus' watchdog, as well, asking for valid patch id, or just doesn't
care (in similar manner Linus does :).

So far no human is involved in social things. Do you agree?

Human power is worth and needed in particular patch discussion and
testing under the participation (by Cc, acking, test-ok *e-mails*) of
tracking system.

> >...
> > |-*- Feature Needed -*-
> >   Addition, needed is hardware user tested have/had/used. Currently
> >   ``reportbug'' tool includes packed specific and system specific
> >   additions automaticly gathered and inserted to e-mail sent to BTS.
> >   (e.g. )
> 
> The problem is that most problems don't occur on one well-defined 
> kind of hardware - patches often break in exactly the areas the patch
> author expected no problems in.

I tried to test that new fancy FS, and couldn't boot because of
yet another piece of ACPI crap. See theme {0}?

Overall testing, like Andrew does, is doubtless a brave thing, but he
would have more time after {1}, wouldn't he?

> And in many cases a patch for a device driver was written due to a bug 
> report - in such cases a tester with the hardware in question is already 
> available.

Tracking all possible testers (for next driver update, for example) is
in question.

> 
> >...
> > > but in the end, it needs to also be easy and painfree to the people
> > > involved, and also make sure that any added workflow doesn't require
> > > even *more* load and expertise on the already often overworked 
> > > maintainers..
> > 
> > Experienced BTS users and developers. Please, correct me if i'm wrong.
> > At least e-mail part of Debian's BTS and whole idea of it is *exactly*
> > what is needed. 

Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Don Armstrong
On Tue, 19 Jun 2007, Oleg Verych wrote:
> * From: Linus Torvalds
> * Newsgroups: gmane.linux.kernel
> * Date: Mon, 18 Jun 2007 17:09:37 -0700 (PDT)
>
> > I do agree. It _sounds_ like a great idea to try to control the
> > flow of patches better,
> 
> There were some ideas, i will try to summarize:
> 
> * New Patches (or sets) need tracking, review, testing
> 
>   Zero (tracking) done by sending (To, or bcc) [RFC] patch to something
>   like [EMAIL PROTECTED] (like BTS now). Notifications will
>   be sent to intrested maintainers (if meta-information is ok) or testers
>   (see below)

The BTS, while fairly good at tracking issues for distributions made
up of thousands of packages (like Debian), is rather suboptimal for
dealing with the workflow of a single (relatively) monolithic entity
like the linux kernel.

Since the ultimate goal is presumably to apply a patch to a git tree,
some sort of system which is built directly on top of git (or
intimately intertwined with it) is probably required. Some of the
mechanisms that the BTS uses, like the easy ability to control bugs by
mail, may be useful to incorporate, but I'd be rather surprised if it
could be made to work with the kernel developers' workflow as it exists now.
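
For reference, "using mail to control bugs" in the Debian BTS looks
roughly like this - a plain-text mail to control@bugs.debian.org (the
bug number and package name here are made up):

```text
To: control@bugs.debian.org

severity 123456 important
reassign 123456 some-package
tags 123456 + moreinfo
thanks
```

Each line is a control command; "thanks" tells the control server to
stop processing.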

It may be useful for whoever ends up designing the patch system to
take a glimpse at how it's done in debbugs, but since I don't know how
the workflow works now, and how people want to have it work in the
end, I can't tell you what features from debbugs would be useful to
use.

Finally, at the end of the day, my own time and effort (and the
primary direction of debbugs development) is aimed at supporting the
primary user of debbugs, the Debian project. People who understand (or
want to understand) the linux kernel team's workflow are the ones who
are going to need to do the heavy lifting here.


Don Armstrong
 
-- 
N: "Why should I believe that?"
B: "Because it's a fact."
N: "Fact?"
B: "F, A, C, T... fact"
N: So you're saying that I should believe it because it's true. 
   That's your argument?
B: It IS true.
-- "Ploy" http://www.mediacampaign.org/multimedia/Ploy.MPG

http://www.donarmstrong.com  http://rzlab.ucr.edu


Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Adrian Bunk
On Tue, Jun 19, 2007 at 06:06:47AM +0200, Oleg Verych wrote:
> [Dear Debbug developers, i wish your ideas will be useful.]
> 
> * From: Linus Torvalds
> * Newsgroups: gmane.linux.kernel
> * Date: Mon, 18 Jun 2007 17:09:37 -0700 (PDT)
> >
> > On Mon, 18 Jun 2007, Martin Bligh wrote:
> >> 
> >> Sorry to be a wet blanket, but I've seen those sort of things
> >> before, and they just don't seem to work, especially in the
> >> environment we're in with such a massive diversity of hardware.
> >
> > I do agree. It _sounds_ like a great idea to try to control the flow of 
> > patches better,
> 
> There were some ideas, i will try to summarize:
> 
> * New Patches (or sets) need tracking, review, testing
> 
>   Zero (tracking) done by sending (To, or bcc) [RFC] patch to something
>   like [EMAIL PROTECTED] (like BTS now). Notifications will
>   be sent to intrested maintainers (if meta-information is ok) or testers
>   (see below)
> 
>   First is mostly done by maintainers or interested individuals.
>   Result is "Acked-by" and "Cc" entries in the next patch sent. Due to
>   lack of tracking this things are done manually, are generally in
>   trusted manner. But bad like <[EMAIL PROTECTED]>
>   sometimes happens.

The goal is to get all patches for a maintained subsystem submitted to 
Linus by the maintainer.

>   When patch in sent to this PTS, your lovely
>   checkpatch/check-whatever-crap-has-being-sent tools can be set up as
>   gatekeepers, thus making impossible stupid errors with MIME coding,
>   line wrapping, whatever style you've came up with now in checking
>   sent crap.

The -mm kernel already implements what your proposed PTS would do.

Plus it gives testers more or less all patches currently pending 
inclusion into Linus' tree in one kernel they can test.

The problems are more social ones, like patches that Andrew has never 
heard of before getting into Linus' tree during the merge window.

>...
> |-*- Feature Needed -*-
>   Addition, needed is hardware user tested have/had/used. Currently
>   ``reportbug'' tool includes packed specific and system specific
>   additions automaticly gathered and inserted to e-mail sent to BTS.
> (e.g. http://permalink.gmane.org/gmane.linux.debian.devel.kernel/29740)

The problem is that most problems don't occur on one well-defined 
kind of hardware - patches often break in exactly the areas the patch
author expected no problems in.

And in many cases a patch for a device driver was written due to a bug 
report - in such cases a tester with the hardware in question is already 
available.

>...
> > but in the end, it needs to also be easy and painfree to the people
> > involved, and also make sure that any added workflow doesn't require
> > even *more* load and expertise on the already often overworked 
> > maintainers..
> 
> Experienced BTS users and developers. Please, correct me if i'm wrong.
> At least e-mail part of Debian's BTS and whole idea of it is *exactly*
> what is needed. Bugzilla fans, you can still use you useless pet,
> because Debian developers have done things, to track and e-mail states
> of bugs: 
>...

"useless pet"?
Be serious.
How many open source projects use Bugzilla and how many use the Debian BTS?
And then start thinking about why the "useless pet" has so many more 
user...

The Debian BTS requires you to either write emails with control messages 
or generate control messages with external tools.
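
For readers unfamiliar with it: a BTS control message is an ordinary mail to
control@bugs.debian.org whose body lists commands, one per line, terminated by
"thanks" (the bug number and the particular commands chosen here are only
illustrative):

```text
To: control@bugs.debian.org
Subject: kernel bug maintenance

severity 123456 important
reassign 123456 linux-2.6
tags 123456 + moreinfo
thanks
```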

In Bugzilla the same works through a web interface.

I know both the Debian BTS and Bugzilla, and although they are quite 
different, both are reasonable tools for their purpose.

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Oleg Verych
[Dropping noise for Debbugs, because interested people may join via Gmane]

On Tue, Jun 19, 2007 at 02:48:55PM +0200, Adrian Bunk wrote:
 On Tue, Jun 19, 2007 at 06:06:47AM +0200, Oleg Verych wrote:
  [Dear Debbug developers, i wish your ideas will be useful.]
  
  * From: Linus Torvalds
  * Newsgroups: gmane.linux.kernel
  * Date: Mon, 18 Jun 2007 17:09:37 -0700 (PDT)
  
   On Mon, 18 Jun 2007, Martin Bligh wrote:
   
   Sorry to be a wet blanket, but I've seen those sort of things
   before, and they just don't seem to work, especially in the
   environment we're in with such a massive diversity of hardware.
  
   I do agree. It _sounds_ like a great idea to try to control the flow of 
   patches better,
  
  There were some ideas, i will try to summarize:
  
  * New Patches (or sets) need tracking, review, testing
  
Zero (tracking) done by sending (To, or bcc) [RFC] patch to something
like [EMAIL PROTECTED] (like BTS now). Notifications will
be sent to intrested maintainers (if meta-information is ok) or testers
(see below)
  
First is mostly done by maintainers or interested individuals.
Result is Acked-by and Cc entries in the next patch sent. Due to
lack of tracking this things are done manually, are generally in
trusted manner. But bad like [EMAIL PROTECTED]
sometimes happens.
 
 The goal is to get all patches for a maintained subsystem submitted to 
 Linus by the maintainer.
 
When patch in sent to this PTS, your lovely
checkpatch/check-whatever-crap-has-being-sent tools can be set up as
gatekeepers, thus making impossible stupid errors with MIME coding,
line wrapping, whatever style you've came up with now in checking
sent crap.
 
 The -mm kernel already implements what your proposed PTS would do.

Having an all-in-one patchset like -mm is an easy way to provide
interested parties with -- you know what you have -- crazy development

However, a [P]TS is a tracking, recording and organizing tool. {1} Andrew's
cron daemon could easily run a script to check the status of a particular
patch (cc, tested-by, acked-by). If a patch has no TS ID, Andrew's watchdog
barks back at the patch originator (politely asking to resend the patch
with:

* the TS as the To: target
* the patch author as a Cc: target, which is useful because:
  . the author can check that copy himself with a text-only pager program
    (to see any MIME coding crap)
  . it helps prevent SPAM
* maybe somebody else in Cc or Bcc.)
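
The watchdog idea above can be sketched in a few lines of shell. Everything
here is invented for illustration -- the X-TS-ID header name, the messages,
and the check itself -- a real PTS gatekeeper would obviously do much more:

```shell
#!/bin/sh
# Hypothetical sketch of the proposed watchdog: scan an incoming patch
# mail for a tracking-system ID header and bark back if none is present.
check_patch_mail() {
    if grep -q '^X-TS-ID: ' "$1"; then
        echo "pass: patch carries a TS ID"
    else
        echo "bounce: no TS ID, asking originator to resend"
    fi
}

# demo on two fake mails
good=$(mktemp); printf 'X-TS-ID: 1234\nSubject: [PATCH] fix foo\n' > "$good"
bad=$(mktemp);  printf 'Subject: [PATCH] fix bar\n' > "$bad"
check_patch_mail "$good"
check_patch_mail "$bad"
rm -f "$good" "$bad"
```

A real gatekeeper would presumably also run the checkpatch-style validation
mentioned above before letting the mail through.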

 Plus it gives testers more or less all patches currently pending 
 inclusion into Linus' tree in one kernel they can test.

Crazy development{0}. Somebody knows that comprehensively testing
hibernation is their thing. I don't care about it; i care about foo and bar.
Thus i can apply, for example, the lguest patches and implement and test a
new asm-offsets replacement, *easily*. Somebody else, as you know, likes the
new fancy file system, and no other way. Let them be happy testing that thing
*easily*. Because another fancy NO_MHz will hang their testing bench
right after the best-ever speed results were recorded.

 The problem are more social problems like patches Andrew has never heard 
 of before getting into Linus' tree during the merge window.

Linus' watchdog likewise asks for a valid patch ID, or just doesn't
care (in a similar manner to what Linus does :).

So far no human is involved in social things. Do you agree?

Human power is worthwhile and needed in the particular patch discussion and
testing, with the tracking system participating (via Cc, acking and test-ok
*e-mails*).

 ...
  |-*- Feature Needed -*-
Addition, needed is hardware user tested have/had/used. Currently
``reportbug'' tool includes packed specific and system specific
additions automaticly gathered and inserted to e-mail sent to BTS.
(e.g. http://permalink.gmane.org/gmane.linux.debian.devel.kernel/29740)
 
 The problem is that most problems don't occur on one well-defined 
 kind of hardware - patches often break in exactly the areas the patch
 author expected no problems in.

I tried to test that new fancy FS, and couldn't boot because of
yet-another ACPI crap. See theme{0}?

Overall testing, like Andrew does, is doubtless a brave thing, but he'd have
more time after {1}, wouldn't he?

 And in many cases a patch for a device driver was written due to a bug 
 report - in such cases a tester with the hardware in question is already 
 available.

Tracking all possible testers (for the next driver update, for example) is
the question.

 
 ...
   but in the end, it needs to also be easy and painfree to the people
   involved, and also make sure that any added workflow doesn't require
   even *more* load and expertise on the already often overworked 
   maintainers..
  
  Experienced BTS users and developers. Please, correct me if i'm wrong.
  At least e-mail part of Debian's BTS and whole idea of it is *exactly*
  what is needed. Bugzilla fans, you can still use you useless pet,
  because Debian developers have done things, to track and e-mail states
  of bugs: 

Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Stefan Richter
On 6/19/2007 4:05 PM, Oleg Verych wrote:
> On Tue, Jun 19, 2007 at 02:48:55PM +0200, Adrian Bunk wrote:
> > The Debian BTS requires you to either write emails with control messages 
> > or generating control messages with external tools.
> ...
> > In Bugzilla the same works through a web interface.
> ...
> Basic concept of Debian BTS is what i've discovered after many useless
> hours i spent in Bugzilla. And this is mainly because of one basic
> important thing, that nobody acknowledged (for newbies, like me):
> 
> * E-Mail with useful MUAs, after it got reliable delivery MTAs with qmail
>   (or exim) is the main communication toolset.
> 
> Can't you see that from Linux's patch sending policy?

That's for developers, not for users.

There are different people involved in
  - patch handling,
  - bug handling (bugs are reported by end-users),
therefore don't forget that PTS and BTS have different requirements.
-- 
Stefan Richter
-=-=-=== -==- =--==
http://arcgraph.de/sr/
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Adrian Bunk
On Tue, Jun 19, 2007 at 04:05:12PM +0200, Oleg Verych wrote:
...
 On Tue, Jun 19, 2007 at 02:48:55PM +0200, Adrian Bunk wrote:
  On Tue, Jun 19, 2007 at 06:06:47AM +0200, Oleg Verych wrote:
...
 When patch in sent to this PTS, your lovely
 checkpatch/check-whatever-crap-has-being-sent tools can be set up as
 gatekeepers, thus making impossible stupid errors with MIME coding,
 line wrapping, whatever style you've came up with now in checking
 sent crap.
  
  The -mm kernel already implements what your proposed PTS would do.
 
 Having all-in-one patchset, like -mm is easy thing to provide
 interested parties with you know what you have -- crazy development
 
 However [P]TS is tracking, recording, organizing tool. {1} Andrew's cron
 daemon easily can run script to check status of particular patch (cc,
 tested-by, acked-by). If patch have no TS ID, Andrew's watchdog is
 barking back to patch originator (with polite asking to send patch as:
 
 * TS as To: target
 * patch author as Cc: target, this is useful to require:
   . author can check that copy himself with text-only pager program
 (to see any MIME coding crap)
   . preventing SPAM
 * maybe somebody else in Cc or Bcc.)

Quite a big part of -mm consists of maintainers' git trees.
Where are they in your tool?

And I still don't think your tool would make sense.
But hey, simply try it - that's the only way for you to prove me wrong.
People said similar things about the 2.6.16 kernel or my regression 
tracking, and I simply did it.

  Plus it gives testers more or less all patches currently pending 
  inclusion into Linus' tree in one kernel they can test.
 
 Crazy development{0}. Somebody knows, that comprehensively testing
 hibernation is their thing. I don't care about it, i care about foo, bar.
 Thus i can apply for example lguest patches and implement and test new
 asm-offset replacement, *easily*. Somebody, as you know, likes new fancy
 file system, and no-way other. Let them be happy testing that thing
 *easily*. Because another fancy NO_MHz will hang their testing bench
 right after best ever speed results were recorded.

Patch dependencies and patch conflicts will be the interesting parts 
when you implement this.

E.g. new fancy filesystem patch in -mm might depend on some VFS change 
that requires changes to all other filesystems.

I'm really looking forward to seeing how you will implement this for 
something like -mm with > 1000 patches (many of them git trees that 
themselves contain many different patches) without offloading all the 
additional work to the kernel developers.

  The problem are more social problems like patches Andrew has never heard 
  of before getting into Linus' tree during the merge window.
 
 Linus' watchdog, as well, asking for valid patch id, or just doesn't
 care (in similar manner Linus does :).
 
 So far no human is involved in social things. Do you agree?

No.

Forcing people to use some tool (no matter whether it's Bugzilla or
the PTS you want to implement) is 100% a social problem involving humans.

 Human power is worth and needed in particular patch discussion and
 testing under the participation (by Cc, acking, test-ok *e-mails*) of
 tracking system.

For getting people to use your tool, you will have to convince them that 
using your tool will bring them real benefits.

  ...
   |-*- Feature Needed -*-
 Addition, needed is hardware user tested have/had/used. Currently
 ``reportbug'' tool includes packed specific and system specific
 additions automaticly gathered and inserted to e-mail sent to BTS.
 (e.g. 
   http://permalink.gmane.org/gmane.linux.debian.devel.kernel/29740)
  
  The problem is that most problems don't occur on one well-defined 
  kind of hardware - patches often break in exactly the areas the patch
  author expected no problems in.
 
 I tried to test that new fancy FS, and couldn't boot because of
 yet-another ACPI crap. See theme{0}?
 
 Overall testing, like Andrew does, is doubtless brave thing, but he have
 more time after {1}, isn't it?

I doubt that the placing of some Acked-By: tags in patches is really what 
is killing Andrew's time.

How does Andrew check the status of 1500 patches in -mm in your PTS?

And how do you implement the use case that Andrew forwards a batch of
200 patches to Linus? How does the information from your tool come into git?

But hey, write your tool and convince Andrew of its advantages if you 
don't believe me.

  And in many cases a patch for a device driver was written due to a bug 
  report - in such cases a tester with the hardware in question is already 
  available.
 
 Tracking all possible testers (for next driver update, for example) is
 in question.

Spamming people who have some hardware with information about patches 
won't bring you anything. You need people willing to test patches that 
won't bring them any benefit - and if you have such people they are 
usually as well willing to simply regularly test -rc kernels.

  

Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Linus Torvalds


On Tue, 19 Jun 2007, Adrian Bunk wrote:
> 
> The goal is to get all patches for a maintained subsystem submitted to 
> Linus by the maintainer.

Well, to be honest, I've actually over the years tried to have a policy of 
*never* really having black-and-white policies.

The fact is, some maintainers are excellent. All the relevant patches 
*already* effectively go through them.

But at the same time, other maintainers are less than active, and some 
areas aren't clearly maintained at all. 

Also, being a maintainer often means that you are busy and spend a lot of 
time talking to *people* - it doesn't necessarily mean that you actually 
have the hardware and can test things, nor does it necessarily mean that 
you know every detail. 

So I point out in Documentation/ManagementStyle (which is written very 
much tongue-in-cheek, but at the same time it's really *true*) that 
maintainership is often about recognizing people who just know *better* 
than you!

> The -mm kernel already implements what your proposed PTS would do.
> 
> Plus it gives testers more or less all patches currently pending 
> inclusion into Linus' tree in one kernel they can test.
> 
> The problem are more social problems like patches Andrew has never heard 
> of before getting into Linus' tree during the merge window.

Not really. The problem boils down to this:

[EMAIL PROTECTED] linux]$ git-rev-list --all --since=100.days.ago | wc -l
7147
[EMAIL PROTECTED] linux]$ git-rev-list --no-merges --all --since=100.days.ago | wc -l
6768

ie over the last hundred days, we have averaged over 70 changes per day, 
and even ignoring merges and only looking at pure patches we have more 
than an average of 65 patches per day. Every day. Day in and day out.

That translates to five hundred commits a week, two _thousand_ commits per 
month, and 25 thousand commits per year. As a fairly constant stream.
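
Taking the two git-rev-list counts above at face value, the per-day, per-week
and per-year figures can be re-derived with plain shell arithmetic (integer
division, so the results are floors):

```shell
# Re-derive the rates from the counts quoted above:
# 7147 commits (6768 non-merge) over 100 days.
commits=7147
nonmerge=6768
echo "changes per day:  $((commits / 100))"        # 71, i.e. over 70
echo "patches per day:  $((nonmerge / 100))"       # 67, i.e. over 65
echo "commits per week: $((commits * 7 / 100))"    # 500
echo "commits per year: $((commits * 365 / 100))"  # 26086, ~25 thousand
```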

Will mistakes happen? Hell *yes*. 

And I'd argue that any flow that tries to guarantee that mistakes don't 
happen is broken. It's a sure-fire way to just frustrate people, simply 
because it assumes a level of perfection in maintainers and developers 
that isn't possible.

The accepted industry standard for bug counts is basically one bug per a 
thousand lines of code. And that's for released, *debugged* code. 

Yes, we should aim higher. Obviously. Let's say that we aim for 0.1 bugs 
per KLOC, and that we actually aim for that not just in _released_ code, 
but in patches.

What does that mean?

Do the math:

git log -M -p --all --since=100.days.ago | grep '^+' | wc -l

That basically takes the last one hundred days of development, shows it 
all as patches, and just counts the new lines. It takes about ten 
seconds to run, and returns 517252 for me right now.

That's *over*half*a*million* lines added or changed!

And even with the expectation that we do ten times better than what is 
often quoted as an industry average, and even with the expectation that 
this is already fully debugged code, that's at least 50 bugs in the last 
one hundred days.

Yeah, we can be even more stringent, and actually subtract the number of 
lines _removed_ (274930), and assume that only *new* code contains bugs - 
that's still just under a quarter million purely *added* lines, and 
maybe we'd expect just 24 new bugs in the last 100 days.
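
As a quick sanity check on the estimates above (0.1 bugs per KLOC is one bug
per 10000 lines), the arithmetic works out as stated:

```shell
# Check the bug-count arithmetic from the quoted figures:
# 517252 lines added/changed, 274930 removed, 1 bug per 10000 lines.
added=517252
removed=274930
echo "bugs in all churn:     $((added / 10000))"             # 51, "at least 50"
echo "net new lines:         $((added - removed))"           # 242322
echo "bugs in new code only: $(((added - removed) / 10000))" # 24
```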

[ Argument: some of the old code also contained bugs, so the lines added 
  to replace it balance out. Counter-argument: new code is less well 
  tested by *definition* than old code, so.. Counter-counter-argument: the 
  new code was often added to _fix_ a bug, so the code removed had an even 
  _higher_ bug rate than normal code.. 

  End result? We don't know. This is all just food for thought. ]

So here's the deal: even by the most *stringent* reasonable rules, we add 
a new bug every four days. That's just something that people need to 
accept. The people who say we must never introduce a regression aren't 
living on planet earth, they are living in some wonderful world of 
Blarney, where mistakes don't happen, developers are perfect, hardware is 
perfect, and maintainers always catch things.

> The problem is that most problems don't occur on one well-defined 
> kind of hardware - patches often break in exactly the areas the patch
> author expected no problems in.

Note that the industry-standard 1-bug-per-kloc thing has nothing to do 
with hardware. Somebody earlier in this thread (or one of the related 
ones) said that git bisect is only valid for bugs that happen due to 
hardware issues, which is just totally *ludicrous*.

Yes, hardware makes it harder to test, but even *without* any hardware- 
specific issues, bugs happen. The developer just didn't happen to trigger 
the condition, or didn't happen to notice it when he *did* trigger it.

So don't go overboard about hardware. Yes, hardware-specific issues have 
their own set of problems, and yes, drivers have a much higher incidence 
of bugs per 

Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Stefan Richter
Oleg Verych wrote:
> On Tue, Jun 19, 2007 at 02:48:55PM +0200, Adrian Bunk wrote:
> > The -mm kernel already implements what your proposed PTS would do.
> ...
> > Plus it gives testers more or less all patches currently pending 
> > inclusion into Linus' tree in one kernel they can test.
> 
> Crazy development{0}. Somebody knows, that comprehensively testing
> hibernation is their thing. I don't care about it, i care about foo, bar.
> Thus i can apply for example lguest patches and implement and test new
> asm-offset replacement, *easily*.

That's right.  But the production of subsystem test patchkits is
volunteer work which will be hard to unify.

I'm not saying it's impossible to reach some degree of organized
production of test patchkits; after all we already have some
standardization regarding patch submission which is volunteer work too.
-- 
Stefan Richter
-=-=-=== -==- =--==
http://arcgraph.de/sr/
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Oleg Verych
On Tue, Jun 19, 2007 at 04:27:15PM +0200, Stefan Richter wrote:
> On 6/19/2007 4:05 PM, Oleg Verych wrote:
> > On Tue, Jun 19, 2007 at 02:48:55PM +0200, Adrian Bunk wrote:
> > > The Debian BTS requires you to either write emails with control messages 
> > > or generating control messages with external tools.
> > ...
> > > In Bugzilla the same works through a web interface.
> > ...
> > Basic concept of Debian BTS is what i've discovered after many useless
> > hours i spent in Bugzilla. And this is mainly because of one basic
> > important thing, that nobody acknowledged (for newbies, like me):
> > 
> > * E-Mail with useful MUAs, after it got reliable delivery MTAs with qmail
> >   (or exim) is the main communication toolset.
> > 
> > Can't you see that from Linux's patch sending policy?
> 
> That's for developers, not for users.
> 
> There are different people involved in
>   - patch handling,
>   - bug handling (bugs are reported by end-users),
> therefore don't forget that PTS and BTS have different requirements.

Sure. But if tracking was done, possible bugs were killed, and a user's bug
report seems to depend on that patch (bisecting), why not have a
linkage here? It's useful for a developer (in sub-system association)
next time, to see what went wrong and check the test-cases; users might be
interested in running them too before crying (again) about a broken
system. A bug report can become part of a (reopened) patch discussion (as
i've written). Until then, as a bug-candidate without an identified patch, it
can be associated with some particular sub-system or an abstract
bug-category {1}.

Reversed in time: as bisection shows, problems don't happen
simply because of something abstract. If a problem is worth solving,
eventually there will be a patch trying to solve it, in both cases:

* when the breaking patch (found by bisection) is actually correct, but a
  hardware (or similarly independent) problem arises;
* something different, like a feature request.

So, these guys are candidates for patches, and can have IDs numerically from
the same domain as patch IDs, but with a different prefix, saying "i'm just a
candidate for a patch". Bugs {1} are obviously in this category.

The current identification of problems and association with patches
has zero tracking or automation, while Bugzilla is
believed by some to have positive efficiency in bug tracking.

So the two (patch/bug tracking) aren't that perpendicular to each other at
all.

Eventually there might be such perfect unification that bug tracking becomes
obsolete, because of good tracking of patches/features added and what
they did/do.

In any case, i would like to ask the mentors to write at least something
resembling a technical specification, if what i'm saying makes sense to
you. Your experience is a treasure that must be preserved and
possibly automated/organized.

-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Oleg Verych
Linus,

On Tue, Jun 19, 2007 at 08:01:19AM -0700, Linus Torvalds wrote:
> 
> 
> On Tue, 19 Jun 2007, Adrian Bunk wrote:
> > 
> > The goal is to get all patches for a maintained subsystem submitted to 
> > Linus by the maintainer.

Nice quote. I'm trying to make a proposition and convince Adrian, who is in
opposition, but the whole thread is turning into obeying his extreme POV...
 
> But quite frankly, anybody who aims for perfect without taking reality 
> into account is just not realistic. And if that's part of the goal of some 
> new process, then I'm not even interested in listening to people discuss 
> it.

I'm proposing a kind of smart tracking, summarized before. I'm not an
idealist doing manual work. Making tools -- that is what i've picked up from
one of your mails. Hence my hope of getting more opinions on that.

> If this plan cannot take reality into account, please stop Cc'ing me. I'm
> simply not interested.

This one is the last, at least from me. Sorry for taking your time.



Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Oleg Verych
* Date: Tue, 19 Jun 2007 17:08:13 +0200

> > Crazy development{0}. Somebody knows, that comprehensively testing
> > hibernation is their thing. I don't care about it, i care about foo, bar.
> > Thus i can apply for example lguest patches and implement and test new
> > asm-offset replacement, *easily*.
> 
> That's right.  But the production of subsystem test patchkits is
> volunteer work which will be hard to unify.
> 
> I'm not saying it's impossible to reach some degree of organized
> production of test patchkits; after all we already have some
> standardization regarding patch submission which is volunteer work too.

But still there is no single opinion on which tree to base a patch against.
For some it's Linus's mainline, for others it's the bleeding-edge -mm. And
there never will be just one.

Thus, a particular patch entry might have both -mm and Linus-tree rebased
versions, or (as Adrian noted) a VFS.asof02-07-2007 FANCYFS one. For
example, Rusty did that after somebody asked him to have not only the -mm
lguest version. So, for a really intrusive feature/patch (and not one in
mid-development, Adrian), the author can keep such a version (with a git
branch, a patch directory, or something similar).

Counter-example: scheduler patches are extraordinary, with large threads of
replies, but that is (one of the) classical cases of release early, release
often. The proposed bureaucracy doesn't apply ;)



Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Linus Torvalds


On Tue, 19 Jun 2007, Oleg Verych wrote:
> 
> I'm proposing kind of smart tracking, summarized before. I'm not an
> idealist, doing manual work. Making tools -- is what i've picked up from
> one of your mails. Thus hope of having more opinions on that.

Don't get me wrong, I wasn't actually responding to you personally, I was 
actually responding mostly to the tone of this thread.

So I was responding to things like the example from Bartlomiej about 
missed opportunity for taking developer review into account (and btw, I 
think a little public shaming might not be a bad idea - I believe more in 
*social* rules than in *technical* rules), and I'm responding to some of 
the commentary by Adrian and others about no regressions *ever*.

These are things we can *wish* for. But the fact that we might wish for 
them doesn't actually mean that they are really good ideas to aim for in 
practice. 

Let me put it another way: a few weeks ago there was this big news story 
in the New York Times about how forgetting is a very essential part 
about remembering, and people passed this around as if it was a big 
revelation. People think that people with good memories have a good 
thing.

And personally, I was like "Duh". 

Good memory is not about remembering everything. Good memory is about 
forgetting the irrelevant, so that the important stuff stands out and you 
*can* remember it. But the big deal is that yes, you have to forget stuff, 
and that means that you *will* miss details - but you'll hopefully miss 
the stuff you don't care for. The keyword being hopefully. It works most 
of the time, but we all know we've sometimes been able to forget a detail 
that turned out to be crucial after all.

So the *stupid* response to that is we should remember everything. It 
misses the point. Yes, we sometimes forget even important details, but 
it's *so* important to forget details, that the fact that our brains 
occasionally forget things we later ended up needing is still *much* 
preferable to trying to remember everything.

The same tends to be true of bug hunting, and regression tracking. 

There's a lot of noise there. We'll never get perfect, and I'll argue 
that if we don't have a system that tries to actively *remove* noise, 
we'll just be overwhelmed. But that _inevitably_ means that sometimes 
there was actually a signal in the noise that we ended up removing, 
because nobody saw it as anything but noise. 

So I think people should concentrate on turning noise into clear 
signal, but at the same time remember that that inevitably is a lossy 
transformation, and just accept the fact that it will mean that we 
occasionally make mistakes. 

This is why I've been advocating bugzilla "forget" stuff, for example. I 
tend to see bugzilla as a place where noise accumulates, rather than a 
place where noise is made into a signal. 

Which gets me to the real issue I have: the notion of having a process for 
_tracking_ all the information is actually totally counter-productive, if 
a big part of the process isn't also about throwing noise away.

We don't want to "save" all the crud. I don't want "smart tracking" to 
keep track of everything. I want "smart forgetting", so that we are only 
left with the major signal - the stuff that matters. 

Linus


Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Natalie Protasevich

On 6/19/07, Linus Torvalds [EMAIL PROTECTED] wrote:



> On Tue, 19 Jun 2007, Oleg Verych wrote:
> > 
> > I'm proposing kind of smart tracking, summarized before. I'm not an
> > idealist, doing manual work. Making tools -- is what i've picked up from
> > one of your mails. Thus hope of having more opinions on that.
> 
> Don't get me wrong, I wasn't actually responding to you personally, I was
> actually responding mostly to the tone of this thread.
> 
> So I was responding to things like the example from Bartlomiej about
> missed opportunity for taking developer review into account (and btw, I
> think a little public shaming might not be a bad idea - I believe more in
> *social* rules than in *technical* rules), and I'm responding to some of
> the commentary by Adrian and others about no regressions *ever*.
> 
> These are things we can *wish* for. But the fact that we might wish for
> them doesn't actually mean that they are really good ideas to aim for in
> practice.
> 
> Let me put it another way: a few weeks ago there was this big news story
> in the New York Times about how forgetting is a very essential part
> about remembering, and people passed this around as if it was a big
> revelation. People think that people with good memories have a good
> thing.
> 
> And personally, I was like "Duh".
> 
> Good memory is not about remembering everything. Good memory is about
> forgetting the irrelevant, so that the important stuff stands out and you
> *can* remember it. But the big deal is that yes, you have to forget stuff,
> and that means that you *will* miss details - but you'll hopefully miss
> the stuff you don't care for. The keyword being hopefully. It works most
> of the time, but we all know we've sometimes been able to forget a detail
> that turned out to be crucial after all.
> 
> So the *stupid* response to that is we should remember everything. It
> misses the point. Yes, we sometimes forget even important details, but
> it's *so* important to forget details, that the fact that our brains
> occasionally forget things we later ended up needing is still *much*
> preferable to trying to remember everything.
> 
> The same tends to be true of bug hunting, and regression tracking.
> 
> There's a lot of noise there. We'll never get perfect, and I'll argue
> that if we don't have a system that tries to actively *remove* noise,
> we'll just be overwhelmed. But that _inevitably_ means that sometimes
> there was actually a signal in the noise that we ended up removing,
> because nobody saw it as anything but noise.
> 
> So I think people should concentrate on turning noise into clear
> signal, but at the same time remember that that inevitably is a lossy
> transformation, and just accept the fact that it will mean that we
> occasionally make mistakes.


This is the most crucial point so far, in my opinion.
Not only are the people who report bugs smart - they are curious,
enthusiastic, and passionate about their system, job, or hobby - whatever
Linux means to them. They often do their own investigations and give lots
of detail, and often others jump in with "me too" and give even more detail
(and more noise). But the real detail that would help in bug assessment is
not there, and it needs to be requested in lengthy exchanges (lengthy
time-wise, since every request takes hours, days, months...).
I think it would help to make some attempt to lead reporters on to giving
out what's important. Cold and impersonal up-front fields and drop-down
menus take a lot of the noise and heat off the actual report.
Another observation: things like "me too" notes should be encouraged to
become separate reports, because generally only the maintainer and the
people who work directly on the module can sort out whether it is the same
problem; real problems get lost and unaccounted for when they end up in the
wrong buckets this way.
--Natalie






Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Oleg Verych
* Date: Tue, 19 Jun 2007 10:04:58 -0700 (PDT)
> 
> On Tue, 19 Jun 2007, Oleg Verych wrote:
> > 
> > I'm proposing kind of smart tracking, summarized before. I'm not an
> > idealist, doing manual work. Making tools -- is what i've picked up from
> > one of your mails. Thus hope of having more opinions on that.
> 
> Don't get me wrong, I wasn't actually responding to you personally, I was
> actually responding mostly to the tone of this thread.

By reading only known persons[1]? Fine, that's OK.

But i hope i made some useful statements. In fact, the noise-reduction
stuff WRT bug reports appeared earlier, in my analysis of Adrian's POV here
(the reportbug tool). It showed up again when i wrote about traces, where
testers (bug reporters) can find test cases before they cry (again) about
some issues. I have seen this; bugzilla @ mozilla is an example -- known
history.

[1] Noise filtering -- that's obvious for me, after all :)

Rather than flaming further, i'm just going to try to implement something.
Hopefully my next patch will be usefully smart-tracked.

Thanks!



Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Stefan Richter
Oleg Verych wrote:
> On Tue, Jun 19, 2007 at 04:27:15PM +0200, Stefan Richter wrote:
> > There are different people involved in
> >   - patch handling,
> >   - bug handling (bugs are reported by end-users),
> > therefore don't forget that PTS and BTS have different requirements.
> 
> Sure. But if tracking were done and possible bugs were killed, and a
> user's bug report turns out (by bisecting) to depend on that patch, why
> not have a linkage here?

Of course there are certain links between bugs and patches, and thus
there are certain links between bug tracking and patch tracking.
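
[Editorial aside, not part of Stefan's mail.] The bisection link mentioned
here is what `git bisect` automates. A minimal self-contained sketch with a
throwaway repository and a trivial `grep` standing in for the real test (in
practice the tree is linux.git and the test builds/boots the kernel):

```shell
# Build a 4-commit history where the 3rd commit breaks things, then let
# git bisect find it automatically.  Everything here is a toy stand-in.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email tester@example.org
git config user.name tester
echo ok > state
git add state
git commit -qm 'baseline (good)'
git commit -qm 'unrelated change (good)' --allow-empty
echo broken > state
git commit -qam 'the patch that breaks things (bad)'
git commit -qm 'later change (bad)' --allow-empty
# Mark the bad and good endpoints, then run the test at each midpoint;
# bisect finishes by printing "<sha> is the first bad commit".
git bisect start HEAD HEAD~3
git bisect run sh -c 'grep -q ok state'
```

A bug report that records the resulting commit ID is thereby linked to the
patch that introduced the problem - exactly the bug-patch association
discussed in this subthread.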

[...]
> Current identification of problems and patch association
> have completely zero level of tracking or automation, while Bugzilla is
> believed by somebody to have positive efficiency in bug tracking.

I, as maintainer of a small subsystem, can personally track bug--patch
relationships with bugzilla just fine, on its near-zero level of
automation and integration.

Nevertheless, would a more integrated bug/patch tracking system help me
improve the quality of my output? ---
a) Would it save me more time than it costs me to fit into the system
   (time that can be invested in actual debugging)?
   This can only be answered after trying it.
b) Would it help me to spot mistakes in patches before I submit?
   No.
c) Would I get quicker feedback from testers?
   That depends on whether such a system attracts testers and helps
   testers to work efficiently.  This is also something that can only be
   speculated about without trying it.

The potential testers that I deal with are mostly either very
non-technical persons, or persons who are experienced in their
hardware/application area but *not* in kernel internals and kernel
development procedures.
-- 
Stefan Richter
-=-=-=== -==- =--==
http://arcgraph.de/sr/


Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Oleg Verych

* Date: Tue, 19 Jun 2007 19:50:48 +0200
> 
> [...]
> > Current identification of problems and patch association
> > have completely zero level of tracking or automation, while Bugzilla is
> > believed by somebody to have positive efficiency in bug tracking.
> 
> I, as maintainer of a small subsystem, can personally track bug--patch
> relationships with bugzilla just fine, on its near-zero level of
> automation and integration.
> 
> Nevertheless, would a more integrated bug/patch tracking system help me
> improve quality of my output? ---
> a) Would it save me more time than it costs me to fit into the system
>    (time that can be invested in actual debugging)?
>    This can only be answered after trying it.

I'm not a wizard, so if i must answer now: No. [1]

[1] Your User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US;
rv:1.8.1.4) Gecko/20070509 SeaMonkey/1.1.2

> b) Would it help me to spot mistakes in patches before I submit?
>    No.

If you have ever tried to report a bug with the reportbug tool in Debian,
you may understand what i meant. Nothing can substitute for intelligence.
Something can, however, reduce the impact of laziness (in searching for
relevant bug reports).

> c) Would I get quicker feedback from testers?
>    That depends on whether such a system attracts testers and helps
>    testers to work efficiently.  This is also something that can only be
>    speculated about without trying it.
> 
> The potential testers that I deal with are mostly either very
> non-technical persons, or persons who are experienced in their
> hardware/application area but *not* in kernel internals and kernel
> development procedures.

They also don't bother subscribing to mailing lists, and they like to write
blogs. I'm not sure about the hardware databases you talked about; i will
talk about gathering information from testers.

Debian has experimental and unstable branches; people willing to have new
stuff are likely to run these, not testing or stable. The BTS just collects
bug reports (http://bugs.debian.org/). If the kernel team uploads a new
kernel (a release, or recently even an rc), interested people will use it
after the next upgrade. Bug reports get collected, but the main answer will
be: try to reproduce it on the most recent kernel.org one. Here, what i
have proposed may play the role you expect. Handling misconfiguration,
malfunctioning hardware, and programmer's error (as Linus noted) in an
organized manner may easily bring the reporting person into kernel.org's
testing. At the driver or small-subsystem level this may work. Again, it's
all about information, not intelligence.



Re: This is [Re:] How to improve the quality of the kernel[?].

2007-06-19 Thread Stefan Richter
Oleg Verych wrote:
[I wrote]
> > a) Would it save me more time than it costs me to fit into the system
> >    (time that can be invested in actual debugging)?
> >    This can only be answered after trying it.
> 
> I'm not a wizard, so if i must answer now: No. [1]
> 
> [1] Your User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US;
> rv:1.8.1.4) Gecko/20070509 SeaMonkey/1.1.2

Seamonkey isn't interoperable with Debian's BTS?
Lucky me that I frequently use other MUAs too.
-- 
Stefan Richter
-=-=-=== -==- =--==
http://arcgraph.de/sr/