Re: Giving developers clue how many testers verified certain kernel version

2005-07-24 Thread Martin MOKREJŠ

Hi Adrian,
 I think you don't understand me. I do report bugs and always will.
The point was that developers could be "assured" there is probably
no problem when people do NOT report bugs in that piece of code,
because they would know that it _was_ tested by 1000 people on 357 different
hardware configurations. And they could even check the .configs, lshw output, etc.
Sure, people would report a problem, but if you do NOT hear of one, then either
there is no problem, or nobody cared to report it, or nobody tested. So you know
nothing at all, and you had better wait some days or weeks until the patch gets lost
in the lkml archives; if that doesn't happen, it gets into -ac or -mm.

 And that is exactly why I proposed this. Then you would know that 1000
people really cared and used the code, and it would then be reasonable
to expect that there is really no bug in it.

 Take it the other way around. You may be reluctant to commit some
patch to the official tree. ;) The guy who wrote the patch says "It was tested,
please apply". ;-) If he says the patch has been sitting in the -mm or -ac tree for
a while - like two months - you might be more inclined to commit, right?
And if you knew the patch had been tested between -git5 and -git6 by 1000 people
within 5 days, you wouldn't wait either, right?
Martin

Adrian Bunk wrote:

On Sun, Jul 24, 2005 at 08:45:16PM +0200, Martin MOKREJŠ wrote:

well, the idea was to give you a clue how many people did NOT complain,
because it either worked or they did not realize/care. The goal
was different. For example, I have 2 computers and both need the current ACPI
patch to work fine. I went to bugzilla and found nobody had filed such bugs
before - so I did, and said it is already fixed in the current ACPI patch.
But you'd never know that I tested that successfully. And I don't believe
in sending emails to lkml saying that I installed a patch and it did not break
anything. I hope you get the idea now. ;)



in your ACPI example there is a bug/problem (ACPI needs updating).

And ACPI is a good example where even 1000 success reports wouldn't help 
because a slightly different hardware or BIOS version might make the 
difference.


Usually "no bug report" indicates that something is OK.
And if you are unsure whether an unusual setup or hardware is actually 
tested, it's usually best to ask on linux-kernel whether someone 
could test it.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Giving developers clue how many testers verified certain kernel version

2005-07-24 Thread Adrian Bunk
On Sun, Jul 24, 2005 at 08:45:16PM +0200, Martin MOKREJŠ wrote:

> Hi Adrian,

Hi Martin,

>  well, the idea was to give you a clue how many people did NOT complain,
> because it either worked or they did not realize/care. The goal
> was different. For example, I have 2 computers and both need the current ACPI
> patch to work fine. I went to bugzilla and found nobody had filed such bugs
> before - so I did, and said it is already fixed in the current ACPI patch.
> But you'd never know that I tested that successfully. And I don't believe
> in sending emails to lkml saying that I installed a patch and it did not break
> anything. I hope you get the idea now. ;)

in your ACPI example there is a bug/problem (ACPI needs updating).

And ACPI is a good example where even 1000 success reports wouldn't help 
because a slightly different hardware or BIOS version might make the 
difference.

Usually "no bug report" indicates that something is OK.
And if you are unsure whether an unusual setup or hardware is actually 
tested, it's usually best to ask on linux-kernel whether someone 
could test it.

> Martin

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Re: Giving developers clue how many testers verified certain kernel version

2005-07-24 Thread Martin MOKREJŠ

Hi Adrian,
 well, the idea was to give you a clue how many people did NOT complain,
because it either worked or they did not realize/care. The goal
was different. For example, I have 2 computers and both need the current ACPI
patch to work fine. I went to bugzilla and found nobody had filed such bugs
before - so I did, and said it is already fixed in the current ACPI patch.
But you'd never know that I tested that successfully. And I don't believe
in sending emails to lkml saying that I installed a patch and it did not break
anything. I hope you get the idea now. ;)
Martin

Adrian Bunk wrote:

On Fri, Jul 22, 2005 at 03:34:09AM +0200, Martin MOKREJŠ wrote:

Hi,

Hi Martin,

I think the discussion going on here in another thread, about the lack
of positive information on how many testers successfully tested a certain
kernel version, could easily be addressed with a real solution.

How about opening a separate "project" on bugzilla.kernel.org, named
kernel-testers or whatever, where, whenever the cvs/svn/bk gatekeepers
release some kernel patch, an empty "bugreport" would be opened
for that version, say for 2.6.13-rc3-git4.

Anybody willing to join the crew who cared to download the patch
and test the kernel would post just a single comment/follow-up
to _that_ "bugreport", with either a "positive" rating or the URL
of his own report of some new bug. When the bug gets closed,
it would be immediately obvious in the 2.6.13-rc3-git4 bug ticket,
as that bug would be struck through as closed.

Then we could easily just browse through and see that 2.6.13-rc2
was tested by 33 fellows, while 3 of them found a problem and 2 such
problems have been closed since then.
...
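The bookkeeping described above is essentially a per-version tally; a minimal sketch of the counting involved (all names and data invented for illustration):

```python
# Minimal sketch of the proposed per-version tally. Each entry posted to
# a version's "bugreport" is either "ok" or a bug id; closed bugs are
# tracked separately. All data here is invented for illustration.
reports = {"2.6.13-rc2": ["ok"] * 30 + ["bug#101", "bug#102", "bug#103"]}
closed = {"bug#101", "bug#102"}

def summarize(version):
    """Return (testers, problems found, problems closed since)."""
    entries = reports[version]
    bugs = [e for e in entries if e != "ok"]
    return len(entries), len(bugs), sum(1 for b in bugs if b in closed)

print(summarize("2.6.13-rc2"))  # -> (33, 3, 2)
```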



Most likely, only a small minority of the people downloading a patch would 
register at such a "project".


The important part of the work, the bug reports, can already go to 
linux-kernel and/or the Bugzilla today.


You'd be spending effort on a "project" that would only produce some 
numbers of questionable value.




Martin



cu
Adrian



--
Martin Mokrejs
Email: 'bW9rcmVqc21Acmlib3NvbWUubmF0dXIuY3VuaS5jeg==\n'.decode('base64')
GPG key is at http://www.natur.cuni.cz/~mmokrejs


Re: Giving developers clue how many testers verified certain kernel version

2005-07-23 Thread Lee Revell
On Sat, 2005-07-23 at 19:05 +1000, Con Kolivas wrote:
> Indeed, and the purpose of the benchmark is to quantify something rather than 
> leave it to subjective feeling. Fortunately, if I were to quantify the current 
> kernel's situation, I would say everything is fine.

Agreed.  Unfortunately, not everything in userspace is fine.  I think a
lot of these interactivity problems are due to X and broken/bloated
apps.

Lee

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Giving developers clue how many testers verified certain kernel version

2005-07-23 Thread Jesper Krogh
In gmane.linux.kernel, Blaisorblade wrote:
>  Forgot drivers testing? That is where most of the bugs are hidden, and where 
>  wide user testing is definitely needed because of the various hardware bugs 
>  and different configurations existing in real world.

A way to increase testing of a particular kernel would be to provide
the following (Debian example):
... example .. 
An apt repository with the newest tagged kernel, built modular for the
architecture. 

Just drop all tagged kernels into a common repository that users can
follow; then I'd be happy to test a new kernel on every reboot of my
system. And I'd still respond if anything was broken in the
new kernel. 

Then it wouldn't be "try this patch and see if that solves anything",
but rather:

apt-get install kernel-image-386-torvalds-linux-2.6-v2.6.13-rc3

(automatically built from the "torvalds/linux-2.6" branch with tag
"v2.6.13-rc3", using a modular kernel configuration similar to the one
used in the stock Debian kernels.)

Then I find and report something, and "Pavel Machek" releases a "try-fix" by
tagging a branch in a tree and tells me to try
kernel-image-386-pavel-good-2.6-v2.6.13-rc3 
instead. 

(and variations: acpi/no-acpi, smp, etc.)

... example end .. 


It would mean quite a lot of central kernel building, but as far as I can
see, it can be fully automated. 

It would definitely lower the barrier to participating in testing, but
I am not the one to decide whether that would be a desirable goal, or,
for that matter, whether it is worth the work. 
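As a purely hypothetical sketch of the repository side, a single extra APT source would be enough for users to follow (the URL below is invented for illustration):

```
# Hypothetical addition to /etc/apt/sources.list -- this repository
# does not exist; it stands in for the automated kernel-build archive:
deb http://kernel-snapshots.example.org/debian unstable main
```

After an apt-get update, installing a tagged build would then be exactly the kind of one-line apt-get command shown above.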

Jesper
-- 
./Jesper Krogh, [EMAIL PROTECTED], Jabber ID: [EMAIL PROTECTED]




Re: Giving developers clue how many testers verified certain kernel version

2005-07-23 Thread Con Kolivas
On Sat, 23 Jul 2005 01:34 pm, Lee Revell wrote:
> On Fri, 2005-07-22 at 20:31 -0700, Linus Torvalds wrote:
> > On Fri, 22 Jul 2005, Lee Revell wrote:
> > > Con's interactivity benchmark looks quite promising for finding
> > > scheduler related interactivity regressions.
> >
> > I doubt that _any_ of the regressions that are user-visible are
> > scheduler-related. They all tend to be disk IO issues (bad scheduling or
> > just plain bad drivers), and then sometimes just VM misbehaviour.
> >
> > People are looking at all these RT patches, when the thing is that most
> > nobody will ever be able to tell the difference between 10us and 1ms
> > latencies unless it causes a skip in audio.
>
> I agree re: the RT patches, but what makes Con's benchmark useful is
> that it also tests interactivity (measuring in msecs vs. usecs) with
> everything running SCHED_NORMAL, which is a much better approximation of
> a desktop load.  And the numbers do go well up into the range where
> people would notice, tens and hundreds of ms.

Indeed, and the purpose of the benchmark is to quantify something rather than 
leave it to subjective feeling. Fortunately, if I were to quantify the current 
kernel's situation, I would say everything is fine.

Cheers,
Con


Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Adrian Bunk
On Fri, Jul 22, 2005 at 09:15:14PM -0500, Alejandro Bonilla wrote:
> Lee Revell wrote:
> 
> >On Fri, 2005-07-22 at 20:07 -0500, Alejandro Bonilla wrote:
> > 
> >>I will get flames for this, but my laptop boots faster and sometimes 
> >>responds faster in 2.4.27 than in 2.6.12. Sorry, but this is the fact 
> >>for me. IBM T42.
> >
> >Sorry dude, but there's just no way that any automated process can catch
> >these.
> >
> I'm not looking for an automated process for this, but for something 
> general when moving from 2.6.11 to 2.6.12, or from any version to 
> another (at least within the same kernel branch).
>...

You send:
- a problem description X
- tell that the last working kernel was Y
- tell that it is broken in kernel Z

The probability of any kernel developer being interested in your problem 
increases:
- the better the description X is
- the nearer versions Y and Z are together
- the more recent version Y is

Ideally, you are able to say that patch A in the latest -mm kernel
broke it.

It's perfectly OK to send a description X that says:
- with version Y and the following workload B, everything is working
  perfectly
- with version Z and the same workload B, XMMS is stuttering

If any kernel developer is interested in your bug report, he will tell 
you which data might be interesting for debugging the problem.

The problem is that debugging a problem often requires knowledge about 
possible causes and changes between versions Y and Z in this area. Even 
a kernel developer who perfectly knows one part of the kernel might not 
be able to debug a problem in a completely different area of the kernel.
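As a rough illustration (my own, not from the thread) of why it helps when versions Y and Z are close together: narrowing a regression down to a single patch by binary search takes roughly log2 of the number of intervening patches, so fewer patches means far fewer test boots. A sketch in Python, with invented patch counts:

```python
import math

# Binary-searching for the patch that broke something between a good
# kernel Y and a bad kernel Z takes roughly ceil(log2(N)) test boots
# for N intervening patches.
def bisection_steps(patches_between):
    if patches_between <= 1:
        return 0
    return math.ceil(math.log2(patches_between))

# Invented patch counts, for illustration only:
print(bisection_steps(6000))  # a whole release apart -> 13 test boots
print(bisection_steps(200))   # two nearby -git snapshots -> 8 test boots
```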

> .Alejandro

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Alejandro Bonilla

Linus Torvalds wrote:


On Fri, 22 Jul 2005, Lee Revell wrote:

Con's interactivity benchmark looks quite promising for finding
scheduler related interactivity regressions.
I doubt that _any_ of the regressions that are user-visible are
scheduler-related. They all tend to be disk IO issues (bad scheduling or
just plain bad drivers), and then sometimes just VM misbehaviour.

People are looking at all these RT patches, when the thing is that most
nobody will ever be able to tell the difference between 10us and 1ms
latencies unless it causes a skip in audio.
 

True, and I just couldn't agree more with Lee that many of the delays 
one sees are because of user space. Still, I have some doubts about 
how much faster 2.6 really is, since 2.4 is faster at other things.


I.e., from my newbie view, I can see 2.6 running faster in X, compiling, 
and such, but I see 2.4 being much faster for running commands and for 
response and interaction on the console. But then again, this could be just me...




Linus

 


.Alejandro


Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Alejandro Bonilla

Lee Revell wrote:


On Fri, 2005-07-22 at 21:15 -0500, Alejandro Bonilla wrote:
 

OK, I will, but first of all I need to learn how to tell whether benchmarks 
are better or worse.



Con's interactivity benchmark looks quite promising for finding
scheduler-related interactivity regressions.  It certainly has confirmed
what we already knew re: SCHED_FIFO performance; if we extend that to
SCHED_OTHER, which is a more interesting problem, then there's serious
potential for improvement.  AFAIK no one has posted any 2.4 vs 2.6
interbench results yet...
 


I will give it a try.


I suspect a lot of the boot time issue is due to userspace.  But, it
should be trivial to benchmark this one, just use the TSC or whatever to
measure the time from first kernel entry to execing init().
 

You got it! As a laptop user, I think it just takes far too long. Maybe 
it is hotplug's fault together with the kernel? I don't know how much 
is done by the kernel and how much by userspace, but it definitely takes longer.


I could do some benchmarks, but believe me, I hate to say this: 
I use 2.6 because of the many more power management features in it; 
otherwise I like 2.4 a lot more. It's as if the feel is sharper. Sometimes 
when I go to tty1, it takes some time after I enter my username before it 
prompts me for a password. This does not happen when I boot 2.4.27. 
Strange, huh?


I don't want to be an ass and say that 2.4 is better; instead I want to 
help determine why it is that I feel 2.6 is slower.


.Alejandro

Lee 



 





Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Lee Revell
On Fri, 2005-07-22 at 20:31 -0700, Linus Torvalds wrote:
> 
> On Fri, 22 Jul 2005, Lee Revell wrote:
> > 
> > Con's interactivity benchmark looks quite promising for finding
> > scheduler related interactivity regressions.
> 
> I doubt that _any_ of the regressions that are user-visible are
> scheduler-related. They all tend to be disk IO issues (bad scheduling or
> just plain bad drivers), and then sometimes just VM misbehaviour.
> 
> People are looking at all these RT patches, when the thing is that most
> nobody will ever be able to tell the difference between 10us and 1ms
> latencies unless it causes a skip in audio.

I agree re: the RT patches, but what makes Con's benchmark useful is
that it also tests interactivity (measuring in msecs vs. usecs) with
everything running SCHED_NORMAL, which is a much better approximation of
a desktop load.  And the numbers do go well up into the range where
people would notice, tens and hundreds of ms.

Lee





Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Lee Revell
On Fri, 2005-07-22 at 21:15 -0500, Alejandro Bonilla wrote:
> OK, I will, but first of all I need to learn how to tell whether benchmarks 
> are better or worse.

Con's interactivity benchmark looks quite promising for finding
scheduler-related interactivity regressions.  It certainly has confirmed
what we already knew re: SCHED_FIFO performance; if we extend that to
SCHED_OTHER, which is a more interesting problem, then there's serious
potential for improvement.  AFAIK no one has posted any 2.4 vs 2.6
interbench results yet...

I suspect a lot of the boot time issue is due to userspace.  But, it
should be trivial to benchmark this one, just use the TSC or whatever to
measure the time from first kernel entry to execing init().
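A rough way to approximate the measurement Lee describes, assuming printk timestamps are enabled (CONFIG_PRINTK_TIME) so each dmesg line carries seconds since kernel entry; the log lines below are fabricated for illustration:

```python
import re

# Estimate "time from first kernel entry to execing init" by reading
# printk timestamps out of dmesg-style output (requires CONFIG_PRINTK_TIME).
TS = re.compile(r"^\[\s*(\d+\.\d+)\]")

def kernel_boot_seconds(dmesg_lines):
    """Return the timestamp of the first line mentioning init,
    or None if no such line is found."""
    for line in dmesg_lines:
        m = TS.match(line)
        if m and "init" in line:
            return float(m.group(1))
    return None

# Fabricated sample log, for illustration only:
sample = [
    "[    0.000000] Linux version 2.6.13-rc3 (gcc 3.3.6)",
    "[    1.203400] ACPI: Interpreter enabled",
    "[    4.781200] Freeing unused kernel memory",
    "[    4.812300] Running /sbin/init as init process",
]
print(kernel_boot_seconds(sample))  # -> 4.8123
```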

Lee 



Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Alejandro Bonilla

Lee Revell wrote:


On Fri, 2005-07-22 at 20:07 -0500, Alejandro Bonilla wrote:
 

I will get flames for this, but my laptop boots faster and sometimes 
responds faster in 2.4.27 than in 2.6.12. Sorry, but this is the fact 
for me. IBM T42.
   



Sorry dude, but there's just no way that any automated process can catch
these.
 

I'm not looking for an automated process for this, but for something 
general when moving from 2.6.11 to 2.6.12, or from any version to 
another (at least within the same kernel branch).



You will have to provide a detailed bug report (with numbers) like
everyone else so we can fix it.  "Waiting for it to fix itself" is the
WORST thing you can do.
 

I never do this, believe me, but I might if I didn't really see a 
problem. But there could really be one hiding behind it.



If you find a regression vs. an earlier kernel, please assume that
you're the ONLY one to notice it and respond accordingly.
 

OK, I will, but first of all I need to learn how to tell whether benchmarks 
are better or worse.



Lee

 


.Alejandro


Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Lee Revell
On Fri, 2005-07-22 at 20:07 -0500, Alejandro Bonilla wrote:
> I will get flames for this, but my laptop boots faster and sometimes 
> responds faster in 2.4.27 than in 2.6.12. Sorry, but this is the fact 
> for me. IBM T42.

Sorry dude, but there's just no way that any automated process can catch
these.

You will have to provide a detailed bug report (with numbers) like
everyone else so we can fix it.  "Waiting for it to fix itself" is the
WORST thing you can do.

If you find a regression vs. an earlier kernel, please assume that
you're the ONLY one to notice it and respond accordingly.

Lee

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Alejandro Bonilla

Blaisorblade wrote:

> Adrian Bunk <bunk at stusta.de> writes:
> > On Thu, Jul 21, 2005 at 09:40:43PM -0500, Alejandro Bonilla wrote:
> > > How do we know that something is OK or wrong? just by the fact that
> > > it works or not, it doesn't mean like is OK.
> > >
> > > There has to be a process for any user to be able to verify and study a
> > > problem. We don't have that yet.
> >
> > If the user doesn't notice the difference then there's no problem for
> > him.
>
> Some performance regressions aren't easily noticeable without benchmarks...
> and we've had people claiming unnoticed regressions since 2.6.2
> (http://kerneltrap.org/node/4940)

I will get flames for this, but my laptop boots faster and sometimes
responds faster in 2.4.27 than in 2.6.12. Sorry, but this is the fact
for me. IBM T42.

> > If there's a problem the user notices, then the process is to send an
> > email to linux-kernel and/or open a bug in the kernel Bugzilla and
> > follow the "please send the output of foo" and "please test patch bar"
> > instructions.

The thing is, I might not be able to know there *are* issues. I mostly
just notice that something is strange, and then wait for a new kernel
version because I might think it is something silly.

> > What comes nearest to what you are talking about is that you run LTP
> > and/or various benchmarks against every -git and every -mm kernel and
> > report regressions. But this is simply a task someone could do (and I
> > don't know how much of it is already done e.g. at OSDL), and not
> > something every user could contribute to.
>
> Forgot drivers testing? That is where most of the bugs are hidden, and where
> wide user testing is definitely needed because of the various hardware bugs
> and different configurations existing in real world.

This is my opinion too. If someone could do a simple script or
benchmarking file, then users would be able to report the most common
and important differences from previous kernel versions on their systems.

I.e. I would run a script that checks write speed, CPU, latencies, and
I don't know how many more tests, and then compare the results with
those from the previous -git or full kernel release. Sometimes users
don't even know the commands to benchmark these parts of the system.
I don't know them.
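Something like the following could serve as a first cut at such a script. This is only a sketch of the idea: the file names and the particular tests are illustrative, and it assumes a Linux /proc and a POSIX shell.

```shell
#!/bin/sh
# Sketch of a per-kernel benchmark log as proposed above.
# File names and tests are illustrative only.
OUT="bench-$(uname -r).txt"
{
    echo "kernel: $(uname -r)"
    echo "--- sequential write, 16 MB ---"
    # dd reports throughput on its last stderr line
    dd if=/dev/zero of=/tmp/bench.$$ bs=1M count=16 2>&1 | tail -n 1
    rm -f /tmp/bench.$$
    echo "--- load average ---"
    cat /proc/loadavg
} > "$OUT"
echo "saved $OUT; run again under the next kernel and diff the two files"
```

Run once per booted kernel; `diff bench-2.6.11.txt bench-2.6.12.txt` (hypothetical file names) then shows the deltas a user could paste into a report.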


> IMHO, I think that publishing statistics about kernel patches downloads would
> be a very Good Thing(tm) to do. Peter, what's your opinion? I think that was
> even talked about at Kernel Summit (or at least I thought of it there), but
> I've not understood if this is going to happen.

What can we do here? Can we perhaps create a project like the janitors
so that we can report this kind of thing? Should we report here? How can
we make a script to really benchmark the system and then say "since this
guy sent a patch for the Pentium M CPUs, things are running slower", or
"my SCSI drive is running slower since rc2, but not with rc1"?

At least if the user notices this kind of thing, then one will be able
to google for patches for their controller from the last few weeks and
see if someone screwed up with a change they sent to the kernel.

In other words, kernel testing is not really easy for normal users; it
can only really be benchmarked by those who know how... which are many,
but not everyone.

And I really want to give my 2 cents on this.

.Alejandro
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread H. Peter Anvin

David Lang wrote:

> On Sat, 23 Jul 2005, Blaisorblade wrote:
>
> > IMHO, I think that publishing statistics about kernel patches downloads would
> > be a very Good Thing(tm) to do. Peter, what's your opinion? I think that was
> > even talked about at Kernel Summit (or at least I thought of it there), but
> > I've not understood if this is going to happen.
>
> remember that most downloads will be from mirrors, and they don't get
> stats from them.
>
> David Lang

That, plus there is http+ftp+rsync (not to mention git downloads, etc.)
and the noise caused by other sites mirroring *us*.

-hpa
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread David Lang

On Sat, 23 Jul 2005, Blaisorblade wrote:


> IMHO, I think that publishing statistics about kernel patches downloads would
> be a very Good Thing(tm) to do. Peter, what's your opinion? I think that was
> even talked about at Kernel Summit (or at least I thought of it there), but
> I've not understood if this is going to happen.

remember that most downloads will be from mirrors, and they don't get 
stats from them.


David Lang

--
There are two ways of constructing a software design. One way is to make it so 
simple that there are obviously no deficiencies. And the other way is to make 
it so complicated that there are no obvious deficiencies.
 -- C.A.R. Hoare
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Blaisorblade
Adrian Bunk <bunk at stusta.de> writes:
> On Thu, Jul 21, 2005 at 09:40:43PM -0500, Alejandro Bonilla wrote:
> > 
> >How do we know that something is OK or wrong? just by the fact that 
> > it works or not, it doesn't mean like is OK.
> > 
> > There has to be a process for any user to be able to verify and study a 
> > problem. We don't have that yet.

> If the user doesn't notice the difference then there's no problem for 
> him.
Some performance regressions aren't easily noticeable without benchmarks... 
and we've had people claiming unnoticed regressions since 2.6.2 
(http://kerneltrap.org/node/4940)
> If there's a problem the user notices, then the process is to send an 
> email to linux-kernel and/or open a bug in the kernel Bugzilla and 
> follow the "please send the output of foo" and "please test patch bar" 
> instructions.

> What comes nearest to what you are talking about is that you run LTP 
> and/or various benchmarks against every -git and every -mm kernel and 
> report regressions. But this is simply a task someone could do (and I 
> don't know how much of it is already done e.g. at OSDL), and not 
> something every user could contribute to.

Forgot drivers testing? That is where most of the bugs are hidden, and where 
wide user testing is definitely needed because of the various hardware bugs 
and different configurations existing in the real world.

IMHO, I think that publishing statistics about kernel patches downloads would 
be a very Good Thing(tm) to do. Peter, what's your opinion? I think that was 
even talked about at Kernel Summit (or at least I thought of it there), but 
I've not understood if this is going to happen.
-- 
Inform me of my mistakes, so I can keep imitating Homer Simpson's "Doh!".
Paolo Giarrusso, aka Blaisorblade (Skype ID "PaoloGiarrusso", ICQ 215621894)
http://www.user-mode-linux.org/~blaisorblade
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Adrian Bunk
On Thu, Jul 21, 2005 at 09:40:43PM -0500, Alejandro Bonilla wrote:
>...
>How does one check if hotplug is working better than before? How do 
> I test the fact that a performance issue seen in the driver is now fixed 
> for me or most of users? How do I get back to a bugzilla and tell that 
> there is a bug somewhere when one can't really know if that is the way 
> it works but is simply ugly, or if there is really a bug?
> 
>My point is that a user like me, can't really get back to this 
> mailing list and say "hey, since 2.6.13-rc1, my PCI bus is having an 
> additional 1ms of latency" We don't really have a process to follow and 
> then be able to say "ahha, so this is different" and then report the 
> problem, even if we can't fix it because of our C and kernel skills.
> 
>How do we know that something is OK or wrong? just by the fact that 
> it works or not, it doesn't mean like is OK.
> 
> There has to be a process for any user to be able to verify and study a 
> problem. We don't have that yet.

If the user doesn't notice the difference then there's no problem for 
him.

If there's a problem the user notices, then the process is to send an 
email to linux-kernel and/or open a bug in the kernel Bugzilla and 
follow the "please send the output of foo" and "please test patch bar" 
instructions.

What comes nearest to what you are talking about is that you run LTP 
and/or various benchmarks against every -git and every -mm kernel and 
report regressions. But this is simply a task someone could do (and I 
don't know how much of it is already done e.g. at OSDL), and not 
something every user could contribute to.

> .Alejandro

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Adrian Bunk
On Fri, Jul 22, 2005 at 03:34:09AM +0200, Martin MOKREJŠ wrote:

> Hi,

Hi Martin,

>  I think the discussion going on here in another thread about lack
> of positive information on how many testers successfully tested certain
> kernel version can be easily solved with a real solution.
> 
>  How about opening separate "project" in bugzilla.kernel.org named
> kernel-testers or whatever, where whenever cvs/svn/bk gatekeepers
> would release some kernel patch, would open an empty "bugreport"
> for that version, say for 2.6.13-rc3-git4.
> 
>  Anybody willing to join the crew who cared to download the patch
> and tested the kernel would post just a single comment/follow-up
> to _that_ "bugreport" with either "positive" rating or URL
> of his own bugreport with some new bug. When the bug gets closed
> it would be immediately obvious in the 2.6.13-rc3-git4 bug ticket
> as that bug will be striked-through as closed.
> 
>  Then, we could easily just browse through and see that 2.6.13-rc2
> was tested by 33 fellows while 3 of them found a problem and 2 such
> problems were closed since then.
>...

most likely, only a small minority of the people downloading a patch would 
register at such a "project".

The important part of the work, the bug reports, can already today go to 
linux-kernel and/or the Bugzilla.

You'd spend efforts for such a "project" that would only produce some 
numbers of questionable value.

> Martin

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Lee Revell
On Fri, 2005-07-22 at 21:15 -0500, Alejandro Bonilla wrote:
> OK, I will, but I first of all need to learn how to tell if benchmarks
> are better or worse.

Con's interactivity benchmark looks quite promising for finding
scheduler related interactivity regressions.  It certainly has confirmed
what we already knew re: SCHED_FIFO performance; if we extend that to
SCHED_OTHER, which is a more interesting problem, then there's serious
potential for improvement.  AFAIK no one has posted any 2.4 vs 2.6
interbench results yet...

I suspect a lot of the boot time issue is due to userspace.  But, it
should be trivial to benchmark this one, just use the TSC or whatever to
measure the time from first kernel entry to execing init().
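A crude userspace approximation of that, without touching the kernel at all (a sketch: it assumes Linux's /proc, and hooking it in as the very first init script is left hypothetical), is to sample /proc/uptime, whose first field counts seconds since the kernel started:

```shell
#!/bin/sh
# Hypothetical earliest-possible init hook: /proc/uptime's first field
# is seconds since kernel start, so reading it here approximates the
# "first kernel entry to execing init()" interval described above.
read UPTIME IDLE < /proc/uptime
echo "kernel-to-init: ${UPTIME}s" | tee /tmp/boottime.log
```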

Lee 

-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Lee Revell
On Fri, 2005-07-22 at 20:31 -0700, Linus Torvalds wrote:
> On Fri, 22 Jul 2005, Lee Revell wrote:
> >
> > Con's interactivity benchmark looks quite promising for finding
> > scheduler related interactivity regressions.
>
> I doubt that _any_ of the regressions that are user-visible are
> scheduler-related. They all tend to be disk IO issues (bad scheduling or
> just plain bad drivers), and then sometimes just VM misbehaviour.
>
> People are looking at all these RT patches, when the thing is that most
> nobody will ever be able to tell the difference between 10us and 1ms
> latencies unless it causes a skip in audio.

I agree re: the RT patches, but what makes Con's benchmark useful is
that it also tests interactivity (measuring in msecs vs. usecs) with
everything running SCHED_NORMAL, which is a much better approximation of
a desktop load.  And the numbers do go well up into the range where
people would notice, tens and hundreds of ms.

Lee



-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Alejandro Bonilla

Lee Revell wrote:

> On Fri, 2005-07-22 at 21:15 -0500, Alejandro Bonilla wrote:
> > OK, I will, but I first of all need to learn how to tell if benchmarks
> > are better or worse.
>
> Con's interactivity benchmark looks quite promising for finding
> scheduler related interactivity regressions.  It certainly has confirmed
> what we already knew re: SCHED_FIFO performance; if we extend that to
> SCHED_OTHER, which is a more interesting problem, then there's serious
> potential for improvement.  AFAIK no one has posted any 2.4 vs 2.6
> interbench results yet...

I will give it a try.

> I suspect a lot of the boot time issue is due to userspace.  But, it
> should be trivial to benchmark this one, just use the TSC or whatever to
> measure the time from first kernel entry to execing init().

You got it! As a laptop user, I think it just takes too much longer. I
think it is maybe hotplug's fault, together with the kernel? I don't
know how much is done by the kernel or by userspace, but it definitely
takes longer.

I could do some sort of benchmarks, but believe me, I hate to say this:
I use 2.6 because of its many more power management features; otherwise
I like 2.4 a lot more. It's like the feel is sharper. Sometimes, when I
go into a tty1, it takes some time after I put my username in before it
prompts me for a password. This does not occur when I boot with 2.4.27.
Strange, huh?

I don't want to be an ass and say that 2.4 is better; instead I want to
help determine why it is that I feel 2.6 is slower.

> Lee

.Alejandro
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Alejandro Bonilla

Linus Torvalds wrote:

> On Fri, 22 Jul 2005, Lee Revell wrote:
> >
> > Con's interactivity benchmark looks quite promising for finding
> > scheduler related interactivity regressions.
>
> I doubt that _any_ of the regressions that are user-visible are
> scheduler-related. They all tend to be disk IO issues (bad scheduling or
> just plain bad drivers), and then sometimes just VM misbehaviour.
>
> People are looking at all these RT patches, when the thing is that most
> nobody will ever be able to tell the difference between 10us and 1ms
> latencies unless it causes a skip in audio.

True, and I just couldn't agree more with Lee that a lot of the delay
one sees is because of user space. Still, I have some doubts about how
much faster 2.6 is sometimes, where 2.4 is faster at other things.

I.e., from my newbie view, I can see 2.6 running faster in X, compiling
and such, but I see 2.4 working much faster in command response and
interaction on the console. But then again, this could be only me...

> Linus

.Alejandro


Re: Giving developers clue how many testers verified certain kernel version

2005-07-22 Thread Adrian Bunk
On Fri, Jul 22, 2005 at 09:15:14PM -0500, Alejandro Bonilla wrote:
 Lee Revell wrote:
 
 On Fri, 2005-07-22 at 20:07 -0500, Alejandro Bonilla wrote:
  
 I will get flames for this, but my laptop boots faster and sometimes 
 responds faster in 2.4.27 than in 2.6.12. Sorry, but this is the fact 
 for me. IBM T42.
 
 Sorry dude, but there's just no way that any automated process can catch
 these.
 
 I'm not looking for an automated process for this. But for all in 
 general, when moving from 2.6.11 to 2.6.12 or from any version to 
 another. (At least in the same kernel branch)
...

You send:
- a problem description X
- tell that the last working kernel was Y
- tell that it is broken in kernel Z

The probability of any kernel developer being interested in your problem 
increases:
- the better the description X is
- the nearer versions Y and Z are together
- the more recent version Y is

Ideally, you are able to say that patch A in the latest -mm kernel
broke it.

It's perfectly OK to send a description X that says:
- with version Y and the following workload B, everything is working
  perfectly
- with version Z and the same workload B, XMMS is stuttering

If any kernel developer is interested in your bug report, he will tell 
you which data might be interesting for debugging the problem.

The problem is that debugging a problem often requires knowledge about 
possible causes and changes between versions Y and Z in this area. Even 
a kernel developer who perfectly knows one part of the kernel might not 
be able to debug a problem in a completely different area of the kernel.

 .Alejandro

cu
Adrian

-- 

   "Is there not promise of rain?" Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   "Only a promise," Lao Er said.
   Pearl S. Buck - Dragon Seed



Re: Giving developers clue how many testers verified certain kernel version

2005-07-21 Thread Alejandro Bonilla

Martin MOKREJŠ wrote:


Hi,

Mark Nipper wrote:


I have a different idea along these lines but not using
bugzilla.  A nice system for tracking usage of certain components
might be made by having people register using a certain e-mail
address and then submitting their .config as they try out new
versions of kernels.



Nice idea, but I still think it is of interest what hardware
it was tested on. Maybe 'dmesg' output would also help a bit, but
I still don't know how you'd tell that I have _this_ motherboard
and not another.

I'm a simple Linux user who normally likes to test as many things as 
possible. This is what I would do:


I would make a summary of the ChangeLog for each kernel release, and 
from there encourage people to test those parts. The worst problem I 
face with Linux is that I don't know enough C to understand what a 
patch someone sent will really do.


   A user-understandable ChangeLog, so that people can test the 
changed points, would be great. And if those changes came with an 
explanation of how users could troubleshoot them, it would be 
fairly awesome.


   I have been subscribed here for more than a year already, and I have 
barely understood a couple of the changes that have been made to drivers 
and to the kernel itself. How can I make sure that a change will really 
work better for me?


   How does one check whether hotplug is working better than before? How 
do I verify that a performance issue seen in a driver is now fixed for me 
or for most users? How do I go back to bugzilla and report a bug somewhere 
when one can't really tell whether that is simply how it works, only ugly, 
or whether there is really a bug?


   My point is that a user like me can't really come back to this 
mailing list and say "hey, since 2.6.13-rc1, my PCI bus has an 
additional 1ms of latency." We don't really have a process to follow 
that would let us say "aha, so this is different" and then report the 
problem, even if we can't fix it ourselves for lack of C and kernel skills.


   How do we know whether something is OK or wrong? The mere fact that 
it works or not doesn't mean it is OK.


There has to be a process for any user to be able to verify and study a 
problem. We don't have that yet.


.Alejandro


Re: Giving developers clue how many testers verified certain kernel version

2005-07-21 Thread Martin MOKREJŠ

Hi,

Mark Nipper wrote:

I have a different idea along these lines but not using
bugzilla.  A nice system for tracking usage of certain components
might be made by having people register using a certain e-mail
address and then submitting their .config as they try out new
versions of kernels.


Nice idea, but I still think it is of interest what hardware
it was tested on. Maybe 'dmesg' output would also help a bit, but
I still don't know how you'd tell that I have _this_ motherboard
and not another.

Second, I'd sometimes submit 2 or even 3 tested hosts, but I'm only
willing to use a single email address. ;)

I think we'd need some sort of profile containing some HW info, like
motherboard type, BIOS version, etc. Extracting that from 'dmesg'
would be a nightmare, I think.

...


Just an idea.  It might require some minimum
recommendations to users willing to participate.  I know for
example that I statically compile all four I/O schedulers in all


Well, my case too. ;)

Martin


Re: Giving developers clue how many testers verified certain kernel version

2005-07-21 Thread Mark Nipper
I have a different idea along these lines but not using
bugzilla.  A nice system for tracking usage of certain components
might be made by having people register using a certain e-mail
address and then submitting their .config as they try out new
versions of kernels.

The idea of course is that people will generally only
have compiled their own custom kernels with the drivers and
components they tend to use most.  It might be enough to ask
people who use this system to only submit mostly customized
configurations as opposed to distribution style kernel
configurations where almost everything is compiled as a module.

Anyway, the end result being that kernel developers could
ultimately refer to this system and see as they change things
whether a lot of people are hitting the components in the kernel
which might have been affected by their changes.  If even one
hundred people report using some specific subsystem which has
recently undergone significant change without any reports of
problems, then the developer can rest somewhat more easily
knowing their changes were probably made without incident.

Just an idea.  It might require some minimum
recommendations to users willing to participate.  I know for
example that I statically compile all four I/O schedulers in all
my kernels currently even though I always let the kernel select
whichever is the default and never change it myself.  Obviously
it would make more sense for me to axe those schedulers which are
not absolutely necessary to make whatever statistics being
gathered on my particular configuration more useful to a
developer checking to see which schedulers are being used and to
what extent.
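	On the developer side, the tally described above could be as simple as counting, across all submitted .config files, how many testers enabled a given option. A minimal sketch of that idea (the directory layout and function name are hypothetical, not any existing infrastructure):

```python
# Tally which kernel config options are enabled (=y or =m) across
# submitted .config files, so a developer can see roughly how many
# testers actually exercised the subsystem they just changed.
from collections import Counter
from pathlib import Path

def tally_configs(config_dir):
    """Count CONFIG_* options set to y or m over all *.config files."""
    counts = Counter()
    for path in Path(config_dir).glob("*.config"):
        for line in path.read_text().splitlines():
            # Enabled options look like CONFIG_SMP=y or CONFIG_ACPI=m;
            # disabled ones appear as "# CONFIG_FOO is not set".
            if line.startswith("CONFIG_") and line.rstrip().endswith(("=y", "=m")):
                counts[line.split("=", 1)[0]] += 1
    return counts
```

	A developer could then ask, e.g., `tally_configs("submissions/")["CONFIG_ACPI"]` to see how many of the submitted configurations had ACPI enabled at all.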

-- 
Mark Nipper                 e-contacts:
4475 Carter Creek Parkway   [EMAIL PROTECTED]
Apartment 724               http://nipsy.bitgnome.net/
Bryan, Texas, 77802-4481    AIM/Yahoo: texasnipsy ICQ: 66971617
(979)575-3193               MSN: [EMAIL PROTECTED]

-BEGIN GEEK CODE BLOCK-
Version: 3.1
GG/IT d- s++:+ a- C++$ UBL$ P--->+++ L+++$ !E---
W++(--) N+ o K++ w(---) O++ M V(--) PS+++(+) PE(--)
Y+ PGP t+ 5 X R tv b+++@ DI+(++) D+ G e h r++ y+(**)
--END GEEK CODE BLOCK--

---begin random quote of the moment---
This sentence no verb.
---end random quote of the moment---


Giving developers clue how many testers verified certain kernel version

2005-07-21 Thread Martin MOKREJŠ

Hi,
 I think the discussion going on in another thread, about the lack
of positive information on how many testers successfully tested a certain
kernel version, could easily be solved with a real solution.

 How about opening a separate "project" on bugzilla.kernel.org named
kernel-testers or whatever, where whenever the cvs/svn/bk gatekeepers
release some kernel patch, they would open an empty "bugreport"
for that version, say for 2.6.13-rc3-git4.

 Anybody willing to join the crew who cared to download the patch
and test the kernel would post just a single comment/follow-up
to _that_ "bugreport", with either a "positive" rating or the URL
of his own bugreport for some new bug. When the bug gets closed,
it would be immediately obvious in the 2.6.13-rc3-git4 bug ticket,
as that bug would be struck through as closed.

 Then, we could easily just browse through and see that 2.6.13-rc2
was tested by 33 fellows while 3 of them found a problem and 2 such
problems were closed since then.

 You know what would be really helpful: if testers reported,
let's say, motherboard type, HIGHMEM/NO-HIGHMEM, ACPI/NO-ACPI,
SMP/NO-SMP and a few more hints, and if the database kept those
having the same hardware + config as a single record. It could even just
watch a few lines of the .config file when uploaded.

 Well, I'm sure you get my point; maybe it would be easier to write
some tiny database from scratch instead of tweaking bugzilla to suit
this kind of solution.
;-)
Martin

