Re: [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD 0.46

2007-04-27 Thread Bill Davidsen

Con Kolivas wrote:
> On Friday 27 April 2007 08:00, Bill Davidsen wrote:
> > Ingo Molnar wrote:
> > > * Ed Tomlinson <[EMAIL PROTECTED]> wrote:
> > > >>> SD 0.46 1-2 FPS
> > > >>> cfs v5 nice -19 219-233 FPS
> > > >>> cfs v5 nice 0   1000-1996
> > > >>
> > > >>    cfs v5 nice -10  60-65 FPS
> > >
> > > the problem is, the glxgears portion of this test is an _inverse_
> > > testcase.
> > >
> > > The reason? glxgears on true 3D hardware will _not_ use X, it will
> > > directly use the 3D driver of the kernel. So by renicing X to -19 you
> > > give the xterms more chance to show stuff - the performance of the
> > > glxgears will 'degrade' - but that is what you asked for: glxgears is
> > > 'just another CPU hog' that competes with X, it's not a "true" X client.
> > >
> > > if you are after glxgears performance in this test then you'll get the
> > > best performance out of this by renicing X to +19 or even SCHED_BATCH.
> >
> > Several points on this...
> >
> > First, I don't think this is accelerated in the way you mean, the
> > machine is a test server, with motherboard video using the 945G video
> > driver. Given the limitations of the support in that setup, I don't
> > think it qualified as "true 3D hardware," although I guess I could try
> > using the vesafb version as a test.
> >
> > The 2nd thing I note is that on FC6 this scheduler seems to confuse
> > 'top' to some degree, since the glxgears is shown as taking 51% of the
> > CPU (one core), while the state breakdown shows about 73% in idle,
> > waitio, and int. image attached.
>
> top by itself certainly cannot be trusted to give a true representation of the
> cpu usage, I'm afraid. It's not as convoluted as, say, trying to track memory
> usage of an application, but top's resolution being tied to HZ accounting
> makes it unreliable in that regard.
>
> > After I upgrade the kernel and cfs to the absolute latest I'll repeat
> > this, as well as test with vesafb, and my planned run under heavy load.
>
> I have a problem with your test case, Bill. Its behaviour would depend on how
> gpu bound vs cpu bound vs accelerated vs non-accelerated your graphics card
> is. I get completely different results to those of the other testers given
> the different hardware configuration, and I don't think my results are
> valuable. My problem with this testcase is: what would you define
> as "perfect" behaviour for your test case? It seems far too arbitrary.

It was intended more to give immediate feedback on gross behavior. On
some old schedulers (2.4.x) it visibly ran one xterm after the other,
while on 2.6.2[01] that behavior is gone and all schedulers give equal
time as seen by the eye. Looking at the behavior with line and jump
scroll, under load or not, X nice or nasty, allows a quick check on
where the bad cases are, if any exist.

I intended it as a quick way to spot really, visibly bad scheduling,
not as a test for quantifying performance. The fact that fps varies by
almost an order of magnitude with some earlier versions of the
schedulers is certainly a red flag to me that there's a corner case,
and something I care more about than glxgears will be inconsistent as
well.

Hopefully in that context, as a relatively quick way to try nice and
load values, it's a useful tool.
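
For reference, a rough sketch of what such a harness boils down to is below.
The actual glitch1 script is not reproduced in this thread, so the window
count, the file being scrolled, and the 60-second runtime are assumptions for
illustration only, not the script's real values.

/*
 * Hypothetical glitch1-style harness: a few xterms scrolling text as fast
 * as they can, competing with glxgears, so that grossly unfair scheduling
 * shows up to the eye (one window moving while the others stall).
 */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NUM_XTERMS 4
#define RUNTIME    60	/* seconds */

static pid_t spawn(char *const argv[])
{
	pid_t pid = fork();
	if (pid == 0) {
		execvp(argv[0], argv);
		perror("execvp");
		_exit(127);
	}
	return pid;
}

int main(void)
{
	pid_t pids[NUM_XTERMS + 1];
	int i;

	/* Each xterm scrolls a file in a loop: a steady, visible X client. */
	char *xterm_argv[] = { "xterm", "-e", "sh", "-c",
			       "while :; do cat /etc/services; done", NULL };
	for (i = 0; i < NUM_XTERMS; i++)
		pids[i] = spawn(xterm_argv);

	/* glxgears is the plain CPU (or GPU) hog whose FPS the thread is
	 * arguing about; it prints its frame rate every few seconds. */
	char *gears_argv[] = { "glxgears", NULL };
	pids[NUM_XTERMS] = spawn(gears_argv);

	/* Let them fight for a while, then clean up. */
	sleep(RUNTIME);
	for (i = 0; i <= NUM_XTERMS; i++) {
		kill(pids[i], SIGTERM);
		waitpid(pids[i], NULL, 0);
	}
	return 0;
}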


--
bill davidsen <[EMAIL PROTECTED]>
 CTO TMR Associates, Inc
 Doing interesting things with small computers since 1979


Re: [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD 0.46

2007-04-26 Thread Ed Tomlinson
On Thursday 26 April 2007 18:56, Con Kolivas wrote:
> On Friday 27 April 2007 08:00, Bill Davidsen wrote:
> > Ingo Molnar wrote:
> > > * Ed Tomlinson <[EMAIL PROTECTED]> wrote:
> > >>> SD 0.46 1-2 FPS
> > >>> cfs v5 nice -19 219-233 FPS
> > >>> cfs v5 nice 0   1000-1996
> > >>
> > >>cfs v5 nice -10  60-65 FPS
> > >
> > > the problem is, the glxgears portion of this test is an _inverse_
> > > testcase.
> > >
> > > The reason? glxgears on true 3D hardware will _not_ use X, it will
> > > directly use the 3D driver of the kernel. So by renicing X to -19 you
> > > give the xterms more chance to show stuff - the performance of the
> > > glxgears will 'degrade' - but that is what you asked for: glxgears is
> > > 'just another CPU hog' that competes with X, it's not a "true" X client.
> > >
> > > if you are after glxgears performance in this test then you'll get the
> > > best performance out of this by renicing X to +19 or even SCHED_BATCH.
> >
> > Several points on this...
> >
> > First, I don't think this is accelerated in the way you mean, the
> > machine is a test server, with motherboard video using the 945G video
> > driver. Given the limitations of the support in that setup, I don't
> > think it qualified as "true 3D hardware," although I guess I could try
> > using the vesafb version as a test.
> >
> > The 2nd thing I note is that on FC6 this scheduler seems to confuse
> > 'top' to some degree, since the glxgears is shown as taking 51% of the
> > CPU (one core), while the state breakdown shows about 73% in idle,
> > waitio, and int. image attached.
> 
> top by itself certainly cannot be trusted to give a true representation of the
> cpu usage, I'm afraid. It's not as convoluted as, say, trying to track memory
> usage of an application, but top's resolution being tied to HZ accounting
> makes it unreliable in that regard.
> >
> > After I upgrade the kernel and cfs to the absolute latest I'll repeat
> > this, as well as test with vesafb, and my planned run under heavy load.
> 
> I have a problem with your test case, Bill. Its behaviour would depend on how
> gpu bound vs cpu bound vs accelerated vs non-accelerated your graphics card
> is. I get completely different results to those of the other testers given
> the different hardware configuration, and I don't think my results are
> valuable. My problem with this testcase is: what would you define
> as "perfect" behaviour for your test case? It seems far too arbitrary.

Con,

One thing I did not mention in all this is that renicing the glxgears process
to -10 gets SD to give about 1000 FPS; indeed, you get most of this performance
at -5 too. All in all SD does a very good job here.

Get well soon!
Ed


Re: [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD 0.46

2007-04-26 Thread Con Kolivas
On Friday 27 April 2007 08:00, Bill Davidsen wrote:
> Ingo Molnar wrote:
> > * Ed Tomlinson <[EMAIL PROTECTED]> wrote:
> >>> SD 0.46   1-2 FPS
> >>> cfs v5 nice -19   219-233 FPS
> >>> cfs v5 nice 0 1000-1996
> >>
> >>cfs v5 nice -10  60-65 FPS
> >
> > the problem is, the glxgears portion of this test is an _inverse_
> > testcase.
> >
> > The reason? glxgears on true 3D hardware will _not_ use X, it will
> > directly use the 3D driver of the kernel. So by renicing X to -19 you
> > give the xterms more chance to show stuff - the performance of the
> > glxgears will 'degrade' - but that is what you asked for: glxgears is
> > 'just another CPU hog' that competes with X, it's not a "true" X client.
> >
> > if you are after glxgears performance in this test then you'll get the
> > best performance out of this by renicing X to +19 or even SCHED_BATCH.
>
> Several points on this...
>
> First, I don't think this is accelerated in the way you mean, the
> machine is a test server, with motherboard video using the 945G video
> driver. Given the limitations of the support in that setup, I don't
> think it qualified as "true 3D hardware," although I guess I could try
> using the vesafb version as a test.
>
> The 2nd thing I note is that on FC6 this scheduler seems to confuse
> 'top' to some degree, since the glxgears is shown as taking 51% of the
> CPU (one core), while the state breakdown shows about 73% in idle,
> waitio, and int. image attached.

top by itself certainly cannot be trusted to give a true representation of the
cpu usage, I'm afraid. It's not as convoluted as, say, trying to track memory
usage of an application, but top's resolution being tied to HZ accounting
makes it unreliable in that regard.
>
> After I upgrade the kernel and cfs to the absolute latest I'll repeat
> this, as well as test with vesafb, and my planned run under heavy load.

I have a problem with your test case, Bill. Its behaviour would depend on how
gpu bound vs cpu bound vs accelerated vs non-accelerated your graphics card
is. I get completely different results to those of the other testers given
the different hardware configuration, and I don't think my results are
valuable. My problem with this testcase is: what would you define
as "perfect" behaviour for your test case? It seems far too arbitrary.
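
On the HZ accounting point above, a minimal sketch of the granularity gap: the
counters top ultimately consumes (utime/stime, read here via times(), in units
of 1/USER_HZ seconds) against clock_gettime()'s per-process CPU clock as a
second opinion. How precisely the kernel fills either one in depends on its
accounting, but the tick counters can never resolve anything finer than a tick,
and on a tick-sampled kernel a bursty task gets charged whole ticks or nothing.
The loop counts below are arbitrary.

/* Compare tick-granularity CPU accounting with the per-process CPU clock. */
#include <stdio.h>
#include <sys/times.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	long tick = sysconf(_SC_CLK_TCK);	/* USER_HZ ticks per second */
	struct tms t;
	struct timespec ts;
	int i, j;

	/* Burn CPU in short bursts with sleeps in between - the pattern
	 * most likely to be misattributed by tick-based sampling. */
	for (i = 0; i < 500; i++) {
		volatile unsigned long x = 0;
		for (j = 0; j < 200000; j++)
			x += j;
		usleep(2000);
	}

	times(&t);					/* tick-based counters */
	clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);	/* per-process clock  */

	printf("tick-based: %.3f s (granularity 1/%ld s)\n",
	       (double)(t.tms_utime + t.tms_stime) / tick, tick);
	printf("cpu clock : %.3f s\n", ts.tv_sec + ts.tv_nsec / 1e9);
	return 0;
}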

-- 
-ck


Re: [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD 0.46

2007-04-24 Thread Ingo Molnar

* Ed Tomlinson <[EMAIL PROTECTED]> wrote:

> > SD 0.46 1-2 FPS
> > cfs v5 nice -19 219-233 FPS
> > cfs v5 nice 0   1000-1996
>cfs v5 nice -10  60-65 FPS

the problem is, the glxgears portion of this test is an _inverse_ 
testcase.

The reason? glxgears on true 3D hardware will _not_ use X, it will 
directly use the 3D driver of the kernel. So by renicing X to -19 you 
give the xterms more chance to show stuff - the performance of the 
glxgears will 'degrade' - but that is what you asked for: glxgears is 
'just another CPU hog' that competes with X, it's not a "true" X client.

if you are after glxgears performance in this test then you'll get the 
best performance out of this by renicing X to +19 or even SCHED_BATCH.
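
In code, that suggestion amounts to the sketch below: drop the X server's
priority with setpriority() (the programmatic equivalent of renicing it to
+19) and, optionally, switch it to SCHED_BATCH. The helper itself and its
"batch" argument are illustration only, not an existing tool; the pid would
be the X server's, and touching a root-owned X server requires root.

/* Renice a task to +19 and optionally put it into SCHED_BATCH. */
#define _GNU_SOURCE		/* for SCHED_BATCH in <sched.h> */
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	pid_t pid;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <pid> [batch]\n", argv[0]);
		return 1;
	}
	pid = atoi(argv[1]);

	/* nice +19: the lowest priority an ordinary SCHED_OTHER task gets */
	if (setpriority(PRIO_PROCESS, pid, 19) < 0)
		perror("setpriority");

	/* SCHED_BATCH marks the task as a non-interactive batch job, so the
	 * scheduler stops treating its wakeups as interactive. */
	if (argc > 2 && strcmp(argv[2], "batch") == 0) {
		struct sched_param sp = { .sched_priority = 0 };
		if (sched_setscheduler(pid, SCHED_BATCH, &sp) < 0)
			perror("sched_setscheduler");
	}
	return 0;
}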

Ingo


Re: [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD 0.46

2007-04-23 Thread Ed Tomlinson
On Monday 23 April 2007 19:45, Ed Tomlinson wrote:
> On Monday 23 April 2007 17:57, Bill Davidsen wrote:
> > I am not sure a binary attachment will go thru; I will move to the web
> > site if not.
> 
> I did a quick try of this script here.
> 
> With SD 0.46 and X at nice 0 I was getting 1-2 frames per second, so I decided
> to try cfs v5. The option to disable auto renicing did not work, so many
> threads other than X are now at -19...
> 
> SD 0.46   1-2 FPS
> cfs v5 nice -19   219-233 FPS
> cfs v5 nice 0 1000-1996
   cfs v5 nice -10  60-65 FPS
> 
> Looks like, in this case, nice -19 for X is NOT a good idea.
> 
> Kernel is 2.6.20.7 (gentoo) UP amd64 with HZ 300 voluntary preempt (a fully
> preemptable kernel eventually locks up switching between 32 and 64 apps).

Thanks
Ed 


Re: [REPORT] First "glitch1" results, 2.6.21-rc7-git6-CFSv5 + SD 0.46

2007-04-23 Thread Ed Tomlinson
On Monday 23 April 2007 17:57, Bill Davidsen wrote:
> I am not sure a binary attachment will go thru; I will move to the web
> site if not.

I did a quick try of this script here.

With SD 0.46 and X at nice 0 I was getting 1-2 frames per second, so I decided
to try cfs v5. The option to disable auto renicing did not work, so many
threads other than X are now at -19...

SD 0.46 1-2 FPS
cfs v5 nice -19 219-233 FPS
cfs v5 nice 0   1000-1996

Looks like, in this case, nice -19 for X is NOT a good idea.

Kernel is 2.6.20.7 (gentoo) UP amd64 with HZ 300 voluntary preempt (a fully
preemptable kernel eventually locks up switching between 32 and 64 apps).

Thanks,

Ed Tomlinson