Re: [PATCH net-next] net/ncsi: Define {add, kill}_vid callbacks for !CONFIG_NET_NCSI

2017-08-31 Thread Vernon Mauery
On 31-Aug-2017 01:38 PM, Samuel Mendoza-Jonas wrote:
> Patch "net/ncsi: Configure VLAN tag filter" defined two new callback
> functions in include/net/ncsi.h, but neglected the !CONFIG_NET_NCSI
> case. This can cause a build error if these are referenced elsewhere
> without NCSI enabled, for example in ftgmac100:
> 
> >>> ERROR: "ncsi_vlan_rx_kill_vid" 
> >>> [drivers/net/ethernet/faraday/ftgmac100.ko] undefined!
> >>> ERROR: "ncsi_vlan_rx_add_vid" [drivers/net/ethernet/faraday/ftgmac100.ko] 
> >>> undefined!
> 
> Add definitions for !CONFIG_NET_NCSI to bring it into line with the rest
> of ncsi.h.
> 
> Signed-off-by: Samuel Mendoza-Jonas 
> ---
>  include/net/ncsi.h | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/include/net/ncsi.h b/include/net/ncsi.h
> index 1f96af46df49..2b13b6b91a4d 100644
> --- a/include/net/ncsi.h
> +++ b/include/net/ncsi.h
> @@ -36,6 +36,14 @@ int ncsi_start_dev(struct ncsi_dev *nd);
>  void ncsi_stop_dev(struct ncsi_dev *nd);
>  void ncsi_unregister_dev(struct ncsi_dev *nd);
>  #else /* !CONFIG_NET_NCSI */
> +int ncsi_vlan_rx_add_vid(struct net_device *dev, __be16 proto, u16 vid)
> +{
> + return -ENOTTY;
> +}
> +int ncsi_vlan_rx_kill_vid(struct net_device *dev, __be16 proto, u16 vid)
> +{
> + return -ENOTTY;
> +}

These should be static functions because they are defined in the header 
file, or you will get multiple symbol definitions.

--Vernon

>  static inline struct ncsi_dev *ncsi_register_dev(struct net_device *dev,
>   void (*notifier)(struct ncsi_dev *nd))
>  {
> -- 
> 2.14.1
> 


NetXen driver causing slab corruption in -RT kernels

2007-09-18 Thread Vernon Mauery
In doing some stress testing of the NetXen driver, I found that my machine was 
dying in all sorts of weird ways.  I saw several different crashes, BUG 
messages in the TCP stack and some assert messages in the TCP stack as well.  
I really didn't think that there could be six different bugs all at once in 
the TCP/IP stack, so I started looking at possible memory corruption.

I first saw this on 2.6.16-rt22 with a backported netxen driver from 2.6.22.  
I figured I should try the latest kernel, so I tried it on 2.6.23-rc6-git7 
but could not trigger the slab corruption messages with CONFIG_DEBUG_SLAB, so 
I figured the race must only exist in the -RT kernels.  Next I tried 
2.6.23-rc6-git7-rt1 (I applied the patch-2.6.23-rc4-rt1 patch to 2.6.23-rc6-git7 
and fixed the 5 failing hunks).  After an hour or so, lots of slab corruption 
messages showed up:

Slab corruption: size-2048 start=f40c4670, len=2048
Slab corruption: size-2048 start=f313cf48, len=2048
Redzone: 0x9f911029d74e35b/0x9f911029d74e35b.
Last user: [c0166be4](kfree+0x80/0x95)
010: 6b 6b 00 0e 1e 00 16 13 00 0e 1e 00 19 3d 08 00
020: 45 00 05 dc 92 ab 40 00 40 11 8a 5b 0a 02 02 03
030: 0a 02 02 04 80 0c 80 0d 05 c8 dc 39 00 00 00 00
040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
060: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Prev obj: start=f313c730, len=2048
Redzone: 0xd84156c5635688c0/0xd84156c5635688c0.
Last user: [f8f06186](netxen_post_rx_buffers_nodb+0x62/0x1f0 [netxen_nic])
000: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a
010: 5a 5a 00 0e 1e 00 16 13 00 0e 1e 00 19 3d 08 00
Next obj: start=f313d760, len=2048
Redzone: 0xd84156c5635688c0/0xd84156c5635688c0.
Last user: [f8f06186](netxen_post_rx_buffers_nodb+0x62/0x1f0 [netxen_nic])
000: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a
010: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a
Slab corruption: size-2048 start=f395a6f0, len=2048
Redzone: 0x9f911029d74e35b/0x9f911029d74e35b.
Last user: [c0166be4](kfree+0x80/0x95)
010: 6b 6b 00 0e 1e 00 16 13 00 0e 1e 00 19 3d 08 00
020: 45 00 05 dc 92 ac 40 00 40 11 8a 5a 0a 02 02 03
030: 0a 02 02 04 80 0c 80 0d 05 c8 dc 39 00 00 00 00
040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
060: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Next obj: start=f395af08, len=2048
Redzone: 0xd84156c5635688c0/0xd84156c5635688c0.
Last user: [f8f06186](netxen_post_rx_buffers_nodb+0x62/0x1f0 [netxen_nic])
000: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a
010: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a
Redzone: 0x9f911029d74e35b/0x9f911029d74e35b.
Last user: [c0166be4](kfree+0x80/0x95)
010: 6b 6b 00 0e 1e 00 16 13 00 0e 1e 00 19 3d 08 00
020: 45 00 05 dc 92 aa 40 00 40 11 8a 5c 0a 02 02 03
030: 0a 02 02 04 80 0c 80 0d 05 c8 dc 39 00 00 00 00
040: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
050: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
060: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Next obj: start=f40c4e88, len=2048
Redzone: 0xd84156c5635688c0/0xd84156c5635688c0.
Last user: [f8f06186](netxen_post_rx_buffers_nodb+0x62/0x1f0 [netxen_nic])
000: 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a 5a
010: 5a 5a 00 0e 1e 00 16 13 00 0e 1e 00 19 3d 08 00

The stress test that I am running is basically a mixed bag of stuff I threw 
together.  It runs eight concurrent netperf TCP streams and two concurrent 
UDP streams in both directions, (and on both 10GbE interfaces), ping -f in 
both directions, some more disk/cpu loads in the background and a little bit 
of NFS traffic thrown in for good measure.

--Vernon
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [-RT] multiple streams have degraded performance

2007-06-19 Thread Vernon Mauery
On Monday 18 June 2007 10:12:21 pm Vernon Mauery wrote:
> In looking at the performance characteristics of my network I found that
> 2.6.21.5-rt15 suffers from degraded throughput with multiple threads.  The
> test that I did this with is simply invoking 1, 2, 4, and 8 instances of
> netperf at a time and measuring the total throughput.  I have two 4-way
> machines connected with 10GbE cards.  I tested several kernels (some older
> and some newer) and found that the only thing in common was that with -RT
> kernels the performance went down with concurrent streams.

I just tested this using lo instead of the 10GbE adapter.  I found similar 
results.  Since this makes it reproducible by just about anybody (maybe the 
only factor now is the number of CPUs), I have attached the script that I 
test things with.

So with the script run like ./stream_test 127.0.0.1 on 2.6.21 and 
2.6.21.5-rt17 I got the following:

2.6.21
===
default: 1 streams: Send at 2790.3 Mb/s, Receive at 2790.3 Mb/s
default: 2 streams: Send at 4129.4 Mb/s, Receive at 4128.7 Mb/s
default: 4 streams: Send at 7949.6 Mb/s, Receive at 7735.5 Mb/s
default: 8 streams: Send at 7930.7 Mb/s, Receive at 7910.1 Mb/s
1Msock: 1 streams: Send at 2810.7 Mb/s, Receive at 2810.7 Mb/s
1Msock: 2 streams: Send at 4093.4 Mb/s, Receive at 4092.6 Mb/s
1Msock: 4 streams: Send at 7887.8 Mb/s, Receive at 7880.4 Mb/s
1Msock: 8 streams: Send at 8091.7 Mb/s, Receive at 8082.2 Mb/s

2.6.21.5-rt17
==
default: 1 streams: Send at 938.2 Mb/s, Receive at 938.2 Mb/s
default: 2 streams: Send at 1476.3 Mb/s, Receive at 1436.9 Mb/s
default: 4 streams: Send at 1489.8 Mb/s, Receive at 1145.0 Mb/s
default: 8 streams: Send at 1099.8 Mb/s, Receive at 1079.1 Mb/s
1Msock: 1 streams: Send at 921.4 Mb/s, Receive at 920.4 Mb/s
1Msock: 2 streams: Send at 1332.2 Mb/s, Receive at 1311.5 Mb/s
1Msock: 4 streams: Send at 1483.0 Mb/s, Receive at 1137.8 Mb/s
1Msock: 8 streams: Send at 1446.2 Mb/s, Receive at 1135.6 Mb/s

--Vernon

> While the test was showing the numbers for receiving as well as sending,
> the receiving numbers are not reliable because that machine was running a
> -RT kernel for these tests.
>
> I was just wondering if anyone had seen this problem before or would have
> any idea on where to start hunting for the solution.
>
> --Vernon
>
> The key for this is 'default' was invoked like:
> netperf -c -C -l 60 -H 10.2.2.4 -t UDP_STREAM -- -m 1472 -M 1472
> and '1Msock' was invoked like:
> netperf -c -C -l 60 -H 10.2.2.4 -t UDP_STREAM -- -m 1472 -M 1472 -s 1M -S
> 1M
>
> 2.6.21
> ==
> default: 1 streams: Send at 2844.2 Mb/s, Receive at 2840.1 Mb/s
> default: 2 streams: Send at 3927.9 Mb/s, Receive at 3603.9 Mb/s
> default: 4 streams: Send at 4197.4 Mb/s, Receive at 3776.3 Mb/s
> default: 8 streams: Send at 4223.9 Mb/s, Receive at 3848.9 Mb/s
> 1Msock: 1 streams: Send at 4232.3 Mb/s, Receive at 3914.4 Mb/s
> 1Msock: 2 streams: Send at 5428.8 Mb/s, Receive at 3853.2 Mb/s
> 1Msock: 4 streams: Send at 6202.1 Mb/s, Receive at 3774.8 Mb/s
> 1Msock: 8 streams: Send at 6225.1 Mb/s, Receive at 3754.7 Mb/s
>
> 2.6.21.5-rt15
> ===
> default: 1 streams: Send at 3091.6 Mb/s, Receive at 3048.1 Mb/s
> default: 2 streams: Send at 3768.8 Mb/s, Receive at 3714.2 Mb/s
> default: 4 streams: Send at 1873.6 Mb/s, Receive at 1825.9 Mb/s
> default: 8 streams: Send at 1806.5 Mb/s, Receive at 1792.7 Mb/s
> 1Msock: 1 streams: Send at 3680.4 Mb/s, Receive at 3255.6 Mb/s
> 1Msock: 2 streams: Send at 4129.8 Mb/s, Receive at 3991.5 Mb/s
> 1Msock: 4 streams: Send at 1862.1 Mb/s, Receive at 1787.1 Mb/s
> 1Msock: 8 streams: Send at 1790.2 Mb/s, Receive at 1556.8 Mb/s


#!/bin/sh
# File: stream_test
# Description: test network throughput with a varying number of streams
# Author: Vernon Mauery <[EMAIL PROTECTED]>
# Copyright: IBM Corporation (C) 2007


instances="1 2 4 8"
msg_size=
time=60
TEST=UDP_STREAM
sock_size=

function usage() {
echo "usage: $0 [-t|-u] [-c N] [-m N] [-s N] [-T N] [-x ...] "
echo "  -t, -u  run TCP or UDP tests respectively (default: UDP)"
echo "  -c Nrun N concurrent instances (default \"1 2 4 8\")"
echo "  -m Nset message size to N bytes (default 1472)"
echo "  -s Nset socket buffer size (default 1M)"
echo "  -T Nrun test for N seconds (default 60)"
echo "  -x '...'pass extra ags to netperf (included after -- )"
echo "  -h  display this message"
}

while [ $# -gt 0 ]; do
    case $1 in
    -t)
        TEST=TCP_STREAM
        ;;
    -u)
        TEST=UDP_STREAM
        ;;
    -c)
        shift
        instances=$1
        ;;
    -m)
        shift
        msg_size=$1
        ;;
    -s)
        shift
        sock_size=$1
        ;;
    -T)
        shift
        time=$1

Re: [-RT] multiple streams have degraded performance

2007-06-19 Thread Vernon Mauery
On Tuesday 19 June 2007 8:38:50 am Peter Zijlstra wrote:
> On Tue, 2007-06-19 at 07:25 -0700, Vernon Mauery wrote:
> > On Monday 18 June 2007 11:51:38 pm Peter Zijlstra wrote:
> > > On Mon, 2007-06-18 at 22:12 -0700, Vernon Mauery wrote:
> > > > In looking at the performance characteristics of my network I found
> > > > that 2.6.21.5-rt15 suffers from degraded throughput with multiple
> > > > threads.  The test that I did this with is simply invoking 1, 2, 4,
> > > > and 8 instances of netperf at a time and measuring the total
> > > > throughput.  I have two 4-way machines connected with 10GbE cards.  I
> > > > tested several kernels (some older and some newer) and found that the
> > > > only thing in common was that with -RT kernels the performance went
> > > > down with concurrent streams.
> > > >
> > > > While the test was showing the numbers for receiving as well as
> > > > sending, the receiving numbers are not reliable because that machine
> > > > was running a -RT kernel for these tests.
> > > >
> > > > I was just wondering if anyone had seen this problem before or would
> > > > have any idea on where to start hunting for the solution.
> > >
> > > could you enable CONFIG_LOCK_STAT
> > >
> > > echo 0 > /proc/lock_stat
> > >
> > > <run your test>
> > >
> > > and report the output of (preferably not 80 column wrapped):
> > >
> > > grep : /proc/lock_stat | head
> >
> > /proc/lock_stat stayed empty for the duration of the test.  I am guessing
> > this means there was no lock contention.
>
> Most likely caused by the issue below.
>
> > I do see this on the console:
> > BUG: scheduling with irqs disabled: IRQ-8414/0x/9494
> > caller is wait_for_completion+0x85/0xc4
> >
> > Call Trace:
> >  [8106f3b2] dump_trace+0xaa/0x32a
> >  [8106f673] show_trace+0x41/0x64
> >  [8106f6ab] dump_stack+0x15/0x17
> >  [8106566f] schedule+0x82/0x102
> >  [81065774] wait_for_completion+0x85/0xc4
> >  [81092043] set_cpus_allowed+0xa1/0xc8
> >  [810986e2] do_softirq_from_hardirq+0x105/0x12d
> >  [810ca6cc] do_irqd+0x2a8/0x32f
> >  [8103469d] kthread+0xf5/0x128
> >  [81060f68] child_rip+0xa/0x12
> >
> > INFO: lockdep is turned off.
> >
> > I haven't seen this until I enabled lock_stat and ran the test.
>
> I think this is what causes the lack of output. It disables all lock
> tracking features...
>
> > > or otherwise if there are any highly contended network locks listed?
> >
> > Any other ideas for debugging this?
>
> fixing the above bug would help :-)
>
> Ingo says that should be fixed in -rt17, so if you could give that a
> spin...

I just tested with -rt17 and the BUG message is gone, but I still don't see 
any entries in /proc/lock_stat.  I don't see any other suspicious messages, so 
I think it is really strange that it is not listing any locks at all.

--Vernon


Re: [-RT] multiple streams have degraded performance

2007-06-19 Thread Vernon Mauery
On Monday 18 June 2007 11:51:38 pm Peter Zijlstra wrote:
> On Mon, 2007-06-18 at 22:12 -0700, Vernon Mauery wrote:
> > In looking at the performance characteristics of my network I found that
> > 2.6.21.5-rt15 suffers from degraded throughput with multiple threads.  The
> > test that I did this with is simply invoking 1, 2, 4, and 8 instances of
> > netperf at a time and measuring the total throughput.  I have two 4-way
> > machines connected with 10GbE cards.  I tested several kernels (some
> > older and some newer) and found that the only thing in common was that
> > with -RT kernels the performance went down with concurrent streams.
> >
> > While the test was showing the numbers for receiving as well as sending,
> > the receiving numbers are not reliable because that machine was running a
> > -RT kernel for these tests.
> >
> > I was just wondering if anyone had seen this problem before or would have
> > any idea on where to start hunting for the solution.
>
> could you enable CONFIG_LOCK_STAT
>
> echo 0 > /proc/lock_stat
>
> <run your test>
>
> and report the output of (preferably not 80 column wrapped):
>
> grep : /proc/lock_stat | head

/proc/lock_stat stayed empty for the duration of the test.  I am guessing this 
means there was no lock contention.

I do see this on the console:
BUG: scheduling with irqs disabled: IRQ-8414/0x/9494
caller is wait_for_completion+0x85/0xc4

Call Trace:
 [8106f3b2] dump_trace+0xaa/0x32a
 [8106f673] show_trace+0x41/0x64
 [8106f6ab] dump_stack+0x15/0x17
 [8106566f] schedule+0x82/0x102
 [81065774] wait_for_completion+0x85/0xc4
 [81092043] set_cpus_allowed+0xa1/0xc8
 [810986e2] do_softirq_from_hardirq+0x105/0x12d
 [810ca6cc] do_irqd+0x2a8/0x32f
 [8103469d] kthread+0xf5/0x128
 [81060f68] child_rip+0xa/0x12

INFO: lockdep is turned off.

I haven't seen this until I enabled lock_stat and ran the test.

> or otherwise if there are any highly contended network locks listed?

Any other ideas for debugging this?

--Vernon


[-RT] multiple streams have degraded performance

2007-06-18 Thread Vernon Mauery
In looking at the performance characteristics of my network I found that 
2.6.21.5-rt15 suffers from degraded throughput with multiple threads.  The 
test that I did this with is simply invoking 1, 2, 4, and 8 instances of 
netperf at a time and measuring the total throughput.  I have two 4-way 
machines connected with 10GbE cards.  I tested several kernels (some older 
and some newer) and found that the only thing in common was that with -RT 
kernels the performance went down with concurrent streams.

While the test was showing the numbers for receiving as well as sending, the 
receiving numbers are not reliable because that machine was running a -RT 
kernel for these tests.

I was just wondering if anyone had seen this problem before or would have any 
idea on where to start hunting for the solution.

--Vernon

The key for this is 'default' was invoked like:
netperf -c -C -l 60 -H 10.2.2.4 -t UDP_STREAM -- -m 1472 -M 1472
and '1Msock' was invoked like:
netperf -c -C -l 60 -H 10.2.2.4 -t UDP_STREAM -- -m 1472 -M 1472 -s 1M -S 1M
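The per-row totals below are aggregates across the concurrent netperf instances. A minimal sketch of how such per-instance results could be combined is shown here; this is not the actual stream_test logic, and the output field position is an assumption (classic netperf stream output carries the Mb/s figure in field 6 of its final line; adjust for your version).

```shell
#!/bin/sh
# Sketch: run N netperf instances in parallel and sum their throughput.

# sum a column of Mb/s figures, one per stream
aggregate() {
	awk '{ sum += $1 } END { printf "%.1f\n", sum }'
}

# launch N concurrent streams; each backgrounded pipeline prints the
# Mb/s figure from its netperf's final output line
run_streams() {
	n=$1
	for i in $(seq "$n"); do
		netperf -l 60 -H 10.2.2.4 -t UDP_STREAM -- -m 1472 -M 1472 |
			tail -1 | awk '{ print $6 }' &
	done
	wait    # let all streams finish before the pipe closes
}

# e.g.:  run_streams 4 | aggregate    prints the 4-stream total in Mb/s
```

Running the instances in the background and only then summing matters: starting them sequentially would serialize the streams and hide exactly the multi-stream degradation being measured.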

2.6.21
==
default: 1 streams: Send at 2844.2 Mb/s, Receive at 2840.1 Mb/s
default: 2 streams: Send at 3927.9 Mb/s, Receive at 3603.9 Mb/s
default: 4 streams: Send at 4197.4 Mb/s, Receive at 3776.3 Mb/s
default: 8 streams: Send at 4223.9 Mb/s, Receive at 3848.9 Mb/s
1Msock: 1 streams: Send at 4232.3 Mb/s, Receive at 3914.4 Mb/s
1Msock: 2 streams: Send at 5428.8 Mb/s, Receive at 3853.2 Mb/s
1Msock: 4 streams: Send at 6202.1 Mb/s, Receive at 3774.8 Mb/s
1Msock: 8 streams: Send at 6225.1 Mb/s, Receive at 3754.7 Mb/s

2.6.21.5-rt15
===
default: 1 streams: Send at 3091.6 Mb/s, Receive at 3048.1 Mb/s
default: 2 streams: Send at 3768.8 Mb/s, Receive at 3714.2 Mb/s
default: 4 streams: Send at 1873.6 Mb/s, Receive at 1825.9 Mb/s
default: 8 streams: Send at 1806.5 Mb/s, Receive at 1792.7 Mb/s
1Msock: 1 streams: Send at 3680.4 Mb/s, Receive at 3255.6 Mb/s
1Msock: 2 streams: Send at 4129.8 Mb/s, Receive at 3991.5 Mb/s
1Msock: 4 streams: Send at 1862.1 Mb/s, Receive at 1787.1 Mb/s
1Msock: 8 streams: Send at 1790.2 Mb/s, Receive at 1556.8 Mb/s
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


dmi_table counting in ipmi_si_intf.c

2005-08-09 Thread Vernon Mauery
I am working on getting one of the IBM blades to use ipmi and have run
into a problem.  The driver doesn't load because it says it can't find
the device.

dmidecode shows that there are 39 entries and that the last one is the
BMC.  I looked into dmi_table and noticed that it parses the table by
length and by number of entries.  But I found that it goes from i=1 to
i<num.  This causes it to skip the last entry in the table.  Is there a
reason it is i=1 instead of i=0?  or for that matter i<num instead of
i<=num?

Ensure that all dmi table entries get parsed.

Signed-off-by: Vernon Mauery [EMAIL PROTECTED]
---

diff -uar a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
--- a/drivers/char/ipmi/ipmi_si_intf.c  2005-08-09 08:11:41.0 -0700
+++ b/drivers/char/ipmi/ipmi_si_intf.c  2005-08-09 08:12:51.0 -0700
@@ -1690,7 +1690,7 @@ static int dmi_table(u32 base, int len, 
u8__iomem *buf;
struct dmi_header __iomem *dm;
u8__iomem *data;
-   int   i=1;
+   int   i=0;
int   status=-1;
int   intf_num = 0;
 




Re: set keyboard repeat rate: EVIOCGREP and EVIOCSREP

2005-04-08 Thread Vernon Mauery
Vernon Mauery wrote:
> I was wondering if anyone knows how to change the repeat rate on a USB 
> keyboard with a 2.4 kernel.  The system is a legacy free system (no ps2 
> port), so kbdrate does nothing.  With evdev loaded, the keyboard and mouse 
> (both USB devices) get registered with the event system and show up as 
> /dev/input/event[01].  I know the event subsystem does software key repeating 
> and was wondering how to change that.
> 
> I poked around and found the EVIOCGREP and EVIOCSREP ioctls, but when I tried 
> using them, the ioctl returned invalid parameter.  Upon further 
> investigation, I found that the ioctl definitions (located in the 
> linux/input.h header file) are not used in kernel land.  That would explain 
> why it failed, but that just means I ran into a dead end.  Were those 
> definitions legacy code from 2.2 or is it something that never got 
> implemented, only defined?  I also noticed that the defines are gone in 2.6.  
> So how _does_ one go about changing the repeat rate on a keyboard input 
> device in 2.4?
> 

Just in case anyone cares, I spent some more time poking around in the event 
code and it looks like the way to do this is exposed by the evdev 
module.  If you write to /dev/input/eventX an input_event that contains an 
event of type EV_REP with either REP_DELAY or REP_PERIOD as the code and a 
value in milliseconds, I think it is supposed to set up the software auto 
repeat for you.  But with the atkbd driver, you have to turn off hardware auto 
repeat for this to take effect.  

--Vernon


set keyboard repeat rate: EVIOCGREP and EVIOCSREP

2005-04-07 Thread Vernon Mauery
I was wondering if anyone knows how to change the repeat rate on a USB keyboard 
with a 2.4 kernel.  The system is a legacy free system (no ps2 port), so 
kbdrate does nothing.  With evdev loaded, the keyboard and mouse (both USB 
devices) get registered with the event system and show up as 
/dev/input/event[01].  I know the event subsystem does software key repeating 
and was wondering how to change that.

I poked around and found the EVIOCGREP and EVIOCSREP ioctls, but when I tried 
using them, the ioctl returned invalid parameter.  Upon further investigation, 
I found that the ioctl definitions (located in the linux/input.h header file) 
are not used in kernel land.  That would explain why it failed, but that just 
means I ran into a dead end.  Were those definitions legacy code from 2.2 or is 
it something that never got implemented, only defined?  I also noticed that the 
defines are gone in 2.6.  So how _does_ one go about changing the repeat rate 
on a keyboard input device in 2.4?

Thanks in advance for your help.

--Vernon Mauery


Re: security issue: hard disk lock

2005-04-05 Thread Vernon Mauery
Horst von Brand wrote:
> Jonas Diemer <[EMAIL PROTECTED]> said:
> 
> [...]
> 
> 
>>I figured there could be a kernel compiled-in option that will make the
>>kernel lock all drives found during bootup. then, a malicious program
>>would need to install a different kernel in order to harm the drive,
>>which would be much more secure.
> 
> 
> Doing it in initrd should be plenty of time, no need to involve the kernel.

Technically, according to the article, the only safe time to do it is in the 
BIOS or in one of their special safe CDs that freezes the drive before the boot 
loader loads.  This makes sense because a particularly malicious place to put 
something like this is a worm that attaches to your boot loader.  Then, even 
doing it in the kernel at boot time is too late.

--Vernon



Re: Keystroke simulator

2005-03-29 Thread Vernon Mauery
Mister Google wrote:
> Is there a way to simulate a keystroke to a program, ie. have a program
> send it something so that as far as it's concerned, say, the "P" key has
> been pressed?
> 
Look at the input system.  Documentation/input/input-programming.txt has a 
great tutorial on how to do this.  

--Vernon



Re: [ACPI] Call for help: list of machines with working S3

2005-02-17 Thread Vernon Mauery
Carl-Daniel Hailfinger wrote:
> 1. A first step towards better DSDTs would be to make the ASL compiler
> complain about the same things which are complained about by the
> in-kernel ACPI interpreter. An example would be the following:
> 
> acpi_processor-0496 [10] acpi_processor_get_inf: Invalid PBLK length [7]
> 
> The ASL compiler will not complain about it, yet the kernel will
> refuse to do any processor throttling with a PBLK length of 7.

This is like getting gcc to complain about run-time bugs in a program.  The 
compiler of a language (ASL in this case) compiles the language, regardless of 
run-time bugs because it can only detect syntax errors.  And iasl does that 
pretty well.  

--Vernon


Re: [ACPI] Call for help: list of machines with working S3

2005-02-15 Thread Vernon Mauery
Pavel Machek wrote:

> 
> Table of known working systems:
> 
> Model   hack (or "how to do it")
> --
> IBM TP R32 / Type 2658-MMG  none (1)
IBM TP T40 / Type 2373-MU4  none (1)
IBM TP R50p / Type 1832-22U s3_bios (2)
> Athlon HP Omnibook XE3  none (1)
> Compaq Armada E500 - P3-700 none (1) (S1 also works OK)
> IBM t41p  none (1)

--Vernon


Re: [ACPI] Call for help: list of machines with working S3

2005-02-15 Thread Vernon Mauery
Pavel Machek wrote:

 
 Table of known working systems:
 
 Model   hack (or how to do it)
 --
 IBM TP R32 / Type 2658-MMG  none (1)
IBM TP T40 / Type 2373-MU4  none (1)
IBM TP R50p / Type 1832-22U s3_bios (2)
 Athlon HP Omnibook XE3none (1)
 Compaq Armada E500 - P3-700 none (1) (S1 also works OK)
 IBM t41p  none (1)

--Vernon
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/