Re: [RFC 0/2][BNX2]: Add iSCSI support to BNX2 devices.

2007-08-04 Thread Jeff Garzik

Michael Chan wrote:

[BNX2]: Add iSCSI support to BNX2 devices.

Modify bnx2 and add a cnic driver to support some offload functions
needed by iSCSI.

Add a new open-iscsi driver to support iSCSI offload on bnx2 devices.

Signed-off-by: Anil Veerabhadrappa [EMAIL PROTECTED]
Signed-off-by: Michael Chan [EMAIL PROTECTED]

--

The complete patch is in:

ftp://[EMAIL PROTECTED]/0001-BNX2-Add-iSCSI-support-to-BNX2-devices.patch

I broke this into 2 patches and omitted the firmware blob in the next 2
emails for review.


patch #2/2 did not make it (to me personally nor to 
http://marc.info/?l=linux-netdev)


Jeff



-
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH 00/23] per device dirty throttling -v8

2007-08-04 Thread Ray Lee
(adding netdev cc:)

On 8/4/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 On Sat, 4 Aug 2007, Ingo Molnar wrote:

  * Ingo Molnar [EMAIL PROTECTED] wrote:
 
  There are positive reports in the never-ending "my system crawls like
  an XT when copying large files" bugzilla entry:
 
   http://bugzilla.kernel.org/show_bug.cgi?id=7372
 
  i forgot this entry:
 
   We recently upgraded our office to gigabit Ethernet and got some big
AMD64 / 3ware boxes for file and vmware servers... only to find them
almost useless under any kind of real load. I've built some patched
2.6.21.6 kernels (using the bdi throttling patch you mentioned) to
see if our various Debian Etch boxes run better. So far my testing
shows a *great* improvement over the stock Debian 2.6.18 kernel on
our configurations. 
 
  and bdi has been in -mm in the past i think, so we also know (to a
  certain degree) that it does not hurt those workloads that are fine
  either.
 
  [ my personal interest in this is the following regression: every time i
   start a large kernel build with DEBUG_INFO on a quad-core 4GB RAM box,
   i get up to 30 seconds complete pauses in Vim (and most other tasks),
   during plain editing of the source code. (which happens when Vim tries
   to write() to its swap/undo-file.) ]

 I have an issue that sounds like it's related.

 I've got a syslog server that's got two Opteron 246 cpu's, 16G ram, 2x140G
 15k rpm drives (fusion MPT hardware mirroring), 16x500G 7200rpm SATA
 drives on 3ware 9500 cards (software raid6) running 2.6.20.3 with hz set
 at default and preempt turned off.

 I have syslog doing buffered writes to the SCSI drives and every 5 min a
 cron job copies the data to the raid array.

 I've found that if I do anything significant on the large raid array,
 the system loses a significant amount of the UDP syslog traffic, even
 though there should be plenty of ram and cpu (and the spindles involved
 in the writes are not being touched); even a grep can cause up to 40%
 losses in the syslog traffic. I've experimented with nice levels (nicing
 down the grep and nicing up the syslogd) without a noticeable effect on the
 losses.

 I've been planning to try a new kernel with hz=1000 to see if that would
 help, and after that to experiment with the various preempt settings, but it
 sounds like the per-device queues may actually be more relevant to the
 problem.

 what would you suggest I test, and in what order and combination?

At least on a surface level, your report has some similarities to
http://lkml.org/lkml/2007/5/21/84 . In that message, John Miller
mentions several things he tried without effect:

 - I increased the max allowed receive buffer through
 /proc/sys/net/core/rmem_max and the application calls the right
 syscall. netstat -su does not show any packet receive errors.

 - After getting kernel: swapper: page allocation failure.
 order:0, mode:0x20, I increased /proc/sys/vm/min_free_kbytes

 - ixgb.txt in kernel network documentation suggests to increase
 net.core.netdev_max_backlog to 30. This did not help.

 - I also had to increase net.core.optmem_max, because the default
 value was too small for 700 multicast groups.

As they're all pretty simple to test, it may be worthwhile to give
them a shot just to rule things out.

Ray


Re: strange tcp behavior

2007-08-04 Thread Evgeniy Polyakov
On Fri, Aug 03, 2007 at 07:29:58PM +0100, Simon Arlott ([EMAIL PROTECTED]) 
wrote:
 On 03/08/07 18:39, Evgeniy Polyakov wrote:
  On Fri, Aug 03, 2007 at 05:51:42PM +0100, Simon Arlott ([EMAIL PROTECTED]) 
  wrote:
  
  17:38:03.533589 IP 192.168.7.4.50550 > 192.168.7.8.2500: R 
  82517592:82517592(0) win 1500 (raw)
  vs
  17:37:38.383085 IP 192.168.7.8.2500 > 192.168.7.4.50550: R 
  4259643274:4259643274(0) ack 1171836829 win 14360
  What happened there ?
 
 Erm... you seem to have removed parts of my message in a way that doesn't 
 make sense...

Sorry, I left the lines I thought were enough to understand your point.

 On Fri, Aug 03, 2007 at 05:51:42PM +0100, Simon Arlott wrote:
  17:38:04.536277 IP 192.168.7.8.2500 > 192.168.7.4.50550: R 1:1(0) ack 17 
  win 14360
  vs
  17:37:38.383085 IP 192.168.7.8.2500 > 192.168.7.4.50550: R 
  4259643274:4259643274(0) ack 1171836829 win 14360
  What happened there ?
 
 The first one is the RST sent when the connection is close()d without 
 reading, and the second one is the same RST but after other connection 
 has been made on the same ports using a different socket.

I understood it; your question is whether those numbers can be roughly
the same. The answer is no - it is not possible (or rather, possible but
with extremely low probability). If it does happen, that is a bug in the
ISN generation algorithm and must be fixed.

  It is the same situation as would happen if you spammed the remote
  side with RST packets with arbitrary sequence numbers in the hope that
  one will reset some connection.
 
 Isn't it still possible that the connection that got reset is left open 
 (possibly for days) until another connection using the same ports is 
 using roughly the same sequence numbers?

Of course it is possible, but it is very unlikely. Practically it is
impossible in modern OSes - ISN generation algorithms are designed to
prevent this from happening.

 -- 
 Simon Arlott

-- 
Evgeniy Polyakov


Re: Linksys Gigabit USB2.0 adapter (asix) regression

2007-08-04 Thread Erik Slagter
David Hollis wrote:

 They are either garbled or they are not passed on the wire. The
 transmitted packets are shown by tshark, but a tshark run on the other
 end of the line does not show them.

 Platform is indeed x86, to be precise: fedora 7, kernel 2.6.22-rc6, cpu
 pentium M, dell laptop inspiron 9300, ICH6.

 If you want me to test something please yell, it's no trouble at all to
 change a few lines in the driver's source and recompile the module.
 
 Could you send me a complete dmesg dump with the driver compiled with
 DEBUG enabled (at least from the USB log showing the device being
 inserted, through to the end).  I'll need to see what values it reports
 for the registers.

Please see attachment.

 Have you tried using the F7 2.6.22 kernel?  I know that has worked fine
 for me on my system.

I tried vanilla 2.6.22 and now 2.6.23-rc1.

 Please note I cannot send mail to you: (conversation with
 dhollis.dyndns.org[71.251.104.159] timed out while sending MAIL FROM)
 
 I've fixed that issue so my mail delivery isn't sporadic.

I still got it last week.
usb 1-8: new high speed USB device using ehci_hcd and address 11
usb 1-8: configuration #1 chosen from 1 choice
usb%d: asix_read_cmd() cmd=0x1e value=0x index=0x size=1
drivers/net/usb/asix.c: GPIO Status: 0x0002
usb%d: asix_write_cmd() cmd=0x0d value=0x index=0x size=0
usb%d: asix_read_cmd() cmd=0x0b value=0x0017 index=0x size=2
usb%d: asix_write_cmd() cmd=0x0e value=0x index=0x size=0
drivers/net/usb/asix.c: EEPROM index 0x17 is 0x0580
drivers/net/usb/asix.c: GPIO0: 0, PhyMode: 0
usb%d: asix_write_gpio() - value = 0x008c
usb%d: asix_write_cmd() cmd=0x1f value=0x008c index=0x size=0
usb%d: asix_write_gpio() - value = 0x003c
usb%d: asix_write_cmd() cmd=0x1f value=0x003c index=0x size=0
usb%d: asix_write_gpio() - value = 0x001c
usb%d: asix_write_cmd() cmd=0x1f value=0x001c index=0x size=0
usb%d: asix_write_gpio() - value = 0x003c
usb%d: asix_write_cmd() cmd=0x1f value=0x003c index=0x size=0
usb%d: asix_write_cmd() cmd=0x20 value=0x index=0x size=0
usb%d: asix_write_cmd() cmd=0x20 value=0x0048 index=0x size=0
usb%d: asix_write_rx_ctl() - mode = 0x
usb%d: asix_write_cmd() cmd=0x10 value=0x index=0x size=0
usb%d: asix_read_cmd() cmd=0x13 value=0x index=0x size=6
usb%d: asix_get_phy_addr()
usb%d: asix_read_cmd() cmd=0x19 value=0x index=0x size=2
usb%d: asix_get_phy_addr() returning 0x18e0
usb%d: asix_write_cmd() cmd=0x06 value=0x index=0x size=0
usb%d: asix_read_cmd() cmd=0x07 value=0x0018 index=0x0002 size=2
usb%d: asix_write_cmd() cmd=0x0a value=0x index=0x size=0
usb%d: asix_mdio_read() phy_id=0x18, loc=0x02, returns=0x0141
usb%d: asix_write_cmd() cmd=0x06 value=0x index=0x size=0
usb%d: asix_read_cmd() cmd=0x07 value=0x0018 index=0x0003 size=2
usb%d: asix_write_cmd() cmd=0x0a value=0x index=0x size=0
usb%d: asix_mdio_read() phy_id=0x18, loc=0x03, returns=0x0cc2
drivers/net/usb/asix.c: PHYID=0x01410cc2
usb%d: marvell_phy_init()
usb%d: asix_write_cmd() cmd=0x06 value=0x index=0x size=0
usb%d: asix_read_cmd() cmd=0x07 value=0x0018 index=0x001b size=2
usb%d: asix_write_cmd() cmd=0x0a value=0x index=0x size=0
usb%d: asix_mdio_read() phy_id=0x18, loc=0x1b, returns=0x848f
usb%d: MII_MARVELL_STATUS = 0x848f
usb%d: asix_mdio_write() phy_id=0x18, loc=0x14, val=0x0082
usb%d: asix_write_cmd() cmd=0x06 value=0x index=0x size=0
usb%d: asix_write_cmd() cmd=0x08 value=0x0018 index=0x0014 size=2
usb%d: asix_write_cmd() cmd=0x0a value=0x index=0x size=0
usb%d: asix_write_cmd() cmd=0x06 value=0x index=0x size=0
usb%d: asix_read_cmd() cmd=0x07 value=0x0018 index=0x0018 size=2
usb%d: asix_write_cmd() cmd=0x0a value=0x index=0x size=0
usb%d: asix_mdio_read() phy_id=0x18, loc=0x18, returns=0x4100
usb%d: MII_MARVELL_LED_CTRL (1) = 0x4100
usb%d: asix_mdio_write() phy_id=0x18, loc=0x18, val=0x4101
usb%d: asix_write_cmd() cmd=0x06 value=0x index=0x size=0
usb%d: asix_write_cmd() cmd=0x08 value=0x0018 index=0x0018 size=2
usb%d: asix_write_cmd() cmd=0x0a value=0x index=0x size=0
usb%d: asix_write_cmd() cmd=0x06 value=0x index=0x size=0
usb%d: asix_read_cmd() cmd=0x07 value=0x0018 index=0x0018 size=2
usb%d: asix_write_cmd() cmd=0x0a value=0x index=0x size=0
usb%d: asix_mdio_read() phy_id=0x18, loc=0x18, returns=0x4101
usb%d: MII_MARVELL_LED_CTRL (2) = 0x4101
usb%d: asix_mdio_write() phy_id=0x18, loc=0x00, val=0x9000
usb%d: asix_write_cmd() cmd=0x06 value=0x index=0x size=0
usb%d: asix_write_cmd() cmd=0x08 value=0x0018 index=0x size=2
usb%d: asix_write_cmd() cmd=0x0a value=0x index=0x size=0
usb%d: asix_mdio_write() phy_id=0x18, loc=0x04, val=0x05e1
usb%d: asix_write_cmd() cmd=0x06 value=0x index=0x size=0
usb%d: asix_write_cmd() cmd=0x08 value=0x0018 index=0x0004 size=2
usb%d: asix_write_cmd() cmd=0x0a value=0x index=0x size=0
usb%d: asix_mdio_write() 

Re: Distributed storage.

2007-08-04 Thread Evgeniy Polyakov
On Fri, Aug 03, 2007 at 06:19:16PM -0700, Daniel Phillips ([EMAIL PROTECTED]) 
wrote:
 It depends on the characteristics of the physical and virtual block 
 devices involved.  Slow block devices can produce surprising effects.  
 Ddsnap still qualifies as slow under certain circumstances (big 
 linear write immediately following a new snapshot). Before we added 
 throttling we would see as many as 800,000 bios in flight.  Nice to 

Mmm, sounds tasty to work with such a system :)

 know the system can actually survive this... mostly.  But memory 
 deadlock is a clear and present danger under those conditions and we 
 did hit it (not to mention that read latency sucked beyond belief). 
 
 Anyway, we added a simple counting semaphore to throttle the bio traffic 
 to a reasonable number and behavior became much nicer, but most 
 importantly, this satisfies one of the primary requirements for 
 avoiding block device memory deadlock: a strictly bounded amount of bio 
 traffic in flight.  In fact, we allow some bounded number of 
 non-memalloc bios *plus* however much traffic the mm wants to throw at 
 us in memalloc mode, on the assumption that the mm knows what it is 
 doing and imposes its own bound of in flight bios per device.   This 
 needs auditing obviously, but the mm either does that or is buggy.  In 
 practice, with this throttling in place we never saw more than 2,000 in 
 flight no matter how hard we hit it, which is about the number we were 
 aiming at.  Since we draw our reserve from the main memalloc pool, we 
 can easily handle 2,000 bios in flight, even under extreme conditions.
 
 See:
 http://zumastor.googlecode.com/svn/trunk/ddsnap/kernel/dm-ddsnap.c
 down(&info->throttle_sem);
 
 To be sure, I am not very proud of this throttling mechanism for various 
 reasons, but the thing is, _any_ throttling mechanism no matter how 
 sucky solves the deadlock problem.  Over time I want to move the 

make_request_fn is always called in process context, so we can wait in it
for memory from the mempool - although if we get there, we are already in
trouble.

I agree: any kind of high-boundary throttling must be implemented in the
device itself, since the block layer does not know what device sits at the
end of the chain or what it will need to process a given block request.

-- 
Evgeniy Polyakov


Re: Distributed storage.

2007-08-04 Thread Evgeniy Polyakov
Hi Daniel.
 On Tuesday 31 July 2007 10:13, Evgeniy Polyakov wrote:
  * storage can be formed on top of remote nodes and be exported
  simultaneously (iSCSI is peer-to-peer only, NBD requires device
  mapper and is synchronous)
 
 In fact, NBD has nothing to do with device mapper.  I use it as a 
 physical target underneath ddraid (a device mapper plugin) just like I 
 would use your DST if it proves out.

I meant that to create a storage on top of several nodes, one needs
device mapper or something similar on top of NBD itself. To further
export the resulting device, one needs another userspace NBD application,
and so on. DST simplifies that greatly.

The original DST code worked as a device mapper plugin too, but its two
additional allocations (io and clone) per block request ended up being a
showstopper for me.

-- 
Evgeniy Polyakov


Re: strange tcp behavior

2007-08-04 Thread Evgeniy Polyakov
On Fri, Aug 03, 2007 at 01:04:51PM -0700, David Miller ([EMAIL PROTECTED]) 
wrote:
 From: Evgeniy Polyakov [EMAIL PROTECTED]
 Date: Fri, 3 Aug 2007 12:22:42 +0400
 
  On Thu, Aug 02, 2007 at 07:21:34PM -0700, David Miller ([EMAIL PROTECTED]) 
  wrote:
   What in the world are we doing allowing stream sockets to autobind?
   That is totally bogus.  Even if we autobind, that won't make a connect
   happen.
  
  For an accepted socket it is a perfectly valid assumption - we could
  autobind it during the first send, or bind it during accept. It's a
  matter of taste, I think. Autobinding during the first send could end
  up being protection against DoS in some obscure rare case...
 
 accept()ed socket is by definition fully bound and already in
 established state.

That's what I meant - it is bound during accept (well, it can hardly be
called real binding), but it could be autobound during the first send to
the needed port. Maybe that was one of the intentions, I don't know.

-- 
Evgeniy Polyakov


Re: strange tcp behavior

2007-08-04 Thread Evgeniy Polyakov
On Fri, Aug 03, 2007 at 02:17:17PM -0700, David Miller ([EMAIL PROTECTED]) 
wrote:
 From: Evgeniy Polyakov [EMAIL PROTECTED]
 Date: Fri, 3 Aug 2007 12:22:42 +0400
 
  Maybe recvmsg should be changed too for symmetry?
 
 I took a look at this, and it's not 100% trivial.
 
 Let's do this later, and only sendmsg for now in order to
 fix the bug in the stable branches.

I've tested your patch; aside from an offset in one of the hooks, it
works perfectly.

Feel free to add my ack, tested-by or whatever is needed for this :)
Your patch fixes the problem.

Actually inet_sendmsg() could be renamed to something less misleading,
since it is no longer used by TCP.

-- 
Evgeniy Polyakov


Re: [RFC 0/2][BNX2]: Add iSCSI support to BNX2 devices.

2007-08-04 Thread Michael Chan
Jeff Garzik wrote:

 Michael Chan wrote:
  [BNX2]: Add iSCSI support to BNX2 devices.
  
  Modify bnx2 and add a cnic driver to support some offload functions
  needed by iSCSI.
  
  Add a new open-iscsi driver to support iSCSI offload on 
 bnx2 devices.
  
  Signed-off-by: Anil Veerabhadrappa [EMAIL PROTECTED]
  Signed-off-by: Michael Chan [EMAIL PROTECTED]
  
  --
  
  The complete patch is in:
  
  
 
ftp://[EMAIL PROTECTED]/0001-BNX2-Add-iSCSI-support-to-BNX2-devices.patch
 
  I broke this into 2 patches and omitted the firmware blob in the next 2
  emails for review.

 patch #2/2 did not make it (to me personally nor to 
 http://marc.info/?l=linux-netdev)

Probably too big.  The complete patch is available from FTP above.
I'll try to break it up some more and resend later.









Re: Distributed storage.

2007-08-04 Thread Evgeniy Polyakov
On Fri, Aug 03, 2007 at 09:04:51AM +0400, Manu Abraham ([EMAIL PROTECTED]) 
wrote:
 On 7/31/07, Evgeniy Polyakov [EMAIL PROTECTED] wrote:
 
  TODO list currently includes following main items:
  * redundancy algorithm (drop me a request of your own, but it is highly
  unlikley that Reed-Solomon based will ever be used - it is too slow
  for distributed RAID, I consider WEAVER codes)
 
 
 LDPC codes [1][2] have been replacing Turbo codes [3] in communication
 links, and we have been seeing that transition (maybe helpful - it came
 to mind seeing the mention of Turbo codes). I don't know how WEAVER
 compares to LDPC, though I found some comparisons [4][5]. But looking at
 the fault-tolerance figures, I guess WEAVER is much better.
 
 [1] http://www.ldpc-codes.com/
 [2] http://portal.acm.org/citation.cfm?id=1240497
 [3] http://en.wikipedia.org/wiki/Turbo_code
 [4] 
 http://domino.research.ibm.com/library/cyberdig.nsf/papers/BD559022A190D41C85257212006CEC11/$File/rj10391.pdf
 [5] http://hplabs.hp.com/personal/Jay_Wylie/publications/wylie_dsn2007.pdf

LDPC codes require solving an N-order matrix over a finite field - exactly
the reason I do not want to use Reed-Solomon codes, even with an optimized
non-Vandermonde matrix. I will investigate LDPC further though.
Turbo codes are to RS codes roughly what stream ciphers are to block
ciphers. The transport medium in data storage is reliable, otherwise such
storage would not even exist.

-- 
Evgeniy Polyakov


Re: [PATCH 00/23] per device dirty throttling -v8

2007-08-04 Thread david

On Sat, 4 Aug 2007, Ray Lee wrote:


(adding netdev cc:)

On 8/4/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

On Sat, 4 Aug 2007, Ingo Molnar wrote:


* Ingo Molnar [EMAIL PROTECTED] wrote:


There are positive reports in the never-ending "my system crawls like
an XT when copying large files" bugzilla entry:

 http://bugzilla.kernel.org/show_bug.cgi?id=7372


i forgot this entry:

 We recently upgraded our office to gigabit Ethernet and got some big
  AMD64 / 3ware boxes for file and vmware servers... only to find them
  almost useless under any kind of real load. I've built some patched
  2.6.21.6 kernels (using the bdi throttling patch you mentioned) to
  see if our various Debian Etch boxes run better. So far my testing
  shows a *great* improvement over the stock Debian 2.6.18 kernel on
  our configurations. 

and bdi has been in -mm in the past i think, so we also know (to a
certain degree) that it does not hurt those workloads that are fine
either.

[ my personal interest in this is the following regression: every time i
 start a large kernel build with DEBUG_INFO on a quad-core 4GB RAM box,
 i get up to 30 seconds complete pauses in Vim (and most other tasks),
 during plain editing of the source code. (which happens when Vim tries
 to write() to its swap/undo-file.) ]


I have an issue that sounds like it's related.

I've got a syslog server that's got two Opteron 246 cpu's, 16G ram, 2x140G
15k rpm drives (fusion MPT hardware mirroring), 16x500G 7200rpm SATA
drives on 3ware 9500 cards (software raid6) running 2.6.20.3 with hz set
at default and preempt turned off.

I have syslog doing buffered writes to the SCSI drives and every 5 min a
cron job copies the data to the raid array.

I've found that if I do anything significant on the large raid array,
the system loses a significant amount of the UDP syslog traffic, even
though there should be plenty of ram and cpu (and the spindles involved
in the writes are not being touched); even a grep can cause up to 40%
losses in the syslog traffic. I've experimented with nice levels (nicing
down the grep and nicing up the syslogd) without a noticeable effect on the
losses.

I've been planning to try a new kernel with hz=1000 to see if that would
help, and after that to experiment with the various preempt settings, but it
sounds like the per-device queues may actually be more relevant to the
problem.

what would you suggest I test, and in what order and combination?


At least on a surface level, your report has some similarities to
http://lkml.org/lkml/2007/5/21/84 . In that message, John Miller
mentions several things he tried without effect:

 - I increased the max allowed receive buffer through
 /proc/sys/net/core/rmem_max and the application calls the right
 syscall. netstat -su does not show any packet receive errors.

 - After getting kernel: swapper: page allocation failure.
 order:0, mode:0x20, I increased /proc/sys/vm/min_free_kbytes

 - ixgb.txt in kernel network documentation suggests to increase
 net.core.netdev_max_backlog to 30. This did not help.

 - I also had to increase net.core.optmem_max, because the default
 value was too small for 700 multicast groups.

As they're all pretty simple to test, it may be worthwhile to give
them a shot just to rule things out.


I will try them later today.

I forgot to mention that the filesystems are ext2 for the mirrored high 
speed disks and xfs for the 8TB array.


David Lang


[PATCH][RESEND] Avoid possible NULL pointer deref in 3c359 driver

2007-08-04 Thread Jesper Juhl
(Resending old patch originally submitted at 1/7-2007 02:19)


In xl_interrupt(), if dev_id is NULL, the line 
   struct xl_private *xl_priv = (struct xl_private *)dev->priv;
will cause a NULL pointer dereference. However, if we move 
that assignment below the 'if' statement that tests for a NULL 
'dev', then that NULL dereference can never happen.
It never hurts to be safe :-)


Signed-off-by: Jesper Juhl [EMAIL PROTECTED]
--- 

diff --git a/drivers/net/tokenring/3c359.c b/drivers/net/tokenring/3c359.c
index e22a3f5..671f4da 100644
--- a/drivers/net/tokenring/3c359.c
+++ b/drivers/net/tokenring/3c359.c
@@ -1044,15 +1044,17 @@ static void xl_freemem(struct net_device *dev)
 static irqreturn_t xl_interrupt(int irq, void *dev_id) 
 {
	struct net_device *dev = (struct net_device *)dev_id;
-	struct xl_private *xl_priv =(struct xl_private *)dev->priv;
-	u8 __iomem * xl_mmio = xl_priv->xl_mmio ; 
-	u16 intstatus, macstatus  ;
+	struct xl_private *xl_priv;
+	u8 __iomem * xl_mmio; 
+	u16 intstatus, macstatus;
 
	if (!dev) { 
-		printk(KERN_WARNING "Device structure dead, aaa !\n") ;
+		printk(KERN_WARNING "3c359: Device structure dead, aaa!\n");
		return IRQ_NONE; 
	}
 
+	xl_priv = (struct xl_private *)dev->priv;
+	xl_mmio = xl_priv->xl_mmio;
	intstatus = readw(xl_mmio + MMIO_INTSTATUS) ;  

	if (!(intstatus & 1)) /* We didn't generate the interrupt */





Re: [PATCH] ixgbe: New driver for Pci-Express 10GbE 82598 support

2007-08-04 Thread Bill Fink
On Fri, 3 Aug 2007, Auke Kok wrote:

 This patch adds support for the Intel 82598 PCI-Express 10GbE
 chipset. Devices will be available on the market soon.
 
 This version of the driver is largely the same as the last release:
 
 * Driver uses a single RX and single TX queue, each using 1 MSI-X
   irq vector.
 * Driver runs in NAPI mode only
 * Driver is largely multiqueue-ready (TM)

...

 diff --git a/Documentation/networking/ixgbe.txt 
 b/Documentation/networking/ixgbe.txt
 new file mode 100644
 index 000..823d69c
 --- /dev/null
 +++ b/Documentation/networking/ixgbe.txt
 @@ -0,0 +1,72 @@
 +Linux* Base Driver for the 10 Gigabit Family of Adapters
 +
 +
 +July 09, 2007
 +
 +
 +Contents
 +
 +
 +- In This Release
 +- Identifying Your Adapter
 +- Command Line Parameters

There is no section Command Line Parameters in the document.

-Bill



 +- Support
 +
 +In This Release
 +===
 +
 +This file describes the Linux* Base Driver for the 10 Gigabit PCI Express
 +Family of Adapters.  This driver supports the 2.6.x kernel. This driver
 +includes support for Itanium(R)2-based systems.
 +
 +The following features are now available in supported kernels:
 + - Native VLANs
 + - Channel Bonding (teaming)
 + - SNMP
 +
 +Channel Bonding documentation can be found in the Linux kernel source:
 +/Documentation/networking/bonding.txt
 +
 +Instructions on updating ethtool can be found in the section Additional
 +Configurations later in this document.
 +
 +
 +Identifying Your Adapter
 +
 +
 +The following Intel network adapters are compatible with the drivers in this
 +release:
 +
 +Controller  Adapter Name Physical Layer
 +--   --
 +82598   Intel(R) 10GbE-LR/LRM/SR
 +Server Adapters  10G Base -SR (850 nm optical fiber)
 + 10G Base -LRM (850 nm optical fiber)
 + 10G Base -LR (1310 nm optical fiber)
 +
 +For more information on how to identify your adapter, go to the Adapter 
 +Driver ID Guide at:
 +
 +http://support.intel.com/support/network/sb/CS-012904.htm
 +
 +For the latest Intel network drivers for Linux, refer to the following
 +website.  In the search field, enter your adapter name or type, or use the
 +networking link on the left to search for your adapter:
 +
 +http://downloadfinder.intel.com/scripts-df/support_intel.asp
 +
 +
 +Support
 +===
 +
 +For general information, go to the Intel support website at:
 +
 +http://support.intel.com
 +
 +or the Intel Wired Networking project hosted by Sourceforge at:
 +
 +http://sourceforge.net/projects/e1000
 +
 +If an issue is identified with the released source code on the supported
 +kernel with a supported adapter, email the specific information related
 +to the issue to [EMAIL PROTECTED]


Re: [PATCH] ixgbe: New driver for Pci-Express 10GbE 82598 support

2007-08-04 Thread Kok, Auke

Bill Fink wrote:

On Fri, 3 Aug 2007, Auke Kok wrote:

+
+Contents
+
+
+- In This Release
+- Identifying Your Adapter
+- Command Line Parameters


There is no section Command Line Parameters in the document.

-Bill



hmm yes, I removed all of them :)

Obviously I'll update the index. Thanks

Auke


Re: strange tcp behavior

2007-08-04 Thread David Miller
From: Evgeniy Polyakov [EMAIL PROTECTED]
Date: Sat, 4 Aug 2007 20:51:51 +0400

 On Fri, Aug 03, 2007 at 02:17:17PM -0700, David Miller ([EMAIL PROTECTED]) 
 wrote:
  From: Evgeniy Polyakov [EMAIL PROTECTED]
  Date: Fri, 3 Aug 2007 12:22:42 +0400
  
   Maybe recvmsg should be changed too for symmetry?
  
  I took a look at this, and it's not 100% trivial.
  
  Let's do this later, and only sendmsg for now in order to
  fix the bug in the stable branches.
 
 I've tested your patch; aside from an offset in one of the hooks, it
 works perfectly.
 
 Feel free to add my ack, tested-by or whatever is needed for this :)
 Your patch fixes the problem.

It was already merged into Linus's tree long before you found a chance to
test it :-)  So it would be difficult for me to do so.



Re: [patch 2.6.23-rc1] add xt_statistic.h to the header list for usermode programs

2007-08-04 Thread David Miller
From: Chuck Ebbert [EMAIL PROTECTED]
Date: Fri, 03 Aug 2007 13:28:09 -0400

 Add xt_statistic.h to the list of headers to install.
 
 Apparently needed to build newer versions of iptables.
 
 Signed-off-by: Chuck Ebbert [EMAIL PROTECTED]

Applied, thanks.