slow write performance with software RAID on nvme storage

2019-03-29 Thread Rick Warner
Hi All, We've been testing a 24-drive NVMe software RAID and getting far lower write speeds than expected. The drives are connected through PLX chips such that 12 drives are on one x16 connection and the other 12 drives use another x16 link. The system is a Supermicro 2029U-TN24R4T. The drives
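
For context on why 12 NVMe drives behind a single x16 link can bottleneck, a rough ceiling can be estimated from PCIe payload bandwidth. This is a back-of-the-envelope sketch; the PCIe generation and per-lane rate below are assumptions, not stated in the thread:

```python
def pcie_bw_gbps(lanes: int, gbps_per_lane: float = 0.985) -> float:
    """Approximate usable PCIe Gen3 bandwidth in GB/s (~985 MB/s per lane
    after 128b/130b encoding; protocol overhead is ignored here)."""
    return lanes * gbps_per_lane

def per_drive_ceiling(lanes: int, drives: int) -> float:
    """Upper bound on per-drive throughput when `drives` share one link."""
    return pcie_bw_gbps(lanes) / drives

# 12 drives sharing one x16 Gen3 link: roughly 1.3 GB/s each at best,
# well below what a single x4 NVMe drive can sustain on its own link.
print(round(per_drive_ceiling(16, 12), 2))
```

So even before md enters the picture, aggregate writes across all 24 drives are capped by the two shared uplinks, not by the drives themselves.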

WARNING: Software Raid 0 on SSD's and discard corrupts data

2015-05-21 Thread Holger Kiehl
Hello, all users using a software RAID 0 on SSDs with discard should disable discard if they use any recent kernel since mid-April 2015. The bug was introduced by commit 47d68979cc968535cb87f3e5f2e6a3533ea48fbd and the fix is not yet in Linus' tree. The fix can be found here: http
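
A quick way to audit for the risky configuration is to look for filesystems mounted with the `discard` option in `/proc/mounts`-style output. This is a minimal sketch; the helper name and sample data are illustrative, not from the thread:

```python
def discard_mounts(mounts_text: str) -> list:
    """Return devices mounted with the 'discard' option, parsed from
    /proc/mounts-style lines (device, mountpoint, fstype, options, ...)."""
    hits = []
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and "discard" in fields[3].split(","):
            hits.append(fields[0])
    return hits

sample = """\
/dev/md0 / ext4 rw,discard,errors=remount-ro 0 0
/dev/sda1 /boot ext2 rw,relatime 0 0
"""
print(discard_mounts(sample))  # ['/dev/md0']
```

Per the thread's advice, any md RAID 0 device that shows up here should be remounted without `discard` until a fixed kernel is running.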

Anomaly with 2 x 840Pro SSDs in software raid 1

2013-09-20 Thread Andrei Banu
Hello, We have a troubling server fitted with 2 840Pro Samsung SSDs. Besides other problems raised here a while ago (to which I have still found no solution), we have one more anomaly (or so I believe). Although both SSDs worked 100% of the time, their wear is very different. /dev/sda

Re: oops with 2.6.23.1, marvel, software raid, reiserfs and samba

2007-12-16 Thread Andrew Morton
t pcix sata controller, > > > and a nvidia pci based video card. > > > > > > I have the os on a pata drive, and have made a software raid array > > > consisting of 4 sata drives attached to the pcix sata controller. > > > I created the array, and formatt

Re: oops with 2.6.23.1, marvel, software raid, reiserfs and samba

2007-12-16 Thread jeffunit
of ram, an intel stl-2 motherboard. > It also has a promise 100 tx2 pata controller, > a supermicro marvell based 8 port pcix sata controller, > and a nvidia pci based video card. > > I have the os on a pata drive, and have made a software raid array > consisting of 4 sata driv

Re: oops with 2.6.23.1, marvel, software raid, reiserfs and samba

2007-12-16 Thread Herbert Xu
On Sun, Dec 16, 2007 at 07:56:56PM +0800, Herbert Xu wrote: > > What's spooky is that I just did a google and we've had reports > since 1998 all crashing on exactly the same line in tcp_recvmsg. However, there's been no reports at all since 2000 apart from this one so the earlier ones are

Re: oops with 2.6.23.1, marvel, software raid, reiserfs and samba

2007-12-16 Thread Herbert Xu
Andrew Morton <[EMAIL PROTECTED]> wrote: > >> Dec 7 17:20:53 sata_fileserver kernel: Code: 6c 39 df 74 59 8d b6 00 >> 00 00 00 85 db 74 4f 8b 55 cc 8d 43 20 8b 0a 3b 48 18 0f 88 f4 05 00 >> 00 89 ce 2b 70 18 8b 83 90 00 00 00 <0f> b6 50 0d 89 d0 83 e0 02 3c >> 01 8b 43 50 83 d6 ff 39 c6 0f 82

Re: oops with 2.6.23.1, marvel, software raid, reiserfs and samba

2007-12-16 Thread Andrew Morton
so has a promise 100 tx2 pata controller, > a supermicro marvell based 8 port pcix sata controller, > and a nvidia pci based video card. > > I have the os on a pata drive, and have made a software raid array > consisting of 4 sata drives attached to the pcix sata controlle

oops with 2.6.23.1, marvel, software raid, reiserfs and samba

2007-12-07 Thread jeffunit
, and a nvidia pci based video card. I have the os on a pata drive, and have made a software raid array consisting of 4 sata drives attached to the pcix sata controller. I created the array, and formatted with reiserfs 3.6 I have run bonnie++ (filesystem benchmark) on the array without incident. When I use

Re: [patch v5 1/1] md: Software Raid autodetect dev list not array

2007-08-29 Thread Randy Dunlap
Michael J. Evans wrote: From: Michael J. Evans <[EMAIL PROTECTED]> In current release kernels the md module (Software RAID) uses a static array (dev_t[128]) to store partition/device info temporarily for autostart. This patch replaces that static array with a list. Signed-off-by: Mic

[patch v5 1/1] md: Software Raid autodetect dev list not array

2007-08-29 Thread Michael J. Evans
From: Michael J. Evans <[EMAIL PROTECTED]> In current release kernels the md module (Software RAID) uses a static array (dev_t[128]) to store partition/device info temporarily for autostart. This patch replaces that static array with a list. Signed-off-by: Michael J. Evans <[EMAIL
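
The motivation for the patch is easy to demonstrate: a fixed `dev_t[128]` array silently drops any device detected past its capacity, while a list grows as needed. The following is a toy model of the two behaviours (Python stand-ins for the kernel's C structures; the function names are illustrative):

```python
def autodetect_fixed(devices, cap=128):
    """Old behaviour: a static array keeps at most `cap` entries;
    anything detected after that is silently lost."""
    return devices[:cap]

def autodetect_list(devices):
    """Patched behaviour: an unbounded list records every detected device."""
    return list(devices)

# 130 detected partitions: the static array loses the last two.
devs = ["dev%03d" % i for i in range(130)]
print(len(autodetect_fixed(devs)), len(autodetect_list(devs)))  # 128 130
```

The real patch does the same thing with a kernel linked list of small allocated nodes instead of Python lists, so autostart no longer has a hard 128-device limit.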

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Randy Dunlap
Michael Evans wrote: On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: Michael Evans wrote: On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: Michael Evans wrote: On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote: Michael Evans wrote: Oh, I see. I forgot about the changelogs. I'd

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Michael Evans
On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: > Michael Evans wrote: > > On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: > >> Michael Evans wrote: > >>> On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote: > Michael Evans wrote: > > Oh, I see. I forgot about the changelogs.

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Randy Dunlap
Michael Evans wrote: On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: Michael Evans wrote: On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote: Michael Evans wrote: Oh, I see. I forgot about the changelogs. I'd send out version 5 now, but I'm not sure what kernel version to make the

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Michael Evans
On 8/28/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: > Michael Evans wrote: > > On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote: > >> Michael Evans wrote: > >>> Oh, I see. I forgot about the changelogs. I'd send out version 5 > >>> now, but I'm not sure what kernel version to make the patch

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Randy Dunlap
Michael Evans wrote: On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote: Michael Evans wrote: Oh, I see. I forgot about the changelogs. I'd send out version 5 now, but I'm not sure what kernel version to make the patch against. 2.6.23-rc4 is on kernel.org and I don't see any git snapshots.

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Michael Evans
On 8/28/07, Bill Davidsen <[EMAIL PROTECTED]> wrote: > Michael Evans wrote: > > Oh, I see. I forgot about the changelogs. I'd send out version 5 > > now, but I'm not sure what kernel version to make the patch against. > > 2.6.23-rc4 is on kernel.org and I don't see any git snapshots. > >

Re: [patch v5 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Michael J. Evans
From: Michael J. Evans <[EMAIL PROTECTED]> In current release kernels the md module (Software RAID) uses a static array (dev_t[128]) to store partition/device info temporarily for autostart. This patch replaces that static array with a list. Signed-off-by: Michael J. Evans <[EMAIL

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Michael J. Evans
On Tuesday 28 August 2007, Jan Engelhardt wrote: > > On Aug 28 2007 06:08, Michael Evans wrote: > > > >Oh, I see. I forgot about the changelogs. I'd send out version 5 > >now, but I'm not sure what kernel version to make the patch against. > >2.6.23-rc4 is on kernel.org and I don't see any git

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Bill Davidsen
Michael Evans wrote: Oh, I see. I forgot about the changelogs. I'd send out version 5 now, but I'm not sure what kernel version to make the patch against. 2.6.23-rc4 is on kernel.org and I don't see any git snapshots. Additionally I never could tell what git tree was the 'mainline' as it isn't

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Jan Engelhardt
On Aug 28 2007 06:08, Michael Evans wrote: > >Oh, I see. I forgot about the changelogs. I'd send out version 5 >now, but I'm not sure what kernel version to make the patch against. >2.6.23-rc4 is on kernel.org and I don't see any git snapshots. 2.6.23-rc4 is a snapshot in itself, a tagged one

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-28 Thread Michael Evans
On 8/27/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: > Michael J. Evans wrote: > > On Monday 27 August 2007, Randy Dunlap wrote: > >> On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote: > >> > >>> = > >>> ---

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-27 Thread Randy Dunlap
Michael J. Evans wrote: On Monday 27 August 2007, Randy Dunlap wrote: On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote: = --- linux/drivers/md/md.c.orig 2007-08-21 03:19:42.511576248 -0700 +++ linux/drivers/md/md.c

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-27 Thread Michael J. Evans
On Monday 27 August 2007, Randy Dunlap wrote: > On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote: > > > = > > --- linux/drivers/md/md.c.orig 2007-08-21 03:19:42.511576248 -0700 > > +++ linux/drivers/md/md.c 2007-08-21

Re: [patch v4 1/1] md: Software Raid autodetect dev list not array

2007-08-27 Thread Michael J. Evans
From: Michael J. Evans <[EMAIL PROTECTED]> In current release kernels the md module (Software RAID) uses a static array (dev_t[128]) to store partition/device info temporarily for autostart. This patch replaces that static array with a list. Signed-off-by: Michael J. Evans <[EMAIL

Re: [patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-27 Thread Randy Dunlap
On Mon, 27 Aug 2007 15:16:21 -0700 Michael J. Evans wrote: > Note: between 2.6.22 and 2.6.23-rc3-git5 > rdev = md_import_device(dev,0, 0); > became > rdev = md_import_device(dev,0, 90); > So the patch has been edited to patch around that line. (might be fuzzy) so

[patch v3 1/1] md: Software Raid autodetect dev list not array

2007-08-27 Thread Michael J. Evans
From: Michael J. Evans <[EMAIL PROTECTED]> In current release kernels the md module (Software RAID) uses a static array (dev_t[128]) to store partition/device info temporarily for autostart. This patch replaces that static array with a list. Signed-off-by: Michael J. Evans <[EMAIL

Re: [patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-27 Thread Michael Evans
On 8/26/07, Kyle Moffett <[EMAIL PROTECTED]> wrote: > On Aug 26, 2007, at 08:20:45, Michael Evans wrote: > > Also, I forgot to mention, the reason I added the counters was > > mostly for debugging. However they're also as useful in the same > > way that listing the partitions when a new disk is

Re: [patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-26 Thread Kyle Moffett
On Aug 26, 2007, at 08:20:45, Michael Evans wrote: Also, I forgot to mention, the reason I added the counters was mostly for debugging. However they're also as useful in the same way that listing the partitions when a new disk is added can be (in fact this augments that and the existing

Re: [patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-26 Thread Michael Evans
On 8/26/07, Randy Dunlap <[EMAIL PROTECTED]> wrote: > On Sun, 26 Aug 2007 04:51:24 -0700 Michael J. Evans wrote: > > > From: Michael J. Evans <[EMAIL PROTECTED]> > > > > Is there any way to tell the user what device (or partition?) is > being skipped? This printk should just print (confirm) that

Re: [patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-26 Thread Randy Dunlap
On Sun, 26 Aug 2007 04:51:24 -0700 Michael J. Evans wrote: > From: Michael J. Evans <[EMAIL PROTECTED]> > > In current release kernels the md module (Software RAID) uses a static array > (dev_t[128]) to store partition/device info temporarily for autostart. > > This pa

Re: [patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-26 Thread Michael Evans
On 8/26/07, Jan Engelhardt <[EMAIL PROTECTED]> wrote: > > On Aug 26 2007 04:51, Michael J. Evans wrote: > > { > >- if (dev_cnt >= 0 && dev_cnt < 127) > >- detected_devices[dev_cnt++] = dev; > >+ struct detected_devices_node *node_detected_dev; > >+ node_detected_dev =

Re: [patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-26 Thread Jan Engelhardt
On Aug 26 2007 04:51, Michael J. Evans wrote: > { >- if (dev_cnt >= 0 && dev_cnt < 127) >- detected_devices[dev_cnt++] = dev; >+ struct detected_devices_node *node_detected_dev; >+ node_detected_dev = kzalloc(sizeof(*node_detected_dev), GFP_KERNEL);\ What's the \ good

Re: [patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-26 Thread Michael Evans
Also, I forgot to mention, the reason I added the counters was mostly for debugging. However they're also as useful in the same way that listing the partitions when a new disk is added can be (in fact this augments that and the existing messages the autodetect routines provide). As for using

[patch v2 1/1] md: Software Raid autodetect dev list not array

2007-08-26 Thread Michael J. Evans
From: Michael J. Evans <[EMAIL PROTECTED]> In current release kernels the md module (Software RAID) uses a static array (dev_t[128]) to store partition/device info temporarily for autostart. This patch replaces that static array with a list. Signed-off-by: Michael J. Evans <[EMAIL

Re: [patch 1/1] md: Software Raid autodetect dev list not array

2007-08-23 Thread Michael Evans
TED]> wrote: > On Wednesday August 22, [EMAIL PROTECTED] wrote: > > From: Michael J. Evans <[EMAIL PROTECTED]> > > > > In current release kernels the md module (Software RAID) uses a static array > > (dev_t[128]) to store partition/device info temporarily for

Re: [patch 1/1] md: Software Raid autodetect dev list not array

2007-08-23 Thread Neil Brown
On Wednesday August 22, [EMAIL PROTECTED] wrote: > From: Michael J. Evans <[EMAIL PROTECTED]> > > In current release kernels the md module (Software RAID) uses a static array > (dev_t[128]) to store partition/device info temporarily for autostart. > > This patch re

[PATCH] [442/2many] MAINTAINERS - SOFTWARE RAID (Multiple Disks) SUPPORT

2007-08-13 Thread joe
Add file pattern to MAINTAINER entry Signed-off-by: Joe Perches <[EMAIL PROTECTED]> diff --git a/MAINTAINERS b/MAINTAINERS index d17ae3d..29a2179 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -4205,6 +4205,8 @@ P:Neil Brown M: [EMAIL PROTECTED] L: [EMAIL PROTECTED] S:

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Theodore Tso
On Mon, Jul 30, 2007 at 09:39:39PM +0200, Miklos Szeredi wrote: > > Extrapolating these %cpu number makes ZFS the fastest. > > > > Are you sure these numbers are correct? > > Note, that %cpu numbers for fuse filesystems are inherently skewed, > because the CPU usage of the filesystem process

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Justin Piszcz
On Mon, 30 Jul 2007, Miklos Szeredi wrote: Extrapolating these %cpu number makes ZFS the fastest. Are you sure these numbers are correct? Note, that %cpu numbers for fuse filesystems are inherently skewed, because the CPU usage of the filesystem process itself is not taken into account.

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Dave Kleikamp
On Mon, 2007-07-30 at 10:29 -0400, Justin Piszcz wrote: > Overall JFS seems the fastest but reviewing the mailing list for JFS it > seems like there are a lot of problems, especially when people who use JFS > 1 > year, their speed goes to 5 MiB/s over time and the defragfs tool has been >

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Miklos Szeredi
> Extrapolating these %cpu number makes ZFS the fastest. > > Are you sure these numbers are correct? Note, that %cpu numbers for fuse filesystems are inherently skewed, because the CPU usage of the filesystem process itself is not taken into account. So the numbers are not all that good, but
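
The "extrapolating these %cpu numbers" point can be made concrete: dividing measured throughput by CPU fraction estimates what a filesystem could do if it were CPU-bound at 100%. A hedged sketch follows (the figures are placeholders, not the thread's actual bonnie++ results), with the caveat the thread raises that FUSE %cpu excludes the filesystem process itself, so the extrapolation overstates a FUSE filesystem:

```python
def cpu_normalized_throughput(mb_per_s: float, cpu_pct: float) -> float:
    """Extrapolate throughput to 100% CPU. Only meaningful when the
    workload is actually CPU-bound and cpu_pct covers every process
    involved (for FUSE it does not, which skews the comparison)."""
    if not 0 < cpu_pct <= 100:
        raise ValueError("cpu_pct must be in (0, 100]")
    return mb_per_s * 100.0 / cpu_pct

# Placeholder figures: 50 MB/s at 10% CPU extrapolates to 500 MB/s,
# which is why a low %cpu reading can make a filesystem look fastest.
print(cpu_normalized_throughput(50, 10))  # 500.0
```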

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Al Boldi
Justin Piszcz wrote: > CONFIG: > > Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems. > Kernel was 2.6.21 or 2.6.22, did these awhile ago. > Hardware was SATA with PCI-e only, nothing on the PCI bus. > > ZFS was userspace+fuse of course. Wow

bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Justin Piszcz
CONFIG: Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems. Kernel was 2.6.21 or 2.6.22; did these a while ago. Hardware was SATA with PCI-e only, nothing on the PCI bus. ZFS was userspace+fuse of course. Reiser was V3. EXT4 was created using the recommended options on its

bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Justin Piszcz
CONFIG: Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems. Kernel was 2.6.21 or 2.6.22, did these awhile ago. Hardware was SATA with PCI-e only, nothing on the PCI bus. ZFS was userspace+fuse of course. Reiser was V3. EXT4 was created using the recommended options on its

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Al Boldi
Justin Piszcz wrote: CONFIG: Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems. Kernel was 2.6.21 or 2.6.22, did these awhile ago. Hardware was SATA with PCI-e only, nothing on the PCI bus. ZFS was userspace+fuse of course. Wow! Userspace and still that efficient

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Miklos Szeredi
Extrapolating these %cpu number makes ZFS the fastest. Are you sure these numbers are correct? Note, that %cpu numbers for fuse filesystems are inherently skewed, because the CPU usage of the filesystem process itself is not taken into account. So the numbers are not all that good, but

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Dave Kleikamp
On Mon, 2007-07-30 at 10:29 -0400, Justin Piszcz wrote: Overall JFS seems the fastest but reviewing the mailing list for JFS it seems like there are a lot of problems, especially when people use JFS for 1 year, their speed goes to 5 MiB/s over time and the defragfs tool has been removed(?)

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Justin Piszcz
On Mon, 30 Jul 2007, Miklos Szeredi wrote: Extrapolating these %cpu number makes ZFS the fastest. Are you sure these numbers are correct? Note, that %cpu numbers for fuse filesystems are inherently skewed, because the CPU usage of the filesystem process itself is not taken into account.

Re: bonnie++ benchmarks for ext2,ext3,ext4,jfs,reiserfs,xfs,zfs on software raid 5

2007-07-30 Thread Theodore Tso
On Mon, Jul 30, 2007 at 09:39:39PM +0200, Miklos Szeredi wrote: Extrapolating these %cpu number makes ZFS the fastest. Are you sure these numbers are correct? Note, that %cpu numbers for fuse filesystems are inherently skewed, because the CPU usage of the filesystem process itself is

Re: Software RAID 5 - Two reads are faster than one on a SW RAID5?

2007-07-20 Thread Justin Piszcz
On Fri, 20 Jul 2007, Lennart Sorensen wrote: On Fri, Jul 20, 2007 at 09:58:50AM -0400, Justin Piszcz wrote: I have a multi-core Q6600 CPU on a 10-disk Raptor RAID 5 running XFS. I just pulled down the Debian Etch 4.0 DVD ISO's, one for x86 and one for x86_64, when I ran md5sum -c MD5SUMS, I

Re: Software RAID 5 - Two reads are faster than one on a SW RAID5?

2007-07-20 Thread Lennart Sorensen
On Fri, Jul 20, 2007 at 09:58:50AM -0400, Justin Piszcz wrote: > I have a multi-core Q6600 CPU on a 10-disk Raptor RAID 5 running XFS. > > I just pulled down the Debian Etch 4.0 DVD ISO's, one for x86 and one for > x86_64, when I ran md5sum -c MD5SUMS, I see ~280-320MB/s. When I ran the >

Software RAID 5 - Two reads are faster than one on a SW RAID5?

2007-07-20 Thread Justin Piszcz
I have a multi-core Q6600 CPU on a 10-disk Raptor RAID 5 running XFS. I just pulled down the Debian Etch 4.0 DVD ISO's, one for x86 and one for x86_64, when I ran md5sum -c MD5SUMS, I see ~280-320MB/s. When I ran the second one I see upwards of what I should be seeing 500-520MB/s. NOTE::
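The effect described above (two concurrent sequential readers outrunning a single one on an md RAID5) is easy to reproduce. A hedged sketch: the two file arguments are placeholders for large files living on the array (e.g. the two DVD ISOs), and dropping the page cache between runs needs root:

```shell
#!/bin/bash
# Hedged sketch: time one sequential reader, then two concurrent
# readers, over the same md array. More outstanding sequential I/O
# tends to keep more member disks busy at once.
read_one_then_two() {
    local f1=$1 f2=$2
    sync; { echo 3 > /proc/sys/vm/drop_caches; } 2>/dev/null || true
    time md5sum "$f1"                              # one stream
    sync; { echo 3 > /proc/sys/vm/drop_caches; } 2>/dev/null || true
    time { md5sum "$f1" & md5sum "$f2" & wait; }   # two streams
}
```

Call it as `read_one_then_two file-a.iso file-b.iso` and compare the two `time` reports; per-device utilization during each phase can be watched with `iostat -x 1`.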

Re: Help needed: Partitioned software raid > 2TB

2007-06-16 Thread Alexander E. Patrakov
Jan Engelhardt wrote: I am not sure (would have to check again), but I believe both opensuse and fedora (the latter of which uses LVM for all partitions by default) have that working, while still using GRUB. Keyword: partitions. I.e., they partition the hard drive (so that the first 31

Re: Help needed: Partitioned software raid > 2TB

2007-06-16 Thread Jan Engelhardt
On Jun 16 2007 11:38, Alexander E. Patrakov wrote: > Jan Engelhardt wrote: >> On Jun 15 2007 16:03, Christian Schmidt wrote: > >> > Thanks for the clarification. I didn't use LVM on the device on purpose, >> > as root on LVM requires initrd (which I strongly dislike as >> >

Re: Help needed: Partitioned software raid > 2TB

2007-06-15 Thread Alexander E. Patrakov
Jan Engelhardt wrote: On Jun 15 2007 16:03, Christian Schmidt wrote: Thanks for the clarification. I didn't use LVM on the device on purpose, as root on LVM requires initrd (which I strongly dislike as yet-another-point-of-failure). As LVM is on the large partition anyway I'll just add the

Re: Help needed: Partitioned software raid > 2TB

2007-06-15 Thread Jan Engelhardt
On Jun 15 2007 16:03, Christian Schmidt wrote: >Hi Andi, > >Andi Kleen wrote: >> Christian Schmidt <[EMAIL PROTECTED]> writes: >>> Where is the inherent limit? The partitioning software, or partitioning >>> all by itself? >> >> DOS style partitioning don't support more than 2TB. You either need

Re: Help needed: Partitioned software raid > 2TB

2007-06-15 Thread Christian Schmidt
Hi Andi, Andi Kleen wrote: > Christian Schmidt <[EMAIL PROTECTED]> writes: >> Where is the inherent limit? The partitioning software, or partitioning >> all by itself? > > DOS style partitioning don't support more than 2TB. You either need > to use EFI partitions (e.g. using parted) or LVM.
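The 2TB ceiling mentioned above falls directly out of the DOS/MBR format: a partition entry stores its start and length as 32-bit sector counts. A quick arithmetic check of that limit:

```shell
#!/bin/bash
# An MBR partition entry holds start and length as 32-bit sector
# counts, so with 512-byte sectors the addressable limit is
# 2^32 * 512 bytes = 2 TiB.
max_bytes=$(( (1 << 32) * 512 ))
echo "$(( max_bytes / 1024 / 1024 / 1024 )) GiB"   # prints "2048 GiB"
```

Past that point the device needs a GPT label instead, e.g. `parted /dev/md0 mklabel gpt` (the device name is a placeholder, and relabeling destroys existing partition data), or, as suggested in the thread, skip partitioning and put LVM directly on the whole device.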
